
The Cambridge Handbook of Moral Psychology

The Cambridge Handbook of Moral Psychology is an essential guide to the study of moral cognition and behavior. Originating as a philosophical exploration of values and virtues, moral psychology has evolved into a robust empirical science intersecting psychology, philosophy, anthropology, sociology, and neuroscience. Contributors to this interdisciplinary handbook explore a diverse set of topics, including moral judgment and decision making, altruism and empathy, and blame and punishment. Tailored for graduate students and researchers across psychology, philosophy, anthropology, neuroscience, political science, and economics, it offers a comprehensive survey of the latest research in moral psychology, illuminating both foundational concepts and cutting-edge developments.

Bertram F. Malle (PhD, Stanford University) is Professor in the Department of Cognitive and Psychological Sciences at Brown University. He received a National Science Foundation CAREER award and several publication awards, and is Fellow of the Association for Psychological Science, the Society of Experimental Social Psychology, the Society for Personality and Social Psychology, and the Cognitive Science Society. His research focuses on social cognition, moral psychology, trust, and human–machine interaction. He is author of How the Mind Explains Behavior (2004) and coeditor of Intentions and Intentionality (2001) and Other Minds (2005).

Philip Robbins (PhD, University of Chicago) is Associate Professor and Chair of Philosophy at the University of Missouri. His research focuses on experimental philosophy, moral psychology, and philosophy of psychology. He is coeditor of The Cambridge Handbook of Situated Cognition (2009) and editor or coeditor of special issues of Consciousness and Cognition (2005) and Cognitive Systems Research (2015).

Cambridge Handbooks in Psychology

The Cambridge Handbook of Moral Psychology

Edited by

Bertram F. Malle, Brown University
Philip Robbins, University of Missouri

Shaftesbury Road, Cambridge CB2 8EA, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
103 Penang Road, #05–06/07, Visioncrest Commercial, Singapore 238467

Cambridge University Press is part of Cambridge University Press & Assessment, a department of the University of Cambridge. We share the University's mission to contribute to society through the pursuit of education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781108841597
DOI: 10.1017/9781108894357

© Cambridge University Press & Assessment 2025

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press & Assessment.

When citing this work, please include a reference to the DOI 10.1017/9781108894357

First published 2025

A catalogue record for this publication is available from the British Library
A Cataloging-in-Publication data record for this book is available from the Library of Congress

ISBN 978-1-108-84159-7 Hardback
ISBN 978-1-108-79495-4 Paperback

Cambridge University Press & Assessment has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Contents

List of Figures    page viii
List of Tables    ix
List of Contributors    x
Preface    xiii

1 Modern Moral Psychology: A Guide to the Terrain    1

Part I Building Blocks

2 Moral Character    33
3 Moral Motivation    55
4 Norms: Inference and Interventions    78
5 Moral Dilemmas    101
6 The Moral Domain: What Is Wrong, What Is Right, and How Your Mind Knows the Difference    124

Part II Thinking and Feeling

7 Moral Decision Making: The Value of Actions    153
8 Are Moral Judgments Rational?    172
9 Moral Categorization and Mind Perception    198
10 Moral Emotions: Are They Both Distinct and Good?    222
11 The Benefits and Costs of Empathy in Moral Decision Making    248

Part III Behavior

12 Prosociality    273
13 Antisocial and Moral Behavior: A Review and Synthesis    303
14 Intergroup Conflict and Dehumanization    331
15 Blame and Punishment: Two Distinct Mechanisms for Regulating Moral Behavior    354
16 Moral Communication    382

Part IV Origins, Development, and Variation

17 Grounding Moral Psychology in Evolution, Neurobiology, and Culture    409
18 Moral Babies? Evidence for Core Moral Responses in Infants and Toddlers    433
19 An Integrative Approach to Moral Development During Adolescence    462
20 Morality in Culture: The Fate of Moral Absolutes in History    492

Part V Applications and Extensions

21 Criminal Law, Intuitive Blame, and Moral Character    523
22 Moral Dimensions of Political Attitudes and Behavior    549
23 Moral and Religious Systems    575
24 Lessons from Moral Psychology for Moral Philosophy    596

Index    621

Figures

1.1 The landscape of morality and its five major territories    page 4
5.1 The components and psychological characteristics of genuine moral dilemmas    103
5.2 The distinction between trivial trade-offs and genuine moral dilemmas    104
7.1 Decision tree for umbrella case    155
7.2 Decision tree for lying case    160
11.1 Brain circuits associated with different functional components of empathy    254
13.1 The neuromoral model of antisocial behavior    318
15.1 The Path Model of Blame    365
23.1 Reported presence of moralistic supernatural punishment    585

Tables

1.1 Journals that publish the largest proportion of research on moral psychology    page 2
6.1 How each moral psychological paradigm characterizes the moral domain    126
10.1 Summary of moral emotions criteria    224
17.1 Morally relevant aspects of human existence    411
23.1 Prisoner's dilemma payoff matrix    579
24.1 Some demographic differences in moral judgment    601
24.2 Order and word framing effects on moral judgment    603
24.3 Some influences of incidental affect on moral judgment    605

Contributors

The contributors to this volume are affiliated with the following institutions: University of North Carolina at Chapel Hill, United States; National Research Council, Italy; Vassar College, United States; University of Pennsylvania, United States; American Academy of Arts & Sciences, United States; Aarhus University, Denmark; University of Chicago, United States; University of Oxford, England; Brown University, United States; University of Munich, Germany; University of British Columbia, Canada; University of Melbourne, Australia; Brigham Young University, United States; Nova Southeastern University, United States; Harvard University, United States; Princeton University, United States; Northwestern University, United States; University of Notre Dame, United States; Cornell University, United States; University of New South Wales, Australia; Trinity College Dublin, Republic of Ireland; Utrecht University, the Netherlands; University of Missouri, United States; University of California, Santa Barbara, United States; University of Surrey, United Kingdom; Duke University, United States; Leiden University, the Netherlands.

Preface

Creating a handbook on any topic is daunting; one on moral psychology especially so, given the wide range of disciplines contributing to this rapidly emerging field. Moreover, the field has strong traditions and positions that have prompted significant debate, from philosophical giants like Aristotle, Hume, and Kant to the developmental classics of Piaget, Kohlberg, and Turiel, to more recent contributions by a rapidly growing number of researchers. Moral psychology’s philosophical roots and rising prominence in the empirical sciences have motivated us to represent multiple disciplines in this handbook: not only philosophy and psychology but also neuroscience, anthropology, behavioral economics, and legal studies. Given the interdisciplinary breadth of moral psychology, we could not represent all disciplines within the scope of 24 chapters, but we try to make up for some of the omissions in the introductory chapter. Shortly before we began the project the pandemic hit, and the next two years were sad and strange and challenging. Most of our authors came through, some had to jump ship, and a few brave ones came on board later. Together we all did our best to keep up with the expanding literature, new journals and books, and world events that questioned our understanding of moral psychology. As such, the handbook reflects the latest phase of evolving insights into human morality. We are grateful to Stephen Acerra, our editor at Cambridge University Press, for supporting the project from its inception and helping us bring it to fruition. During the production process, we had excellent help from Penny Harper, who copy-edited the manuscript; Jim Diggins, who prepared the index; and Auropradeepa Rajapriyan, who managed the project. Thanks also to our contributors, and to the army of external reviewers who assisted with the editorial process. Bertram Malle is grateful to Brown University for a sabbatical in the fall of 2021 that enabled him to focus on editing and writing, to the students in his Blame and Punishment course who helped shape the eponymous handbook chapter, and to the National Science Foundation (IIP-2727702) and the Air Force Office of Scientific Research (FA9550-21-1-0359) for grant support over the past years. He is grateful for the beautiful forest that supports his mental, physical, and creative health. Most of all, he thanks his colleagues and friends for support, joy, and delightful shared meals, and his wife, Lara London, for
her unwavering love and patience throughout this project (and all the other ones). Philip Robbins is indebted to his colleagues at the University of Missouri, who have made the job of chairing the Philosophy department less taxing, and more enjoyable, than he expected it to be. Securing the time to see a big project through can be tricky, and it helps a lot to work with people who work well together. His biggest debt, though, is to his family – Valerie, Judah, and Clara – for emotional and moral support, and to Lucy, the best and brightest of border collies, for steadfast companionship.

1 Modern Moral Psychology: A Guide to the Terrain

Bertram F. Malle and Philip Robbins

The term moral psychology is commonly used in at least two different senses. In the history of philosophy, moral psychology has referred to a branch of moral philosophy that addresses conceptual and theoretical questions about the psychological basis of morality, often (but not always) from a normative perspective (Tiberius, 2015). In the empirical investigations of psychology, anthropology, sociology, and adjacent fields, moral psychology has examined the cognitive, social, and cultural mechanisms that serve moral judgment and decision making, including emotions, norms, and values, as well as biological and evolutionary contributions to the foundations of morality. Since 2010, over six thousand articles in academic journals have investigated the nature of morality from a descriptive-empirical perspective, and this is the perspective the handbook emphasizes. Our overarching goal in this volume, however, is to bring philosophical and psychological perspectives on moral psychology into closer contact while maintaining a commitment to empirical science as the foundation of evidence. Striving toward this goal, we have tried to cast a wide net of questions and approaches, but naturally we could not cover all topics, issues, and positions. We offer some guidance to omitted topics later in this introduction, which we hope will allow the reader to take first steps into those additional domains. The chapters try to strike a balance between being up to date in a fast-moving field and making salient insights that have garnered attention for an extended time. The reader may consult some of the main journals publishing on moral psychology to follow the latest research in the field. Table 1.1 lists the journals that, in a recent database search, published the largest number of articles on “moral psychology.” We see frequent contributions from journals in social and cognitive psychology but also from generalist and philosophy journals. As several of the chapters illustrate, much research in moral psychology has been informed in one way or another by the work of moral philosophers. For this reason, it may be helpful for the reader to bear in mind the theoretical perspectives on morality that tend to dominate the philosophical literature in ethics and metaethics. Four of these perspectives have been especially influential in moral psychology. First, there is act utilitarianism, the idea that right actions are those actions that have the best consequences, as measured by aggregate utility (Mill, 1998). Second, there is deontology, according to which the moral permissibility of an action is determined by whether it conforms to a set of 1

abstract rules, such as the rule that people should be treated as ends in themselves, rather than solely as means to an end (Kant, 1785/1998). The contrast between act utilitarian and deontological commitments is vividly illustrated by sacrificial dilemma cases, in which prioritizing the good of the many would violate the rights of the one (Thomson, 1985). A third option, which effectively splits the difference between act utilitarianism and deontology, is rule utilitarianism, according to which an action is morally right just in case it is required by an optimific social rule, that is, a rule that would tend to maximize aggregate utility if everyone were to follow it. Finally, there is virtue ethics, which shifts the focus from how people should behave to what sort of character traits people should cultivate (namely, the virtues). On this view, moral standards for behavior are determined by what a hypothetical virtuous person or persons would do in the context: Right actions are actions that all virtuous people would do, wrong actions are actions that no virtuous person would do, and merely permissible (i.e., neither right nor wrong) actions are actions that some virtuous people would do.

We put no constraints on authors to align themselves with a particular metaethical position. Some of the chapters could be assigned to well-known positions that have influenced the field: utilitarian (Baron, in Chapter 8; Niemi & Nichols, in Chapter 7), deontological (Andrighetto & Vriens, in Chapter 4; Malle, in Chapter 15), and virtue-based (Narvaez, in Chapter 17). Other chapters do not take any particular position but speak to topics relevant to those positions (Demaree-Cotton & Kahane, in Chapter 5; Goodwin & Landy, in Chapter 2; Robbins, in Chapter 9; Shweder et al., in Chapter 20).

Table 1.1 Journals that publish the largest proportion of research on moral psychology

10 general journals (in alphabetical order):
Cognition; Developmental Psychology; Frontiers in Psychology; Journal of Experimental Psychology: General; Journal of Experimental Social Psychology; Journal of Personality and Social Psychology; Personality and Individual Differences; Personality and Social Psychology Bulletin; PLoS ONE; Social Psychological and Personality Science

Next 10:
British Journal of Social Psychology; Emotion; European Journal of Social Psychology; Journal of Applied Psychology; Judgment and Decision Making; Philosophical Psychology; Psychological Science; Social Cognitive and Affective Neuroscience; Social Psychology; Zeitschrift für Sozialpsychologie

Applied or Domain Focus:
Ethics & Behavior; Journal of Business Ethics; Journal of Moral Education; Nursing Ethics; Psychological Trauma; Social Science & Medicine; Traumatology

Additionally: Theoretical Focus:
Ethics; Journal of Philosophy; Mind and Language; Personality and Social Psychology Review; Philosophical Review; Psychological Review; Review of Philosophy and Psychology

Note. The first three groups stem from a search for keyword moral* conducted July 31, 2023, on all Academic Search Premier databases, showing the peer-reviewed journals that published the highest number of empirical articles between 2010 and 2023. The fourth category is a listing of important outlets for theoretical work in the field.

1.1 The Landscape of Morality The broad topic of morality encompasses a variety of more specific phenomena, such as moral judgment, moral decision making, moral emotions, moral norms, and more. In this section we briefly discuss and distinguish these different phenomena that make up the landscape of morality (following Bello & Malle, 2023) and highlight which of the chapters speak to each of the phenomena. Morality exists only against the background of a community’s moral standards. Moral judgment would not be moral judgment unless it is made relative to a set of moral standards, which are typically referred to as norms and values; the same holds for what makes decision making moral, what makes communication moral, and so on. Though all are embedded in norms and values, several phenomena of morality must be distinguished, and Figure 1.1 highlights some of the more important distinctions. There is diversity within each of the phenomena: Within moral communication, for instance, we would find forgiving, justifying, and praising, and numerous emotions have been considered moral emotions (e.g., guilt, outrage, contempt, and disgust). But the boundaries between the phenomena can be drawn in meaningful ways, at least to organize the sets of questions and psychological mechanisms under investigation.

Figure 1.1 The landscape of morality and its five major territories: moral behavior (including moral decision making), moral judgments (including multiple types, such as evaluation, wrongness, and blame), moral sanctions, moral emotions, and moral communication (expanded from Bello & Malle, 2023, Figure 31.1). Source: Sun, The Cambridge Handbook of Computational Cognitive Sciences, 2023 ©, published by Cambridge University Press, reproduced with permission.

1.1.1 Moral Behavior

Moral behavior includes intentional acts (often studied under the label moral decision making) but also unintended or negligent behavior. The processes underlying an agent's moral behavior are distinct from the processes underlying an observer's moral judgments of another person's behavior. This distinction helps sharpen terms like moral sense (Marazziti et al., 2013; Wilson, 1993), which can refer to behavioral phenomena such as altruism and moral disengagement or to evaluations and judgments of other people's behavior (and sometimes one's own). Moral judgments and moral behaviors take the same norms into account and are responsive to similar kinds of information (e.g., justifying reasons, causal counterfactuals), but their underlying processes are distinct, and conclusions from one do not necessarily apply to the other.

Philosophy and psychology have long focused on moral decision making as the primary driver of moral behavior. Decision making is moral if it refers to choices between possible paths of action in light of moral norms. In principle, moral decisions are no different from other decisions (Zeelenberg et al., 2012). But because of the deep involvement of norms, moral decisions take on key properties of norms, including their substantial context sensitivity (Bartels et al., 2015) and keen responsiveness to what the community thinks and does (Bicchieri, 2006). A long tradition of psychological work has also examined moral decision making in light of moral values and principles (Kohlberg, 1981; Schwartz, 1992). Although such abstract guides can undoubtedly influence concrete moral decisions (and the justifications of those decisions), the question of whether a particular principle applies to a given problem is still guided by context-specific normative considerations. In fact, details in the setting and type of action under consideration can distinctly affect which moral principles dominate other principles (Christensen & Gomila, 2012). However, moral decisions – intentional acts by their nature – are only one part of the territory of moral behavior. Many morally significant behaviors are unintentional (such as negligence, recklessness, preventable accidents, or unintended side effects), and moral communities respond strongly to such behaviors (Laurent et al., 2016; Monroe & Malle, 2019). These responses, in turn, constitute the second major territory of the moral landscape: moral judgments.


1.1.2 Moral Judgments

When people make a moral judgment, they appraise an object in light of moral norms. These appraisals differ considerably depending on the object of appraisal – an event, behavior, or person – and the information that guides the appraisal – about an action, its reasons, caused outcomes, counterfactuals, and more (Alicke, 2000; Cushman, 2008; Malle, 2021). In the philosophical literature, the term moral judgment often refers to one of these kinds: first-person appraisals that a behavior one might perform is right or wrong (e.g., Ratoff & Roskies, Chapter 3 in this volume). In this meaning of the term, moral judgment directly underlies moral (intentional) action. We follow here the broader use of the term (more common in empirical moral psychology), in which moral judgments can refer to both first-person and third-person good–bad evaluations, norm judgments (what is prescribed or prohibited), and wrongness judgments, as well as blame judgments and character judgments (Malle, 2021).

One might distinguish these kinds of moral judgments by their position in a processing hierarchy. Very often, the flow of information processing begins with the detection of a norm violation, so norms may already be cognitively activated when the other moral judgments are formed. The simplest and fastest judgments are evaluations (Yoder & Decety, 2014), followed by wrongness judgments (Cameron et al., 2017); more complex are judgments of blame and character (Malle et al., 2014; Murray et al., 2024), which build on the simpler ones.

Another way to distinguish moral judgments is by the functions they serve when expressed in social settings. Norm judgments serve to persuade others to (not) take certain actions ("That's not allowed!"), declare applicable norms (e.g., posted rules of conduct), and teach others ("The appropriate thing to do here is . . ."). Stating a behavior's moral wrongness mainly serves to mark a behavior as a moral transgression, especially when it is seen as intentional. Blame, finally, criticizes, influences reputation, and regulates relationships (Coates & Tognazzini, 2012).

While blame has been investigated extensively, less research is available on praise. Praise and blame are by no means mirror images of one another, and scholars have documented numerous asymmetries between the two judgments (Bostyn & Roets, 2016; Guglielmo & Malle, 2019; Hindriks, 2008; Pizarro et al., 2003). Both take into account the agent's mental states and the performed behavior's relation to relevant norms. But whereas blame tries to bring an agent who violated a norm back in line with the norm, praise identifies an action that exceeds normative expectations (Monroe et al., 2018) and rewards the agent for that action, helping to build social relationships (Anderson et al., 2020).

1.1.3 Moral Sanctions

Aside from examining moral decisions and judgments, scholars have examined another class of responses to morally significant behavior: moral sanctions.


Whereas moral judgments are typically considered in the perceiver's head, moral sanctions are social acts that express a moral judgment, impose a cost on the transgressor, and regulate the transgressor's and other people's future behavior. Most prominent among sanctions is punishment, often cast as the backward-looking act of retribution (literally payback), said to fulfill a desire to hurt the transgressor (Goodwin & Gromet, 2014). However, punishment is more complex. First, punishment can be an act of affirming a norm system (Fehr & Fischbacher, 2004) and of teaching (Cushman, 2013); if done properly, it can maintain cooperation in a community (Fehr & Gächter, 2000), mainly when it is accompanied by communication about the relevant norms (Andrighetto et al., 2013). Second, many forms of punishment have emerged rather recently in cultural human history, primarily as institutional behavior regulation closely tied to the law (Cushman, 2013; Malle, Chapter 15 in this volume). By contrast, everyday moral sanctions are rarely as harsh and physical as the institutional ones; instead, they range from complaints (Drew, 1998) and acts of moral criticism (Moisuc & Brauer, 2019) to shaming or exclusion (Panagopoulou-Koutnatzi, 2014). In further contrast to institutional punishment, these informal sanctions are often negotiable and can even be taken back, and they are subject to social scrutiny to ensure they are appropriate and fair (Friedman, 2013; Malle et al., 2022).

1.1.4 Moral Emotions Emotions play at least two roles in the landscape of morality. First, many scholars have identified so-called moral emotions, such as guilt, shame, or contempt (Haidt, 2003; Prinz & Nichols, 2010; Tangney et al., 2007). Which emotions fall under this special label has long been debated, and Russell (Chapter 10, this volume) examines in detail the possible criteria one might apply to such designations. Second, scholars have considered the role of emotions as either causes, concomitants, or consequences of moral judgments and decisions (Monin et al., 2007). We see historical oscillations between opposing positions on this matter, between philosophers such as Kant and Hume as well as, more recently, between waves of empirical research, claiming moral judgments to be either primarily a matter of reason or primarily a matter of emotion. Over the past half century, early research cast moral judgment as deliberate, mature, and complex (Kohlberg, 1981). Then a revolution occurred in which morality was reframed as based primarily on evaluations, emotions, and unreasoned intuitions (Greene, 2008; Haidt, 2001; Prinz, 2006). Over the past decade, several scholars have cast doubt on previous evidence for this “unreason” picture of moral judgment (Guglielmo, 2015; Royzman et al., 2009; Sauer, 2012), provided new evidence for the significant role of cognition and reasoning (Guglielmo & Malle, 2017; Martin & Cushman, 2016; Monroe & Malle, 2019; Royzman et al., 2011) and even evidence for the possible temporal precedence of moral cognition over emotion (Cusimano et al., 2017; Yang et al., 2013). Models that ascribe primary causality to affect and emotion have been
called out as underspecified (Huebner et al., 2009; Mikhail, 2008), and perhaps in response, information processing models have aimed to offer more theoretical detail while still allowing room for the role of emotion (Malle et al., 2014; May, 2018; Sauer, 2011). Interestingly, many researchers dedicated to the study of emotion per se (whether or not involved in morality) have developed models that integrate cognitive and affective processes (e.g., Scherer, 2009). A newly emerging position is that moral emotions have pronounced social functions, including to signal norm commitments and express moral judgments (Grappi et al., 2013; Kupfer & Giner-Sorolla, 2017; Sorial, 2016).

1.1.5 Moral Communication

This brings us to the final territory of morality, the tools and practices of communicating about moral norms, violations, and their sequelae. When a norm violation occurs, people almost automatically make moral judgments in their heads, but at least some of the time they express their moral criticism (Molho et al., 2020; Przepiorka & Berger, 2016), ask transgressors to account for their actions (Semin & Manstead, 1983), and sometimes grant forgiveness even for grave atrocities (Gobodo-Madikizela, 2002). Transgressors, on their part, will often try to explain or justify their violations (Gollan & Witte, 2008; Riordan et al., 1983) and mitigate others' criticism with remorse, apologies, or compensation (Tedeschi & Reiss, 1981; Yucel & Vaish, 2021). Even though it is through communication that people typically regulate other community members' moral behavior (Andrighetto et al., 2013; Shank et al., 2019), we see overall less research in moral psychology dedicated to these social-communicative processes than to the cognitive processes that undergird them. In this handbook, therefore, one chapter (Funk & McGeer, Chapter 16) directly speaks to the important communicative sphere, and several others draw connections (Guan et al., Chapter 22; Malle, Chapter 15; Shweder et al., Chapter 20).

1.2 Guide to Additional Topics

Given the vast landscape of morality, this handbook cannot be a complete map of its territory. In this section, we provide pointers to topics that did not end up in the handbook but present exciting and valuable directions of work.

1.2.1 Moral Psychology of Artificial Agents The first topic of interest, centering on facets of “artificial morality,” has seen a rapid rise over the past 10 years. Two recent reviews in the psychological literature took stock of some of the garnered insights (Bonnefon et al., 2024; Ladak et al., 2023), and several other reviews have surveyed some of the core questions and initial answers (Bigman et al., 2019; Malle, 2016; Misselhorn,
2018; Pereira & Lopes, 2020). The range of questions is broad: how to design machines that follow norms and make moral judgments and decisions (Cervantes et al., 2020; Malle & Scheutz, 2019; Tolmeijer et al., 2021) and how humans do and will perceive such (potential) moral machines (Malle et al., 2015; Shank & DeSanti, 2018; Stuart & Kneer, 2021); legal and ethical challenges that come with robotics (Lin et al., 2011), such as challenges posed by social robots (Boada et al., 2021; Salem et al., 2015), autonomous vehicles (Bonnefon et al., 2016; Zhang et al., 2021), autonomous weapons systems (Galliott et al., 2021), and large language models (Harrer, 2023; Yan et al., 2024); deep concerns over newly developed algorithms that perpetuate sexism, racism, or ageism; and tension over the use of robots in childcare, eldercare, and health care, which is both sorely needed and highly controversial (Sharkey & Sharkey, 2010; Sio & Wynsberghe, 2015). Artificial agents raise a number of vexing philosophical questions, such as whether they could ever have consciousness or free will, whether those properties would be required to grant them rights and moral-legal standing, whether machines could ever be genuine moral agents (or moral patients), and many more. Work on artificial agents can also inform psychological theories of moral phenomena. For example, what features do artificial agents have to have in order for people to spontaneously impose norms on them, ascribe morally relevant mental states to them (e.g., justified reasons), and exchange moral emotions with them (e.g., forgiveness to reduce guilt)? Is evidence for deep psychological complexity necessary, or might mere humanlike appearance trigger fundamental moral responses? Finally, artificial agents provide opportunities to develop more precise theoretical and computational models of moral phenomena (e.g., norms, decisions), to test them first in simulations, and eventually in actual physical implementations. But as computationally more sophisticated designs begin to show sign of moral competence (Bello & Malle, 2023; Conte et al., 2013; Pereira & Lopes, 2020), is there something lost by reducing moral judgments, decisions, and emotions to long strings of code?

1.2.2 Morality, the Self, and Identity The nature of the self and identity has long been a central topic in social psychology (Leary et al., 2003; Suls, 2014; Wylie, 1979) and philosophy (Metzinger, 2004; Olson, 1999; Parfit, 1992). There is also a rich literature connecting the study of morality to the self and identity. We can divide this literature into two main strands. The first strand connects morality and the self; the second strand connects morality and identity. In social psychology, the word “self” is used to mean a variety of different things (Leary & Tangney, 2012). Two of the more common meanings are the executive self, which regulates an agent’s behavior (Baumeister & Vohs, 2003), and the evaluative self, which is comprised of the thoughts and feelings people have about themselves, especially in relation to others (Tesser, 1988). The executive self plays a key role in the production of moral behavior. Bandura
(1999) calls inhibitive moral agency the capacity to refrain from acting inhumanely toward others. This kind of behavioral self-regulation in turn depends upon the capacity for self-directed negative emotions (e.g., guilt, regret, shame), which motivate conformity to moral norms by making immoral behavior aversive to the agent (Bandura, 1999; Silver & Silver, 2021). Indeed, when self-directed negative emotions are uncoupled from immoral behavior – whether through self-serving justification of the behavior, displacement of responsibility for its consequences, or dehumanization and blaming of the victims – the results can be literally catastrophic (e.g., war, mass murder, genocide) (Bandura, 1999). The evaluative self can also be a significant determinant of moral behavior, since effective self-regulation requires the capacity for monitoring one’s own behavior and evaluating it in relation to moral standards. Such standards are an essential part of most people’s self-concept, and people are highly motivated to think of themselves as morally upright (Steele, 1988). In fact, research has shown that a moral self-concept emerges in early childhood and is a predictor of prosocial behavior (Christner et al., 2020). Reflecting on oneself as a good person can indeed lead to more good behavior (Young et al., 2012), especially when such reflections link up to abstract values; more concrete recall of past good deeds can lead to opposite effects, called moral licensing (Merritt et al., 2010), whereby people become more prone to immoral behavior after engaging in behavior that boosts their moral self-esteem. Thus, the executive self and the evaluative self do not always pull in the same moral direction. Morality is also intimately tied up with identity, in two key meanings of the term “identity.” Diachronic identity encompasses the features that ground the perceived persistence of persons over time. Multiple studies provide support for the idea that continuity of moral traits is seen as essential to the persistence of persons over time, insofar as someone who changes their moral stripes (or loses them altogether) is no longer seen as the same person (Hitlin, 2011; Prinz & Nichols, 2010; Strohminger & Nichols, 2014). Further support for this idea comes from studies showing that dramatic improvement of a person’s moral character tends to result in mitigation of blame and reduction of moral responsibility for past immoral behavior (Gomez-Lavin & Prinz, 2019) – almost as if another person had performed those behaviors. Synchronic identity encompasses the features that determine (or seem to determine) who a person is at a given time. In particular, the “true self” refers to the features that are seen as essential to a person’s synchronic identity (Strohminger et al., 2017), and they are often features related to morality (see Goodwin & Landy, Chapter 2 in this volume). For example, whether an emotionally driven action is seen as expressive of an agent’s true self depends on whether the action is morally bad or morally good (Newman et al., 2015). Further, studies suggest that moral evaluation of a person’s behavior, including blame for immoral behavior and praise for moral behavior, is sensitive to perception of whether the behavior expresses the agent’s true self (Newman et al., 2015; Robbins & Alvear, 2023).


1.2.3 Free Will and Moral Responsibility Philosophers agree on two principles: first, moral judgments apply to an action only if the agent is morally responsible for performing it; second, agents are morally responsible only for those actions that they freely choose to perform. Beyond these points, the consensus tends to break down. For example, philosophers debate what it means for an action to be freely chosen and whether human actions ever meet that condition. One major source of disagreement is the assumption of causal determinism, shared by most philosophers – the idea that every event, including every human choice, is fully determined by the causal history of the world leading up to it. According to one view, an action is freely chosen just in case the agent could have made a different choice even if every link in the causal chain of events leading up to the actual choice had been the same. On this view, known as incompatibilism, the existence of free will – and by extension, that of moral responsibility – is incompatible with causal determinism. According to an alternative view, an action is freely chosen if the choice was free of certain types of constraints, either external (e.g., coercion) or internal (e.g., compulsion). On this second view, known as compatibilism, the existence of free will – and by extension, that of moral responsibility – is compatible with causal determinism. But those are just the views of philosophers. What do ordinary people think about these matters? How do they make sense of the ideas of free will, moral responsibility, and causal determinism? Do ordinary people find incompatibilism more intuitive than compatibilism, or the other way around? These questions have animated empirical research with the goal of identifying the psychological origins of a philosophical problem as old as philosophy itself (Nichols, 2011). Regarding the commonsense concept of free will, the preponderance of evidence suggests that ordinary people do not think of free will in the metaphysically demanding way presupposed by incompatibilism (Monroe et al., 2017; Monroe & Malle, 2010; Nahmias et al., 2005; Vonasch et al., 2018). In commonsense thinking, free choice is a matter of freedom from the kinds of local constraints that make it difficult for an agent to express their values and commitments (Woolfolk et al., 2006). This ordinary concept of free will corresponds most closely to the compatibilist one, according to which the metaphysical issue of causal determinism is irrelevant to the reality of free will (Strawson, 1962). This is not altogether surprising, given that the concept of causal determinism is sufficiently esoteric that it may be difficult for people without philosophical training to understand what incompatibilists are worried about (Sommers, 2010). By contrast, empirical studies of ordinary people’s intuitions about moral responsibility have yielded mixed results. Alongside evidence of the intuitive appeal of compatibilism about moral responsibility, together with evidence for the idea that incompatibilist intuitions result from confusing causal determinism with fatalism (Nahmias et al., 2007; Nahmias et al. 2014), there is evidence
for the opposite view, as well as support for the idea that compatibilist intuitions about moral responsibility result from affective bias (Nichols & Knobe, 2007). Making sense of the diversity of findings in this area, much of it originating in work by experimental philosophers, is an ongoing project in moral psychology. The same applies to research on the effect of free will beliefs on moral behavior, some of which suggests that disbelief in free will (in the metaphysically robust sense presupposed by incompatibilism) is associated with a greater propensity for aggression and dishonesty (Vohs & Schooler, 2008), whereas other studies find no evidence for these claims (Open Science Collaboration, 2015). Likewise, there is some empirical support for the idea that disbelief in free will makes people more punitive (Krueger et al., 2013), but efforts to replicate such results have failed (Monroe et al., 2014). In fact, a recent meta-analysis of 145 experiments showed that manipulating free will beliefs has few, if any, downstream consequences (Genschow et al., 2023).

1.2.4 Other Topics Yet

The chapters included in this handbook touch on numerous other exciting strands of moral psychology that did not receive a dedicated chapter to review their full respective literatures. For example, chapters by Decety (Chapter 11) and by FeldmanHall and Vives (Chapter 12) engage with the affective and cognitive neuroscience of morality, and chapters by Narvaez (Chapter 17) and by Baird and Matthews (Chapter 19) connect to its neurobiological underpinnings. The reader may consult additional recent work that uses insights from neuroscience to analyze long-standing philosophical issues, such as free will, consciousness, and rationalism of moral judgment (Castro-Toledo et al., 2023; May, 2023). Likewise, methods and insights from behavioral economics appear in chapters by FeldmanHall and Vives (Chapter 12), Niemi and Nichols (Chapter 7), and Purzycki and Bendixen (Chapter 23), and the reader may want to explore additional work on the interplay between economic and moral behavior (Vila-Henninger, 2021) and on the moral impact of exposure to market processes (Bartling & Özdemir, 2023; Enke, 2023; Fike, 2023). The connection between behavioral economics paradigms and computational and cognitive neuroscience measures is another interesting recent direction (Fornari et al., 2023; Lengersdorff et al., 2020). Evolutionary perspectives on the origins of morality are distributed over chapters by Narvaez (Chapter 17), Shweder et al. (Chapter 20), and Malle (Chapter 15), whereby the latter two focus on cultural rather than biological perspectives. Animal behavior work arises in chapters by Decety (Chapter 11) as well as FeldmanHall and Vives (Chapter 12), and the reader may benefit from integrative perspectives on phylogenetic and cultural evolution by Boehm (2018), de Waal (2014), and Tomasello (2016), and a provocative recent proposal that links genetic heritability patterns to domains of cooperative morality (Zakharin et al., 2024).


Additional topics with less representation but no less significance include the group dynamics of morality (Ellemers et al., 2023), moral learning (Cushman et al., 2017), trust (Bach et al., 2022; Malle & Ullman, 2021; Sztompka, 2019), and morality in organizations and collectives (Blomberg & Petersson, 2024; Dhillon & Nicolò, 2023; Sattler et al., 2023).

1.3 Overview of the Chapters

We now offer brief summaries of each handbook chapter, hoping that the reader will find many of these contributions enticing for further reading.

1.3.1 Part I: Building Blocks Part I introduces some of the basic building blocks of moral psychology, topics of both core theoretical concern and major historical significance. Geoff Goodwin and Justin Landy (Chapter 2) review empirical research on moral character, which has only recently attained a prominent role in psychology, in contrast to long traditions in ethics and education. A person’s moral character comprises the dispositions to think, feel, and act morally, and these dispositions are cross-situationally and temporally fairly consistent. Against a long-standing belief in psychology that the personality disposition of warmth most strongly influences people’s impressions of one another, the evidence suggests that moral character occupies this central position. Moral character exerts its influence on impressions quite independently of other personality traits, and it features prominently in people’s representations of their own personality as well. Moral character is also a central element in a person’s perceived identity – who the person is perceived to be “deep down” (cf. our discussion of the “true self” in the Morality, the Self, and Identity subsection [Section 1.2.2]). Finally, the authors close by charting some of the features from which people infer another’s moral character, including actions but also, critically, mental states such as goals and intentions. William Ratoff and Adina Roskies (Chapter 3) tackle the question of how first-person moral judgments and moral behavior are conceptually linked. They frame their discussion in terms of a philosophical puzzle known as “Hume’s problem.” The puzzle arises from the conjunction of three ideas: Humeanism, the idea that beliefs alone do not suffice to motivate action; internalism, the idea that moral judgments are intrinsically motivating; and cognitivism, the idea that moral judgments are beliefs. These three ideas are jointly inconsistent, so at least one of them must be false. But which one? The authors focus their attention on two possible solutions to the puzzle: the externalist solution, which denies that moral judgments are intrinsically motivating (rescinding internalism), and the noncognitivist solution, which denies that moral judgments are beliefs (rescinding cognitivism). The authors review empirical research to explore whether either of the solutions is supported by evidence. On the issue
of whether moral judgments are intrinsically motivating, they argue that studies of moral cognition in psychopathy and acquired sociopathy do not settle the matter, nor do studies of folk intuitions about internalism. Likewise, studies of the influence of emotion on moral judgment do not settle the dispute between cognitivism and noncognitivism, since they do not establish that emotion is constitutive of moral judgment in the way that noncognitivism requires. Thus, an empirically compelling solution to Hume’s problem remains to be found. Giulia Andrighetto and Eva Vriens (Chapter 4) examine the foundational role of norms in moral psychology, a topic that has long garnered cross-disciplinary interest from philosophy to biology, from anthropology to computer science. The authors touch briefly on the debates over potentially different types of norm (e.g., conventional, social, moral, legal) and maintain that social and moral norms, in particular, are difficult to separate unless one adopts a specific theoretical position. The authors’ treatment centers on a core feature of most or all social and moral norms: that people, in complying or not complying with norms, are sensitive to other community members’ norm-relevant beliefs and attitudes. By recognizing this sensitivity, scientists can, first, gain a better scientific understanding of norm inference, the complex processes by which people learn which norms apply to a given setting and how strong the norms are; and second, they can better diagnose whether (and how strongly) a given norm actually exists in a community. All these insights pave the way for potential interventions on people’s beliefs about the community’s norms, which are easier to change than individual moral convictions. Joanna Demaree-Cotton and Guy Kahane (Chapter 5) introduce a frequently discussed topic in recent moral psychology: moral dilemmas. They characterize moral dilemmas as a decision-making situation that has three features: first, every available course of action has a high moral cost and therefore involves a difficult moral trade-off; second, it is morally appropriate for the agent to feel conflicted about what choice to make; and third, it is morally appropriate for the agent to feel some regret about whatever choice they made. The authors then explore different empirical accounts of why some moral trade-offs, but not others, are experienced as difficult or impossible to resolve. Among the most influential of these accounts is Greene’s (2008) dual-process theory, which traces the experience of moral dilemmas to a conflict between a value backed by intuition (“System 1”) and a value backed by reflection (“System 2”). The authors also review empirical research bearing on the psychological mechanisms underpinning a person’s resolution of moral dilemmas and the phenomenon of “moral residue” (regret or guilt over one’s resolution). They argue that further empirical work is needed to understand how people weigh competing values against one another and that such understanding requires expanding the range of moral dilemmas to include cases beyond those targeted in recent research (e.g., sacrificial dilemmas). Samantha Abrams and Kurt Gray (Chapter 6) tackle another foundational question: What constitutes the moral domain? To answer this question, they explore three approaches to modeling moral cognition, focusing on three issues:
first, what behaviors are seen as morally wrong; second, whether moral norms are universal rather than culturally variable; and third, what psychological mechanisms underlie judgments of moral wrongness. According to Turiel's model, wrong behaviors are those seen as harmful or unfair, moral norms are universal, and wrongness judgments are largely the result of conscious reasoning from abstract principles. By contrast, in Haidt's model, wrong behaviors are not just those seen as harmful or unfair, but also those seen as disloyal, disrespectful of authority, or impure; moral norms exhibit substantial cross-cultural variation; and wrongness judgments are typically the product of intuition, rather than conscious reasoning. The model favored by the authors combines elements from both of these approaches: from Turiel, the idea that perceptions of wrongness boil down to perceptions of harm; and from Haidt, the idea that moral norms are culturally variable and the idea that wrongness judgments are more a product of intuition than reasoning.

1.3.2 Part II: Thinking and Feeling

Part II focuses on the cognitive and affective processes that make up various moral phenomena: moral decision making, moral judgment, the categorization of agents and patients, and moral emotions.

Laura Niemi and Shaun Nichols (Chapter 7) introduce some core elements of moral decision making by taking expected utility theory as a starting point. In its classic form, expected utility theory focuses on the outcomes of actions: The expected utility of a decision is the sum of the values associated with the different possible outcomes of the decision weighted by the probability of their occurrence. As such, expected utility theory is well suited to explain the moral choices recommended by utilitarianism, which characterizes right actions in terms of the maximization of aggregate utility. However, to account for more complex, nonutilitarian decisions, expected utility theory must be extended to assign utilities to actions themselves. This action-based form of expected utility theory can readily accommodate the fact that people tend to assign low utility to actions that violate moral norms (even when the outcomes of those actions might have positive utility, such as when lying would lead to financial gain). The authors then apply this expanded action-based expected utility theory of moral decision making to questions regarding what actions count as fair, how the decision maker's actions take other people's outcomes into account, and how the value of actions changes when directed at one's own or another's group.
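
To make the contrast concrete, here is a brief sketch in generic decision-theoretic notation (the symbols a, o, p, U, and V are illustrative and not taken from the chapter): the classic outcome-based form values an action a only through the utilities of its possible outcomes o, whereas the action-based form adds a utility term for the action itself.

\[
\mathrm{EU}_{\mathrm{outcome}}(a) \;=\; \sum_{o} p(o \mid a)\, U(o),
\qquad
\mathrm{EU}_{\mathrm{action}}(a) \;=\; V(a) \;+\; \sum_{o} p(o \mid a)\, U(o).
\]

On the action-based form, a norm-violating act such as lying can carry a strongly negative V(a) that outweighs any positive expected outcome utility, which is how the theory accommodates people's reluctance to lie even for financial gain.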

Jonathan Baron (Chapter 8) posits utilitarianism as a standard of rational moral judgment. He does not directly defend utilitarianism as a theory but investigates cases of apparent contradiction between people's moral decisions (sometimes grounded in nonutilitarian principles) and the consequences of those decisions that they themselves would consider worse for themselves and everybody else. For example, when some people use a moral principle (e.g., bodily autonomy) to assertively make a decision (e.g., to not get vaccinated), it can have negative moral consequences for others (e.g., infecting people) and for
themselves (risking infection). Baron asks whether such contradictions in moral reasoning can provide insights into some of the determinants of such reasoning. These insights, importantly, are valuable even for those who do not adopt utilitarianism as a normative model. From over a dozen candidate moral contradictions, Baron concludes that many deviations from utilitarian considerations in moral contexts are reflections of familiar nonmoral cognitive biases (e.g., framing effects, certain concepts of causality), but some arise from adherence to strong moral rules or principles (e.g., protected or sacred values). Philip Robbins (Chapter 9) discusses the role of mind perception in the categorization of individuals as moral agents and moral patients. Moral agents are defined as individuals who can commit morally wrong actions and deserve to be held accountable for those actions; moral patients are defined as individuals who can be morally wronged and whose interests are worthy of moral consideration. It is generally agreed that the attribution of moral agency and moral patiency is linked to the attribution of mental capacities. Robbins surveys a variety of models of mind perception, some of which focus on the representation of mental capacities, some of which focus on the representation of mental traits. The dominant model of mind perception in moral psychology is the experience–agency model (Gray et al., 2007), which divides the space of mindedness into experiential capacities like sentience and self-awareness, and agentic capacities like deliberative reasoning and self-control. Reviewing the empirical literature on moral categorization, Robbins argues that neither the experience–agency model nor any of the major alternatives to it (i.e., the warmth–competence model, the agency–communion model, and the human nature–human uniqueness model), captures the full panoply of mental features to which everyday attributions of moral agency and moral patiency are sensitive. Pascale Sophie Russell (Chapter 10) asks whether, and in what ways, emotions can be designated as “moral.” Several emotions have been shown to be associated with moral judgments or moral behaviors. But more than association must be shown if we label some emotions characteristically moral. Russell guides the reader through a voluminous literature and applies two criteria to test the moral credentials of emotions. The first criterion is whether the emotion is significantly elicited by moral stimuli (e.g., transgressions); the second is whether it has significant community-benefiting consequences. This second criterion, less often used in past analyses, tries to capture the fact that moral norms, judgments, and decisions are all intended to benefit the community, so moral emotions should too. From this analysis, the author concludes that anger clearly meets the criteria, contempt and disgust less so. Guilt passes easily, and shame fares better than some may expect. Among the positive candidates, compassion and empathy both meet the criteria but are somewhat difficult to separate. Finally, elevation and awe have numerous prosocial consequences, but awe is rarely triggered by moral stimuli. Jean Decety (Chapter 11) examines the complex relation between empathy and prosocial behavior and considers findings from animal behavior,

15

16

    .       

neuroscience, and psychological studies. He begins by distinguishing three components of the broader phenomenon of empathy: emotional contagion, empathic concern, and perspective taking. He reviews evidence suggesting that emotional contagion of a conspecific’s pain often leads to helping behavior, but such contagion is modulated by group membership, levels of intimacy, and attitudes toward the other. Thus, contagion is not an automatic trigger for prosocial behavior. Empathic concern, too, is a powerful motivator of prosocial behaviors but is also socially modulated – extended to some people more than others and to individuals more than groups. Effortful perspective taking, finally, can provide a better understanding of other people’s minds but does not always generate prosocial behavior, even when it facilitates empathic concern. In sum, various forms of empathy can motivate prosocial behaviors, but empathy is fragile and often stops short of its potential when people engage with large groups, people outside of their tribe, or anonymous strangers.
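The action-based framework that Niemi and Nichols describe can be summarized in a schematic formula. The notation below is an illustrative sketch rather than the authors' own: it simply adds a utility term for the action itself to the classic outcome-weighted sum.

\[
EU(a) \;=\; U(a) \;+\; \sum_{i} p(o_i \mid a)\, U(o_i)
\]

Here $U(o_i)$ is the value of a possible outcome $o_i$ and $p(o_i \mid a)$ its probability given the action $a$; the added term $U(a)$ is the utility assigned to performing the action itself (e.g., strongly negative for lying, even when its likely outcome is a financial gain). Setting $U(a) = 0$ for every action recovers the classic, purely outcome-based form of expected utility theory.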

1.3.3 Part III: Behavior

Part III focuses on some of the central classes of behaviors that scholars of morality have puzzled over: prosocial behavior, antisocial behavior, conflict, and dehumanization. It also examines the primary moral sanctioning behaviors (blame and punishment) by which humans respond to moral violations, and it ends on the topic of moral communication.

Oriel FeldmanHall and Marc-Lluís Vives (Chapter 12) highlight that, for successful social living, humans' capacity to be prosocial had to surpass their capacity for selfish and harmful behavior. The authors provide an overview of the scientific study of prosocial capacities, with a focus on experimental research. Summarizing extensive work in laboratory paradigms of behavioral economics and social psychology, the authors document a strong human tendency toward behaving prosocially. They then briefly examine the phylogenetic and developmental origins of behaving prosocially and its different motives, such as reputational concerns and caring for others, as well as emotions that facilitate prosocial behavior, such as empathy or guilt. (See Decety's chapter [Chapter 11] on empathy for a more comprehensive treatment of empathy.) FeldmanHall and Vives also summarize insights from cognitive neuroscience on the brain networks that undergird prosocial behavior. They close with a call for more naturalistic experimental paradigms and the consideration of temporal dynamics of prosocial behavior.

Kean Poon and Adrian Raine (Chapter 13) provide the counterweight to FeldmanHall and Vives by inspecting the relationship between antisociality and morality from the dual perspectives of moral psychology and moral neuroscience. Their chapter provides a comprehensive overview of research on the moral cognition of different types of antisocial individuals, focusing on the interplay between cognition and emotion in psychopathic individuals. Based on their review of the research, the authors suggest that the capacity for moral reasoning in psychopathy is less defective than generally assumed. While the propensity of psychopathic individuals to engage in immoral behavior is due largely to affective deficits (e.g., low empathy), it also stems from dysfunction in the neural circuitry underlying moral decision making. This simple narrative, however, is complicated by the fact that there is no single explanation of the immoral behavior exhibited by the full range of antisocial individuals. For example, while dysfunction in the neural circuitry of moral decision making may account for the immoral behavior of individuals with primary psychopathy and individuals prone to proactive (i.e., instrumental) aggression, it is less apt for explaining similar behavior by individuals with secondary psychopathy and a propensity for reactive aggression.

Nick Haslam (Chapter 14) introduces dehumanization as another dark side of humanity. Humanness is a central concept in moral psychology, and whereas people normally treat other humans with moral consideration, they may come to dehumanize others as a result of moral disengagement (loosening ordinary moral inhibitions) and moral exclusion (no longer applying norms of justice, fairness, and compassion to others). Haslam reviews recent psychological accounts of dehumanization that are grounded in empirical research and highlights several common threads: Dehumanization varies from subtle to extreme (e.g., genocide), from interpersonal to intergroup, and from contexts of mere perception to contexts of severe conflict. In these theoretical accounts, dehumanizing a person or group means ascribing less of certain human attributes to the target – both attributes that distinguish humans from other animals (e.g., intellect, rationality, or civility) and attributes that distinguish humans from inanimate agents (e.g., essential capacities for emotion and warmth). Haslam's analysis meshes with that of Abrams and Gray (Chapter 6, this volume) and the discussion by Robbins (Chapter 9, this volume) of mental capacities people normally ascribe to other people – thus, dehumanization is a form of dementalizing. Within this framework, Haslam reviews the empirical literature on what forms dehumanization takes and what its possible functions are. He also considers a number of critiques and debates over these findings that have recently surfaced.

Bertram F. Malle (Chapter 15) compares the two major moral sanctioning behaviors of blame and punishment from two perspectives: their cultural history and their underlying psychology. He draws a dividing line between two phases of human evolution – before and after human settlement – and proposes that, before that watershed, moral sanctions were informal, nonhierarchical, and often mild, akin to today's acts of moral blame among intimates. Soon after settlement, hierarchies emerged, in which punishment took hold as a new form of sanctioning, typically exacted by those higher up in the hierarchy, eventually by institutions of punishment. Malle reviews the empirical evidence on the cognitive and social processes underlying each of these sanctioning tools and proposes that their distinct cultural histories are reflected in the psychological properties we can observe today. Whereas blame is, on the whole, flexible, effective, and cognitively sophisticated, punishment is often more damaging and less effective, and it can easily be abused – as in past and modern forms of institutional punishment. Compare this chapter to Janice Nadler's (Chapter 21, this volume) treatment of similarities and differences between blame in ordinary life and blame within the US legal system.

Friederike Funk and Victoria McGeer (Chapter 16) close this part of the book with a discussion of moral communication, in which the topic of punishment is also center stage. Their approach to the topic is somewhat unorthodox, insofar as the term moral communication is typically used to refer to a class of behaviors distinct from moral sanctions (see earlier discussion in this chapter of the landscape of morality, depicted in Figure 1.1). The authors argue that moral norms are distinctive in that their transgression tends to provoke a desire in members of the community to punish the transgressor, and that such punishment has a communicative function. Indeed, on their view, punishment is best understood as a nonlinguistic form of moral communication, one that expresses sharp disapproval of the transgressor's actions and attitudes. This approach to punishment has the potential to resolve conflicting results from studies of the effect of group membership on punishment, such as the fact that in-group transgressors are sometimes treated more leniently than out-group transgressors ("in-group favoritism") and sometimes more harshly (the "black sheep effect"). The solution to the puzzle, the authors argue, is that the severity of punishment depends on who the intended target of communication is and what message the punishment is intended to convey.

1.3.4 Part IV: Origins, Development, and Variation

Part IV addresses questions of variability – from the evolutionary origins of morality to its development in the earliest phases of life, all the way to cultural variability.

Darcia Narvaez (Chapter 17) discusses morality from an evolutionary-developmental, cultural, and (to a lesser extent) neurobiological perspective. The framework for her discussion is triune ethics metatheory, a main tenet of which is that healthy moral development requires the provision by the community of an "evolved nest" in which caregivers treat children with love and respect. Failure to receive this support can limit the social and emotional competence necessary for species-typical moral functioning. The natural trajectory of moral development, Narvaez suggests, tends toward an engagement-centered ethic oriented around the virtues of cooperation, compassion, and egalitarianism – the ethic characteristic of Indigenous cultures (and of our hunter-gatherer ancestors, as Malle's chapter [Chapter 15] suggests). This path of development is readily disrupted by practices of child-rearing in Western industrialized societies, which deprive children of the social and emotional resources needed for healthy moral development, thereby promoting the development of a self-protection-centered ethic oriented around competition, coldheartedness, and dominance. Thus, understanding the role of the evolved nest in scaffolding moral development is key to understanding why antisocial behavior is so pervasive in modern Western culture – and to designing interventions that might help to reduce it.

Kiley Hamlin and Francis Yuen (Chapter 18) present a large body of evidence suggesting that, within the first year of life, infants hold both expectations about and preferences for morally good versus bad protagonists. Across different methods, the authors show that infants distinguish between morally significant acts of helping and hindering as well as between acting fairly and unfairly; they prefer the morally good actions and the morally good protagonists; and they expect others to prefer the morally good protagonists as well. Going beyond a mere valence difference, these expectations vary systematically in response to critical factors, such as the victim's state of need, in-group/out-group membership, and a character's intentions. Many of the findings appear in infants 8–12 months of age, some as early as 3 months of age. Questions remain, such as how consistent the findings are across experimenters and populations; whether the violated norm is truly moral or only a social expectation; and to what extent early learning guides these expectations and preferences. But overall, the evidence for budding moral distinctions in early infancy is highly compelling and provocative.

Abigail A. Baird and Margaret M. Matthews (Chapter 19) take up the issue of moral development in adolescence, focusing on the role of individual differences in shaping the emergence of a mature moral sense. Their wide-ranging discussion touches on how differences in temperament, gender, familial and peer relationships, and lived experience influence the timing and outcome of adolescent moral development. Illustrating the role of temperament, for example, high-reactive individuals may be more prone to impulsive behavior that violates moral norms, whereas low-reactive individuals may be more likely to conform to moral norms because they are more sensitive to the threat of punishment. Showing the importance of interpersonal relationships, weak attachment to caregivers in adolescence is associated with impairments of empathy and a greater propensity for antisocial and immoral behavior (a major theme in Narvaez's chapter [Chapter 17]). Peer influence is another key predictor of both antisocial and prosocial behavior in adolescence. Further, moral development in adolescence critically depends on the maturation of capacities for empathy and self-conscious emotion (e.g., guilt, embarrassment, pride), a process that is shaped by the individual's lived experience. In closing, the authors suggest that the powerful effects of individual differences on adolescent moral development are best accounted for by models that explain the maturation of the moral sense at multiple levels of analysis and timescales.

Richard A. Shweder, Jacob R. Hickman, and Les Beldo (Chapter 20) ask how one can scientifically examine the moralities of different human groups without falling into ethnocentrism – without morally judging the practices of other groups as wrong or unacceptably different from one's own. The authors propose to accept (at least as a methodological orientation) "moral realism" – the view that all human communities share a small set of "moral truths." These truths are abstract and must be expressed in culturally and historically specific ways to be workable, and their differentiated expressions across different groups can make them seem irreconcilable. But by identifying moral absolutes, the authors suggest, scientists can make sense of the great variety of cultures and moralities and still recognize their commonalities. To illustrate their points, they discuss examples of clashing moral practices, such as between Brahman Indian and Western views of a widow's obligations, and between Native American whalers' and whaling protesters' attitudes toward whaling. Each of these groups sees its own moral position as "objective" (independent of social consensus) and "absolute" (true without need for justification), but underlying their seeming differences, the authors argue, there really might be shared moral truths. It is worth pointing out that Baron (Chapter 8, this volume), too, suggests that people may hold some absolute moral principles (protected or sacred values). But whereas the moral realist featured by Shweder and colleagues suggests that denying these intuitively and instantly grasped truths is a sign of irrationality, the utilitarian featured by Baron suggests that holding onto such truths can lead to irrationality.

1.3.5 Part V: Applications and Extensions

Part V applies some of the core concepts and theories to the domains of law, politics, and religion and closes with a discussion of how empirical work in moral psychology bears on issues in moral philosophy.

Janice Nadler (Chapter 21) examines the sanctioning doctrines within Anglo-American criminal law and explores similarities and differences between criminal blame and ordinary social blame. Nadler takes on topics of intended but incomplete transgressive conduct, the distinction between intended and unintended outcomes, as well as questions of recklessness and the role of a transgressor's character in ordinary and legal blame. Nadler shows the complexity of the legal blame process and its many parallels in ordinary blame. On the legal side, she considers both the codified principles of US criminal law and the unwritten body of less precise standards and practices that can deviate from the codified ideals. On the ordinary side of blame, Nadler highlights the importance of both causal and mental factors that people take into account for intentional and unintentional transgressions. Nadler concludes that there is a great deal of congruity between legal and ordinary blame, especially in concepts and evidence considerations, but somewhat different goals and certainly more severe outcomes on the legal side (especially when errors or biases take hold). Compare this chapter to Malle's (Chapter 15, this volume) analysis of the cultural history, social regulation, and psychological processes underlying blame and punishment.

Kate W. Guan, Gordon Heltzel, and Kristin Laurin (Chapter 22) discuss the moral dimensions of political attitudes and behavior. They argue that a person's political views – both at the level of political ideology as a whole and views on specific matters of economic and social policy – are profoundly shaped by their beliefs about right and wrong. These political views in turn drive people's political behavior, not just at the ballot box or on the campaign trail, but in the community more generally. One downside of the way in which moral convictions fuel political attitudes and behavior is that they tend to interfere with productive communication across partisan divides, fueling a kind of animosity that stifles cooperation and compromise. Divergence in people's moral convictions, then, leads inexorably to political polarization and gridlock. To address this problem, the authors discuss a number of potentially promising interventions, some of which target individuals' attitudes (e.g., promoting empathy, reducing negative stereotypes), and others that aim at improving the quality of interpersonal relationships (e.g., increasing contact, fostering dialogue across political divides).

Benjamin Grant Purzycki and Theiss Bendixen (Chapter 23) discuss the complex, multifaceted connection between morality and religion from an evolutionary perspective. After providing some much-needed conceptual ground clearing, the authors focus on accounts of the linkage between morality and religion in terms of evolved psychological mechanisms that promote cooperation and inhibit competition. One of the better known of these accounts is the supernatural punishment hypothesis. On this view, the morality–religion link is sustained by the fact that belief in an all-knowing, all-powerful god who monitors people's behavior and punishes their moral transgressions motivates people to behave less selfishly and more cooperatively. An alternative account is that participation in religious ritual is a form of costly signaling, indicating to others that the participant can be trusted to observe the moral norms of the community, including norms of cooperation. As a result, ritual activity comes to be associated with increased cooperation and decreased competition, at least within religious groups. While there is considerable support for the idea that religion can function as a recipe for kindness and a remedy for selfishness, the authors caution that the psychological mechanisms underlying this function are not yet well understood.

Paul Rehren and Walter Sinnott-Armstrong (Chapter 24) suggest some lessons from moral psychology for ethics and metaethics. They note that empirical research on a wide range of topics, including moral character, happiness and well-being, free will and moral responsibility, and moral judgment, has had a profound influence on recent philosophical theorizing about the foundations of morality. In their chapter they focus on one issue of particular importance: the reliability and trustworthiness of moral judgment. They critically assess multiple lines of argument that threaten to undermine epistemic confidence in our moral judgments, including evolutionary debunking arguments, process arguments, arguments from disagreement, and arguments from irrelevant influences. Though the jury is still out on how successful these arguments are, there is little question that they have potentially profound implications both for moral epistemology (insofar as they pose a threat to moral intuitionism) and for philosophical methodology (insofar as they cast doubt on the thought-experimental method). Perhaps the most important lesson for ethics and metaethics to be drawn from moral psychology, then, may be that future progress in moral philosophy is likely to depend on philosophers and psychologists working together, rather than in isolation from one another.

References

Alicke, M. D. (2000). Culpable control and the psychology of blame. Psychological Bulletin, 126(4), 556–574. Anderson, R. A., Crockett, M. J., & Pizarro, D. A. (2020). A theory of moral praise. Trends in Cognitive Sciences, 24(9), 694–703. Andrighetto, G., Brandts, J., Conte, R., Sabater-Mir, J., Solaz, H., & Villatoro, D. (2013). Punish and voice: Punishment enhances cooperation when combined with norm-signalling. PLoS ONE, 8(6), Article e64941. Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S. (2022). A systematic literature review of user trust in AI-enabled systems: An HCI perspective. International Journal of Human–Computer Interaction, 40(3), 1–16. Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and Social Psychology Review, 3(3), 193–209. Bartels, D. M., Bauman, C. W., Cushman, F. A., Pizarro, D. A., & McGraw, A. P. (2015). Moral judgment and decision making. In G. Keren & G. Wu (Eds.), The Wiley Blackwell handbook of judgment and decision making (pp. 478–515). John Wiley & Sons, Ltd. Bartling, B., & Özdemir, Y. (2023). The limits to moral erosion in markets: Social norms and the replacement excuse. Games and Economic Behavior, 138, 143–160. Baumeister, R. F., & Vohs, K. D. (2003). Self-regulation and the executive function of the self. In M. R. Leary & J. P. Tangney (Eds.), Handbook of self and identity (pp. 197–217). Guilford Press. Bello, P., & Malle, B. F. (2023). Computational approaches to morality. In R. Sun (Ed.), Cambridge handbook of computational cognitive sciences (pp. 1037–1063). Cambridge University Press. Bicchieri, C. (2006). The grammar of society: The nature and dynamics of social norms. Cambridge University Press. Bigman, Y. E., Waytz, A., Alterovitz, R., & Gray, K. (2019). Holding robots responsible: The elements of machine morality. Trends in Cognitive Sciences, 23(5), 365–368. Blomberg, O., & Petersson, B. (2024). Team reasoning and collective moral obligation. Social Theory and Practice, 50(3), 483–516. Boada, J. P., Maestre, B. R., & Genís, C. T. (2021). The ethical issues of social assistive robotics: A critical literature review. Technology in Society, 67, Article 101726. Boehm, C. (2018). Collective intentionality: A basic and early component of moral evolution. Philosophical Psychology, 31(5), 680–702. Bonnefon, J.-F., Rahwan, I., & Shariff, A. (2024). The moral psychology of artificial intelligence. Annual Review of Psychology, 75(1), 653–675. Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.
Bostyn, D. H., & Roets, A. (2016). The morality of action: The asymmetry between judgments of praise and blame in the action–omission effect. Journal of Experimental Social Psychology, 63, 19–25. Cameron, C. D., Payne, B. K., Sinnott-Armstrong, W., Scheffer, J. A., & Inzlicht, M. (2017). Implicit moral evaluations: A multinomial modeling approach. Cognition, 158, 224–241. Castro-Toledo, F. J., Cerezo, P., & Gómez-Bellvís, A. B. (2023). Scratching the structure of moral agency: Insights from philosophy applied to neuroscience. Frontiers in Neuroscience, 17 Article 1198001. Cervantes, J.-A., López, S., Rodríguez, L.-F., Cervantes, S., Cervantes, F., & Ramos, F. (2020). Artificial moral agents: A survey of the current status. Science and Engineering Ethics, 26(2), 501–532. Christensen, J. F., & Gomila, A. (2012). Moral dilemmas in cognitive neuroscience of moral decision-making: A principled review. Neuroscience & Biobehavioral Reviews, 36(4), 1249–1264. Christner, N., Pletti, C., & Paulus, M. (2020). Emotion understanding and the moral self-concept as motivators of prosocial behavior in middle childhood. Cognitive Development, 55 Article 100893. Coates, D. J., & Tognazzini, N. A. (Eds.). (2012). Blame: Its nature and norms. Oxford University Press. Conte, R., Andrighetto, G., & Campenni, M. (2013). Minding norms: Mechanisms and dynamics of social order in agent societies. Oxford University Press. Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, 108(2), 353–380. Cushman, F. (2013). The role of learning in punishment, prosociality, and human uniqueness. In K. Sterelny, R. Joyce, B. Calcott, & B. Fraser (Eds.), Cooperation and its evolution. (pp. 333–372). MIT Press. Cushman, F., Kumar, V., & Railton, P. (Eds.). (2017). Moral learning [Special issue]. Cognition, 167, 1–282. Cusimano, C., Thapa, S., & Malle, B. F. (2017). Judgment before emotion: People access moral evaluations faster than affective states. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. J. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 1848–1853). Cognitive Science Society. de Waal, F. B. M. (2014). Natural normativity: The “is” and “ought” of animal behavior. Behaviour, 151(2–3), 185–204. Dhillon, A., & Nicolò, A. (2023). Moral costs of corruption: A review of the literature. In K. Basu & A. Mishra (Eds.), Law and economic development: Behavioral and moral foundations of a changing world (pp. 93–129). Springer International Publishing. Drew, P. (1998). Complaints about transgressions and misconduct. Research on Language & Social Interaction, 31(3–4), 295–325. Ellemers, N., Pagliaro, S., & Nunspeet, F. van (Eds.). (2023). The Routledge international handbook of the psychology of morality. Taylor & Francis. Enke, B. (2023). Market exposure and human morality. Nature Human Behaviour, 7(1), 134–141. Fehr, E., & Fischbacher, U. (2004). Third-party punishment and social norms. Evolution and Human Behavior, 25(2), 63–87.
Fehr, E., & Gächter, S. (2000). Cooperation and punishment in public goods experiments. American Economic Review, 90(4), 980–994. Fike, R. (2023). Do disruptions to the market process corrupt our morals? Review of Austrian Economics, 36(1), 99–106. Fornari, L., Ioumpa, K., Nostro, A. D., Evans, N. J., De Angelis, L., Speer, S. P. H., Paracampo, R., Gallo, S., Spezio, M., Keysers, C., & Gazzola, V. (2023). Neuro-computational mechanisms and individual biases in action-outcome learning under moral conflict. Nature Communications, 14(1), Article 1. Friedman, M. (2013). How to blame people responsibly. Journal of Value Inquiry, 47(3), 271–284. Galliott, J., Macintosh, D., & Ohlin, J. D. (Eds.). (2021). Lethal autonomous weapons: Re-examining the law and ethics of robotic warfare. Oxford University Press. Genschow, O., Cracco, E., Schneider, J., Protzko, J., Wisniewski, D., Brass, M., & Schooler, J. W. (2023). Manipulating belief in free will and its downstream consequences: A meta-analysis. Personality and Social Psychology Review, 27(1), 52–82. Gobodo-Madikizela, P. (2002). Remorse, forgiveness, and rehumanization: Stories from South Africa. Journal of Humanistic Psychology, 42(1), 7–32. Gollan, T., & Witte, E. H. (2008). “It was right to do it, because . . .” Social Psychology, 39(3), 189–196. Gomez-Lavin, J., & Prinz, J. (2019). Parole and the moral self: Moral change mitigates responsibility. Journal of Moral Education, 48(1), 65–83. Goodwin, G. P., & Gromet, D. M. (2014). Punishment. Cognitive Science, 5(5), 561–572. Grappi, S., Romani, S., & Bagozzi, R. P. (2013). Consumer response to corporate irresponsible behavior: Moral emotions and virtues. Journal of Business Research, 66(10), 1814–1821. Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619. Greene, J. D. (2008). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol 3: The neuroscience of morality: Emotion, brain disorders, and development (pp. 35–80). MIT Press. Guglielmo, S. (2015). Moral judgment as information processing: An integrative review. Frontiers in Psychology, 6, Article 1637. Guglielmo, S., & Malle, B. F. (2017). Information-acquisition processes in moral judgments of blame. Personality and Social Psychology Bulletin, 43(7), 957–971. Guglielmo, S., & Malle, B. F. (2019). Asymmetric morality: Blame is more differentiated and more extreme than praise. PLoS ONE, 14(3), Article e0213544. Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834. Haidt, J. (2003). The moral emotions. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 852–870). Oxford University Press. Harrer, S. (2023). Attention is not all you need: The complicated case of ethically using large language models in healthcare and medicine. eBioMedicine, 90, Article 104512.
Hindriks, F. (2008). Intentional action and the praise-blame asymmetry. Philosophical Quarterly, 58(233), 630–641. Hitlin, S. (2011). Values, personal identity, and the moral self. In S. J. Schwartz, K. Luyckx, & V. L. Vignoles (Eds.), Handbook of identity theory and research (pp. 515–529). Springer. Huebner, B., Dwyer, S., & Hauser, M. (2009). The role of emotion in moral psychology. Trends in Cognitive Sciences, 13(1), 1–6. Kant, I. (1998). Groundwork of the metaphysics of morals (M. J. Gregor, Ed. & Trans.). Cambridge University Press. (Original work published 1785) Kohlberg, L. (1981). Essays on moral development. Harper & Row. Krueger, F., Hoffman, M., Walter, H., & Grafman, J. (2013). An fMRI investigation of the effects of belief in free will on third-party punishment. Social Cognitive and Affective Neuroscience, 9(8), 1143–1149. Kupfer, T. R., & Giner-Sorolla, R. (2017). Communicating moral motives: The social signaling function of disgust. Social Psychological and Personality Science, 8 (6), 632–640. Ladak, A., Loughnan, S., & Wilks, M. (2023). The moral psychology of artificial intelligence. Current Directions in Psychological Science, 33(1), 27–34. Laurent, S. M., Nuñez, N. L., & Schweitzer, K. A. (2016). Unintended, but still blameworthy: The roles of awareness, desire, and anger in negligence, restitution, and punishment. Cognition & Emotion, 30(7), 1271–1288. Leary, M. R., & Tangney, J. P. (Eds.). (2012). Handbook of self and identity (2nd ed.). Guilford Press. Leary, M. R., Tangney, J. P., & Leary, M. (Eds.). (2003). Handbook of self and identity (Paperback ed). Guilford Press. Lengersdorff, L. L., Wagner, I. C., Lockwood, P. L., & Lamm, C. (2020). When implicit prosociality trumps selfishness: The neural valuation system underpins more optimal choices when learning to avoid harm to others than to oneself. Journal of Neuroscience, 40(38), 7286–7299. Lin, P., Abney, K., & Bekey, G. A. (Eds.). (2011). Robot ethics: The ethical and social implications of robotics. MIT Press. Malle, B. F. (2016). Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology, 18(4), 243–256. Malle, B. F. (2021). Moral judgments. Annual Review of Psychology, 72, 293–318. Malle, B. F., Guglielmo, S., & Monroe, A. E. (2014). A theory of blame. Psychological Inquiry, 25(2), 147–186. Malle, B. F., Guglielmo, S., Voiklis, J., & Monroe, A. E. (2022). Cognitive blame is socially shaped. Current Directions in Psychological Science, 31(2), 169–176. Malle, B. F., & Scheutz, M. (2019). Learning how to behave: Moral competence for social robots. In O. Bendel (Ed.), Handbuch Maschinenethik [Handbook of machine ethics]. Springer. Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI’15 (pp. 117–124). ACM.
Malle, B. F., & Ullman, D. (2021). A multidimensional conception and measure of human-robot trust. In C. S. Nam & J. B. Lyons (Eds.), Trust in human-robot interaction (pp. 3–25). Academic Press. Marazziti, D., Baroni, S., Landi, P., Ceresoli, D., & Dell’Osso, L. (2013). The neurobiology of moral sense: Facts or hypotheses? Annals of General Psychiatry, 12(1), Article 6. Martin, J. W., & Cushman, F. (2016). Why we forgive what can’t be controlled. Cognition, 147, 133–143. May, J. (2018). Regard for reason in the moral mind. Oxford University Press. May, J. (2023). Moral rationalism on the brain. Mind & Language, 38(1), 237–255. Merritt, A. C., Effron, D. A., & Monin, B. (2010). Moral self-licensing: When being good frees us to be bad. Social and Personality Psychology Compass, 4(5), 344–357. Metzinger, T. (2004). Being no one: The self-model theory of subjectivity. MIT Press. Mikhail, J. (2008). Moral cognition and computational theory. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 3. The neuroscience of morality (pp. 81–92). MIT Press. Mill, J. S. (1998). Utilitarianism. Oxford University Press. Misselhorn, C. (2018). Artificial morality: Concepts, issues and challenges. Society, 55(2), 161–169. Moisuc, A., & Brauer, M. (2019). Social norms are enforced by friends: The effect of relationship closeness on bystanders’ tendency to confront perpetrators of uncivil, immoral, and discriminatory behaviors. European Journal of Social Psychology, 49(4), 824–830. Molho, C., Tybur, J. M., Van Lange, P. A. M., & Balliet, D. (2020). Direct and indirect punishment of norm violations in daily life. Nature Communications, 11, Article 34. Monin, B., Pizarro, D. A., & Beer, J. S. (2007). Deciding versus reacting: Conceptions of moral judgment and the reason-affect debate. Review of General Psychology, 11(2), 99–111. Monroe, A. E., Brady, G. L., & Malle, B. F. (2017). This isn’t the free will worth looking for: General free will beliefs do not influence moral judgments, agent-specific choice ascriptions do. Social Psychological and Personality Science, 8(2), 191–199. Monroe, A. E., Dillon, K. D., Guglielmo, S., & Baumeister, R. F. (2018). It’s not what you do, but what everyone else does: On the role of descriptive norms and subjectivism in moral judgment. Journal of Experimental Social Psychology, 77, 1–10. Monroe, A. E., Dillon, K. D., & Malle, B. F. (2014). Bringing free will down to Earth: People’s psychological concept of free will and its role in moral judgment. Consciousness and Cognition, 27, 100–108. Monroe, A. E., & Malle, B. F. (2010). From uncaused will to conscious choice: The need to study, not speculate about people’s folk concept of free will. Review of Philosophy and Psychology, 1(2), 211–224. Monroe, A. E., & Malle, B. F. (2019). People systematically update moral judgments of blame. Journal of Personality and Social Psychology, 116(2), 215–236. Murray, S., O’Neill, K., Bridges, J., Sytsma, J., & Irving, Z. (2024). Blame for Hum(e)an beings: The role of character information in judgments of blame. Social Psychological and Personality Science. https://doi.org/10.1177/19485506241233708
Nahmias, E., Coates, D. J., & Kvaran, T. (2007). Free will, moral responsibility, and mechanism: Experiments on folk intuitions. Midwest Studies in Philosophy, 31(1), 214–242. Nahmias, E., Morris, S., Nadelhoffer, T., & Turner, J. (2005). Surveying freedom: Folk intuitions about free will and moral responsibility. Philosophical Psychology, 18(5), 561–584. Nahmias, E., Shepard, J., & Reuter, S. (2014). It’s OK if ‘my brain made me do it’: People’s intuitions about free will and neuroscientific prediction. Cognition, 133(2), 502–516. Newman, G. E., De Freitas, J., & Knobe, J. (2015). Beliefs about the true self explain asymmetries based on moral judgment. Cognitive Science, 39(1), 96–125. Nichols, S. (2011). Experimental philosophy and the problem of free will. Science, 331(6023), 1401–1403. Nichols, S., & Knobe, J. (2007). Moral responsibility and determinism: The cognitive science of folk intuitions. Noûs, 41(4), 663–685. Olson, E. T. (1999). The human animal: Personal identity without psychology. Oxford University Press. Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), Article aac4716. Panagopoulou-Koutnatzi, F. (2014). The practice of naming and shaming through the publicizing of “culprit” lists. In C. M. Akrivopoulou & N. Garipidis (Eds.), Human rights and the impact of ICT in the public sphere: Participation, democracy, and political autonomy (pp. 145–155). Information Science Reference/IGI Global. Parfit, D. (1992). Reasons and persons. Clarendon Press. Pereira, L. M., & Lopes, A. B. (2020). Machine ethics: From machine morals to the machinery of morality. Springer Nature. Pizarro, D. A., Uhlmann, E., & Salovey, P. (2003). Asymmetry in judgments of moral blame and praise. Psychological Science, 14(3), 267–272. Prinz, J. (2006). The emotional basis of moral judgments. Philosophical Explorations, 9(1), 29–43. Prinz, J., & Nichols, S. (2010). Moral emotions. In J. M. Doris (Ed.), The moral psychology handbook (pp. 111–146). Oxford University Press. Przepiorka, W., & Berger, J. (2016). The sanctioning dilemma: A quasi-experiment on social norm enforcement in the train. European Sociological Review, 32(3), 439–451. Riordan, C. A., Marlin, N. A., & Kellogg, R. T. (1983). The effectiveness of accounts following transgression. Social Psychology Quarterly, 46(3), 213–219. Robbins, P., & Alvear, F. (2023). Deformative experience: Explaining the effects of adversity on moral evaluation. Social Cognition, 41(5), 415–446. Royzman, E. B., Goodwin, G. P., & Leeman, R. F. (2011). When sentimental rules collide: “Norms with feelings” in the dilemmatic context. Cognition, 121(1), 101–114. Royzman, E. B., Leeman, R. F., & Baron, J. (2009). Unsentimental ethics: Towards a content-specific account of the moral-conventional distinction. Cognition, 112(1), 159–174. Salem, M., Lakatos, G., Amirabdollahian, F., & Dautenhahn. (2015). Towards safe and trustworthy social robots: Ethical challenges and practical issues. In A. Tapus,
E. André, J.-C. Martin, F. Ferland, & M. Ammi (Eds.), Social Robotics: 7th International Conference, ICSR 2015, Proceedings (pp. 584–593). Springer International Publishing. Sattler, S., Dubljević, V., & Racine, E. (2023). Cooperative behavior in the workplace: Empirical evidence from the agent-deed-consequences model of moral judgment. Frontiers in Psychology, 13, Article 1064442. Sauer, H. (2011). Social intuitionism and the psychology of moral reasoning. Philosophy Compass, 6(10), 708–721. Sauer, H. (2012). Morally irrelevant factors: What's left of the dual process-model of moral cognition? Philosophical Psychology, 25(6), 783–811. Scherer, K. R. (2009). The dynamic architecture of emotion: Evidence for the component process model. Cognition and Emotion, 23(7), 1307–1351. Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 25, pp. 1–65). Academic Press. Semin, G. R., & Manstead, A. S. R. (1983). The accountability of conduct: A social psychological analysis. Academic Press. Shank, D. B., & DeSanti, A. (2018). Attributions of morality and mind to artificial intelligence after real-world moral violations. Computers in Human Behavior, 86, 401–411. Shank, D. B., Kashima, Y., Peters, K., Li, Y., Robins, G., & Kirley, M. (2019). Norm talk and human cooperation: Can we talk ourselves into cooperation? Journal of Personality and Social Psychology, 117(1), 99–123. Sharkey, A., & Sharkey, N. (2010). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27–40. Silver, E., & Silver, J. R. (2021). Morality and self-control: The role of binding and individualizing moral motives. Deviant Behavior, 42(3), 366–385. Sio, F. S. de, & Wynsberghe, A. van. (2016). When should we use care robots? The nature-of-activities approach. Science and Engineering Ethics, 22, 1745–1760. Sommers, T. (2010). Experimental philosophy and free will. Philosophy Compass, 5(2), 199–212. Sorial, S. (2016). Performing anger to signal injustice: The expression of anger in victim impact statements. In C. Abell & J. Smith (Eds.), The expression of emotion: Philosophical, psychological and legal perspectives (pp. 287–310). Cambridge University Press. Steele, C. M. (1988). The psychology of self-affirmation: Sustaining the integrity of the self. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 21, pp. 261–302). Academic Press. Strawson, P. F. (1962). Freedom and resentment. Proceedings of the British Academy, 48, 1–25. Strohminger, N., Knobe, J., & Newman, G. (2017). The true self: A psychological concept distinct from the self. Perspectives on Psychological Science, 12(4), 551–560. Strohminger, N., & Nichols, S. (2014). The essential moral self. Cognition, 131(1), 159–171. Stuart, M. T., & Kneer, M. (2021). Guilty artificial minds: Folk attributions of mens rea and culpability to artificially intelligent agents. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), Article 363, 1–27.
Suls, J. (Ed.). (2014). Psychological perspectives on the self: Vol. 4. The self in social perspective. Psychology Press. Sztompka, P. (2019). Trust in the moral space. In M. Sasaki (Ed.), Trust in contemporary society (Vol. 42, pp. 31–40). Koninklijke Brill NV. Tangney, J. P., Stuewig, J., & Mashek, D. J. (2007). Moral emotions and moral behavior. Annual Review of Psychology, 58, 345–372. Tedeschi, J. T., & Reiss, M. (1981). Verbal strategies as impression management. In C. Antaki (Ed.), The psychology of ordinary social behaviour (pp. 271–309). Academic Press. Tesser, A. (1988). Towards a self-evaluation maintenance model of social behavior. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 21, pp. 181–227). Academic Press. Thomson, J. J. (1985). The trolley problem. Yale Law Journal, 94, 1395–1415. Tiberius, V. (2015). Moral psychology: A contemporary introduction. Routledge/Taylor & Francis Group. Tolmeijer, S., Kneer, M., Sarasua, C., Christen, M., & Bernstein, A. (2021). Implementations in machine ethics: A survey. ACM Computing Surveys, 53(6), Article 132, 1–38. Tomasello, M. (2016). A natural history of human morality. Harvard University Press. Vila-Henninger, L. A. (2021). A dual-process model of economic behavior: Using culture and cognition, economic sociology, behavioral economics, and neuroscience to reconcile moral and self-interested economic action. Sociological Forum, 36(Suppl. 1), 1271–1296. Vohs, K. D., & Schooler, J. W. (2008). The value of believing in free will: Encouraging a belief in determinism increases cheating. Psychological Science, 19(1), 49–54. Vonasch, A. J., Baumeister, R. F., & Mele, A. R. (2018). Ordinary people think free will is a lack of constraint, not the presence of a soul. Consciousness and Cognition, 60, 133–151. Wilson, J. Q. (1993). The moral sense. American Political Science Review, 87(1), 1–11. Woolfolk, R. L., Doris, J. M., & Darley, J. M. (2006). Identification, situational constraint, and social cognition: Studies in the attribution of moral responsibility. Cognition, 100(2), 283–301. Wylie, R. C. (1979). The self-concept (Vol. 2). University of Nebraska Press. Yan, L., Sha, L., Zhao, L., Li, Y., Martinez-Maldonado, R., Chen, G., Li, X., Jin, Y., & Gašević, D. (2024). Practical and ethical challenges of large language models in education: A systematic scoping review. British Journal of Educational Technology, 55(1), 90–112. Yang, Q., Yan, L., Luo, J., Li, A., Zhang, Y., Tian, X., & Zhang, D. (2013). Temporal dynamics of disgust and morality: An event-related potential study. PLoS ONE, 8(5), Article e65094. Yoder, K. J., & Decety, J. (2014). Spatiotemporal neural dynamics of moral judgment: A high-density ERP study. Neuropsychologia, 60, 39–45. Young, L., Chakroff, A., & Tom, J. (2012). Doing good leads to more good: The reinforcing power of a moral self-concept. Review of Philosophy and Psychology, 3(3), 325–334. Yucel, M., & Vaish, A. (2021). Eliciting forgiveness. WIREs Cognitive Science, 12(6), Article e1572. Zakharin, M., Curry, O. S., Lewis, G., & Bates, T. C. (2024). Modular morals: The genetic architecture of morality as cooperation. PsyArXiv. https://doi.org/10.31234/osf.io/srjyq
Zeelenberg, M., Breugelmans, S. M., & de Hooge, I. E. (2012). Moral sentiments: A behavioral economics approach. In A. Innocenti & A. Sirigu (Eds.), Neuroscience and the economics of decision making (pp. 73–85). Routledge/Taylor & Francis Group. Zhang, Q., Wallbridge, C. D., Jones, D. M., & Morgan, P. (2021). The blame game: Double standards apply to autonomous vehicle accidents. In N. Stanton (Ed.), Advances in human aspects of transportation (pp. 308–314). Springer International Publishing.

PART I

Building Blocks

2 Moral Character

Geoffrey P. Goodwin and Justin F. Landy

Edgar McGregor is a 20-year-old climate activist who regularly posts videos on Twitter showing him cleaning up trash in the park near where he lives. As of this writing, he has just posted his 955th trash pickup announcement. He does this work alone and, apparently, for no monetary reward. His actions evoke admiration and even awe, because he appears to be acting selflessly in service of a humanitarian and ecological goal that is greater than himself. His actions are those of a person possessing a notably high degree of moral character.

A person's moral character comprises the moral dimension of their personality (Kupperman, 1991). Although no definition of morality is universally endorsed, one feasible and ecumenical conception is that morality concerns a system of informal public norms pertaining to serious matters of right and wrong, or good and bad (see, e.g., Gert & Gert, 2020). Accordingly, a moral issue is one that is captured within such a system of norms, and moral character pertains to a person's propensity to enact moral behaviors, and to think, desire, and emote in ways that are relevant to such moral norms (see also Fleeson et al., 2014).

Within late twentieth-century moral philosophy, the revival of virtue ethics as a competitor to deontology and consequentialism spurred renewed interest in moral character. The core notion encapsulated within virtue ethics is that an act's moral goodness (or rightness) should be evaluated in terms of whether it accords with what a virtuous agent would do under the circumstances, rather than in terms of its consequences or its accordance with a moral rule. Such views come in several distinct forms, which differ in their conceptions of what virtue is and how it should guide our decision making (e.g., Adams, 2006; Annas, 2011; Anscombe, 1958; Foot, 1978; Hursthouse, 1999; McDowell, 1979; Swanton, 2003; for a critical perspective, see Louden, 1984). Nonetheless, at the center of each of these views is the idea that virtue and moral character are foundational concepts in ethics. This resurgence of interest in virtue ethics has refocused attention on moral character traits within philosophy, and has spurred targeted inquiry into specific moral character traits (for a review of virtue ethics, see Hursthouse & Pettigrove, 2018).

Alongside these philosophical developments, moral character has undergone a recent revival in the field of both personality and social psychology, having earlier been eschewed as a topic of inquiry. In the course of this review, we aim to summarize the role of moral character in four distinct areas of psychology. First, we examine the question of whether moral character actually exists. Second, we review evidence demonstrating that moral character is a fundamental input to person perception and global impression formation. In this section, we will also consider limits to the hypothesis that morality is central to person perception, including whether there are clear cases in which moral people are not preferred or evaluated positively. Third, we review evidence that moral character is considered central to people's identity. Last, we examine how people draw inferences of moral character.

2.1 Does Moral Character Exist?

Although it seems commonsensical that there is such a thing as moral character, which people possess to varying degrees, there are noted skeptics of this idea (Doris, 1998, 2002; Harman, 1999). Indeed, some philosophers have argued that belief in moral character is not only empirically unsupported but also dangerous (Harman, 1999). Skepticism about broad moral character traits comprises at least two main themes (see Fleeson et al., 2015; Helzer et al., 2018). First, it is claimed that there are insufficient cross-situational correlations between purported moral character traits. This claim traces back to the seminal work of Hartshorne and May (1928), who observed that the correlation between children's honesty behaviors across any two situations was only around 0.20, which seemed too low to support the existence of broad character traits. Second, the success of situationist social psychology is taken to suggest that situational rather than personality variables overwhelmingly determine moral behavior. Milgram's (1963, 1974) obedience studies and Darley and Latané's (1968) bystander intervention study are among those held up as powerful illustrations of the role of situational forces.

These critiques were arguably exaggerated even at the time they were made (see, e.g., Miller, 2003), but recent evidence has made them even less tenable. The central question at issue is the degree of consistency present in people's moral behavior, across time and across situations. One sort of recent study shows direct evidence for cross-situational stability in specific moral character traits. Meindl et al. (2015) tracked college students for nine days using an experience sampling method and found consistent differences between people in their (self-reported) levels of honesty, compassion, and fairness. Knowing how someone behaved at one point during a given day predicted how they would behave at a later point during the day (on the same trait), with correlations ranging from 0.35 to 0.74. More importantly, when behavior was aggregated, knowing how someone behaved during one half of the study predicted how well they would behave in the other half, with correlations ranging from 0.66 to 0.97 (see also Bleidorn & Denissen, 2015, for similar evidence). Bollich et al. (2016) used a similar longitudinal design but had coders categorize snippets of speech rather than relying on self-reports. These researchers also observed significant and sizable within-person correlations for moral behaviors such as empathy, gratitude, blame, and condescension, when examining both individual behaviors and behaviors in the aggregate (rs > 0.47).

Research within personality psychology over the past 20 years also provides broad support for the existence of stable moral character traits. During this time, an alternative to the "Big Five" model of personality was proposed, which added a sixth dimension designed specifically to capture honesty and humility (Ashton & Lee, 2007; Lee & Ashton, 2004). Many specific moral character traits have also been explored in depth, including gratitude (McCullough et al., 2002; Wood et al., 2008) and humility (Davis et al., 2011). Personality psychologists have also studied antisocial aspects of personality, including the recently proposed "dark factor" of personality, which purportedly extends beyond the narrower "dark triad" of personality traits (Moshagen et al., 2018).

An additional form of evidence examines consistency between ratings of a target's moral character. Cohen et al. (2013) found moderate correlations between a target person's own ratings of their honesty-humility and guilt proneness and the ratings of that person made by a well-acquainted other. Similarly, Helzer et al. (2014) found moderate self–other agreement in ratings of moral character traits. Perhaps more important, they also observed correlations between several others' ratings of a target person's moral character – both in terms of their general moral character as well as in terms of discrete moral character traits (fairness, honesty, compassion, temperance, moral concern). Although this provides only indirect evidence, its most parsimonious explanation is that moral character is both visible to others and cross-situationally consistent.

In sum, there is accumulating evidence for the cross-situational and intertemporal stability of a variety of moral character traits, such that radical skepticism about their existence is not well supported.

2.2 Moral Character and Person Perception

With this backdrop in place, we turn now to the role of moral character in person perception and social cognition. Just as personality psychologists eschewed the study of moral character for many years, considering it too "value-laden" to constitute a respectable part of personality science (see Nicholson, 1998, for a review), so too did social cognition researchers. Instead, the field emphasized evaluations of "warmth," rather than morality, as fundamental. Warmth overlaps with morality to some degree, but there are dissociations, such that a given behavior or trait can be warm but not moral (e.g., extroverted, funny), or moral but not warm (e.g., honest, principled). In our view, this historical focus on warmth obscured the fundamental importance of moral character to the impressions we form of others. We defend this claim, beginning first with a review of research on warmth, followed by a review of more recent evidence on the fundamental role of morality in person perception.

2.2.1 Early Studies of the Warmth–Coldness Dimension

An especially influential early finding showed that adding the single terms "warm" or "cold" to a list of traits describing a person generated wide-ranging inferences about other aspects of that person (Asch, 1946). For instance, warm individuals were inferred to be more humane, wise, altruistic, and even good-looking than cold individuals (see also Brambilla et al., 2021, for a review).

Warmth had a long legacy in social cognition following Asch's seminal studies. Rosenberg et al. (1968) conducted a highly influential study in which participants were asked to sort 64 trait terms into categories that were likely to be associated within the same people. Their results suggested that the traits could be organized in a two-dimensional space, with one dimension capturing good and bad social properties (warmth) and another dimension capturing good and bad intellectual properties (competence). This study is very well known and was a major precursor of later two-dimensional models of group and individual perception, which made a similar distinction between warmth and competence, or related terms (e.g., Abele & Wojciszke, 2013; Fiske et al., 2002; Judd et al., 2005). However, much less well known to researchers is that Rosenberg did not himself strongly defend a two-dimensional approach to person perception. In a later study using the same task, but with a different set of 60 traits, Rosenberg and Olshan (1970) found that a three-dimensional solution better fit the data (hard–soft, good–bad, active–passive). Using a somewhat different task, Peabody (1984) suggested that a minimum of four dimensions were needed to capture personality perception, though the exact dimensions depended on the method of factor extraction. Nonetheless, the two-dimensional approach established a firm foothold and came to dominate the study of person perception in the ensuing years.

2.2.2 The Rise of Morality

From our perspective, however, a notable shortcoming of such models is that they pay little attention to the moral dimension of human personality. On the one hand, many traits that are central to warmth have little to do with morality (e.g., sociability, extroversion, playfulness, and so on). And on the other hand, while there are some moral traits that overlap conceptually with warmth (e.g., kindness, benevolence), there are also many highly central moral traits (e.g., trustworthiness, courage, integrity, and so on) that can be enacted without warmth. For instance, Edgar McGregor's dedication to preserving his local natural park does not require warmth; indeed, it is not even an interpersonal behavior. Moreover, the operationalization of warmth in the literature has been quite inconsistent and has sometimes neglected morality altogether. For instance, Fiske et al. (2002) operationalized warmth as warmth,
good-naturedness, tolerance, and sincerity; the former two traits have minimal moral relevance but the latter two are quite important to moral character. However, Kervyn et al. (2012) operationalized warmth only as warmth, friendliness, niceness, and sociability, and Cuddy et al. (2007) operationalized it as simply warmth and friendliness. In these latter two cases, a substantive moral element is lacking. Thus, although warmth has some conceptual connection to aspects of morality, it has been operationalized inconsistently, often with no moral content at all. An important paper by Wojciszke, Bazinska, and Jaworski (1998) refocused the field's attention on morality. The goal of this investigation was to compare the relative influence of morality and competence information on global impressions of others. Global impressions represent people's overall impressions of people on a simple, valenced, positive–negative response dimension. Using a sequence of varied and elegant study designs, these researchers established the overall dominance of morality over competence information. They first showed that when people are asked to state the traits that they "personally think are most important in others," the traits they generate tend to be related more to morality than to competence (Study 1). Next, they demonstrated that when people rated individuals they were acquainted with on both morality- and competence-related traits, the moral traits predicted their global impressions of these people appreciably better than did the competence traits (Study 3). Finally, using an experimental design, they showed that morality traits exerted a larger causal influence on global impressions than did competence traits, with an effect size more than double that for competence traits (Study 4). These findings strikingly highlight the importance of morality in person perception. The focus of our own research was to establish a further separation between warmth and morality. In light of the tendency for research in the two-dimensional tradition to blur the distinction between warmth and morality, and sometimes to ignore morality altogether, we aimed to discover whether these dimensions are truly separate evaluations and, if so, which one exerts a greater influence on person perception. Our first step, following Wojciszke, Dowhyluk, and Jaworski (1998), was to conduct a "bottom-up" norming study, in which 170 traits were rated on their usefulness for evaluating a person on various higher-order dimensions, including morality, warmth, and abilities (i.e., competence). This study then set the stage for later studies, which separated traits into four categories: those useful for judging morality but less useful for judging warmth ("pure morality traits," e.g., honest, trustworthy, principled), those useful for judging warmth but less useful for judging morality ("pure warmth traits," e.g., sociable, warm, funny), those useful for judging both dimensions ("blended traits," e.g., kind, humble, cooperative), and those not useful for judging either dimension (e.g., athletic, musical, intelligent). In one study, participants thought about individuals they were familiar with, and rated them on a variety of different traits (Goodwin et al., 2014, Study 3). The result of this study was that the pure morality traits predicted global impressions of the target individuals much better than did the pure warmth
traits. Another correlational study showed that people's valenced impressions of prominent deceased individuals described on the New York Times obituary page were better predicted by independent ratings of their morality than by ratings of their warmth (Study 7). Experimental studies further showed that when morality and warmth were manipulated orthogonally in descriptions of hypothetical targets who varied in their interpersonal closeness to the self, the overall effect of morality was consistently larger than that of warmth (Studies 4–6). Furthermore, the dominance of morality over warmth was clearest for social roles rated as most important. Complementing this research, other research shows that evaluations of morality are central to both liking and respecting others (Hartley et al., 2016). And people tend to search preferentially for morality information when forming impressions of others as well. That is, they think that learning about moral traits would be more relevant than learning about either sociability or competence traits for forming a global impression of another person (Brambilla et al., 2011). Thus, overall, the evidence from these studies strongly indicates that morality is separable from warmth and is a stronger contributor to overall person impressions (for a review, see Goodwin, 2015).

2.2.3 Morality Influences Impressions Differently Than Sociability or Competence

One question that lingered after these studies, however, was how decisively morality can be distinguished from warmth or sociability.1 Based on the studies we have described, our view was that this separation was quite pronounced. An alternative view is to try to absorb these results within a two-dimensional framework, by arguing that morality and sociability comprise separate facets of an overarching warmth dimension (see, e.g., Fiske et al., 2002, who note that "the warmth scale includes elements of both sociality [good-natured, warm, tolerant] and morality [sincere], but all are prosocial traits" [p. 889]; see also Bergsieker et al., 2012, who state that "success in navigating interpersonal interactions requires accurately inferring others' warmth (i.e., morality) and competence" [p. 1216]; and Fiske et al., 2007, who describe warmth as "the moral-social dimension" [pp. 77–78]). We therefore attempted to adjudicate between these two theoretical views.

1 In our original investigations on this topic, we contrasted morality and warmth (Goodwin et al., 2014), whereas in our later investigations we contrasted morality and sociability (Landy et al., 2016), which arguably represents a similar but slightly cleaner contrast. As we have already described, warmth incorporates some elements of morality whereas sociability does not.

We reasoned that, from a functional perspective, morality is important in global impressions because it indicates the nature of others' intentions toward us. Moral individuals are likely to be beneficial or benign, while immoral individuals are likely to intend harm. In contrast, both competence and sociability information serve as "amplifiers" of a person's intentions. If a moral person is
competent, or sociable, this is a good thing, because they will be more effective in bringing about or promoting their desired ends. This is clear for competence, which involves effective goal pursuit by definition. Indeed, Wojciszke, Bazinska, and Jaworski (1998, Study 4) had previously found that a competent, immoral actor was viewed more negatively than an incompetent, immoral actor. We theorized that this logic should also apply to sociability, in that highly sociable individuals are better able to recruit allies, persuade others, and generally drum up support for their desired ends. The natural extension of this reasoning is that, similar to competence, sociability is not desirable in immoral persons. An immoral person who is also competent or sociable is more dangerous than one who lacks these traits. Thus, whereas moral traits are valuable regardless of the other traits that a person possesses, both competence and sociability traits are only valuable conditional on the moral traits that the person possesses. We dubbed this pair of ideas the morality dominance hypothesis and the morality dependence hypothesis (Landy et al., 2016). To test these predictions, we employed a variety of study designs. One pair of studies examined global impressions by factorially manipulating a target’s morality, competence, and sociability at the level of abstract traits (Landy et al., 2016, Study 2) or descriptions of behavior (Study 3). Morality dominated these impressions, such that moral individuals were always viewed either neutrally or positively, whereas immoral individuals were always viewed very negatively. Sociability and competence information had much less influence on people’s impressions. A subsequent study examined people’s preferences for various traits in hypothetical targets (Landy et al., 2016, Study 4). Participants greatly preferred targets to be moral rather than immoral, regardless of whether the target was already known to be sociable or unsociable, competent or incompetent, thereby supporting the morality dominance hypothesis. Furthermore, when a target was known to be moral, participants preferred them also to be sociable and competent, but when targets were known to be immoral, participants preferred them to be unsociable and incompetent, thereby supporting the morality dependence hypothesis. Corroborating these results, when participants were asked to anticipate potential changes in their overall impressions, they indicated that their impression of a highly immoral target would become significantly more negative with the addition of positive information about his sociability or competence, whereas they indicated an anticipated positive shift when the target was only slightly immoral, or when he was moral (Landy et al., 2016, Study 5). Thus, in both studies, morality was valued unconditionally, whereas competence and sociability were valued conditional on a target person’s prevailing morality. In sum, morality and sociability function quite differently in how they contribute to global impressions. Morality contributes positively to impressions, regardless of a person’s other traits, whereas sociability can be negative in the presence of immorality. If it were the case that morality and sociability are both subcomponents of a superordinate warmth dimension, one would not expect that they would be thought about in such divergent ways.

Thus, these findings pose a compelling challenge to two-dimensional models of social cognition.

2.2.4 Limitations to Morality Dominance

The unconditional valuation of morality traits provides an additional means of illustrating the considerable power of morality in shaping social evaluations, consistent with much other research. Yet, it would be premature to conclude from this work that all morality traits are valued at all times, in all contexts, and by all people. Several lines of research have sought to identify circumstances in which people respond negatively to morality in others. Some research suggests that people dislike morality when others' righteous behavior seems implicitly to impugn their own morality (Monin et al., 2008). In one study, participants first completed a task that was plausibly construed as assessing racial bias (Monin et al., 2008, Study 2). The task induced White participants to choose an African American man as the culprit for a robbery (the evidence implicated him in particular), and most of them did indeed do so. These participants were then asked to make judgments of another (fictional) participant who had objected to the task from the outset and refused to participate further, claiming that the task itself reflected racial bias on the researchers' part. These "actor" participants judged this "moral rebel" negatively, ostensibly because the rebel's behavior cast their own morality into disrepute. In contrast, mere observers tended to judge the rebel quite positively. However, it is not entirely clear that the actors themselves perceived the rebel's behavior as genuinely moral and thereby worthy of admiration or emulation. Indeed, a later study (Monin et al., 2008, Study 4) showed that the actors' ratings of the morality of the rebel were quite low. One likely reason is that actors perceived the rebel's behavior as reflecting moral grandstanding and self-righteousness, rather than genuine morality. As such, they may have thought that the rebel misinterpreted the meaning of the experimental task (e.g., as expressing rather than measuring prejudice). In contrast, since the observers did not actually participate in the task, they likely did not devote as much attention or cognitive resources to discerning its various possible meanings, and so the rebel's charge of racism may have seemed more plausible to them. In sum, while intriguing, this research falls short of showing that people sometimes react negatively toward individuals that they themselves consider genuinely moral. Other recent research has shown that people sometimes dislike morality in others in a different way. Melnikoff and Bailey (2018) specifically challenged the morality dominance hypothesis proposed by Landy et al. (2016), arguing that rather than being valued unconditionally, moral traits are instead valued conditionally, depending on a person's current goals. Four studies supported this notion. In an initial study, participants playing the role of a prosecuting attorney preferred a merciless over a merciful juror and showed greater explicit and implicit liking toward the
merciless juror – thereby showing a "preference" for the less moral person. In a second study, participants envisaged themselves in an espionage-themed game, in which they and another party would spy on one another. Participants generally regarded an honest spy as a more moral person than a dishonest spy. Yet, when choosing a spy for their own side, participants preferred to have a dishonest spy and liked this spy more than an honest spy, showing a preference for the immoral person. Only participants who chose a spy for the other side preferred the honest spy (presumably because a habitually honest spy would be less effective). A third study showed that males in a committed relationship liked a fidelitous female more than an infidelitous female, whereas this difference was attenuated among males not in a committed relationship. In a fourth and final study, participants who had behaved selfishly in a dictator game and faced the possibility of reward or punishment by a third party preferred a third party who expressed no concern with fairness. In contrast, participants who had made fair or generous offers in the dictator game preferred third parties who were concerned with fairness. In sum, circumstances exist in which people seem to prefer and like immoral individuals more than moral individuals, with the reason being that the immoral individuals better suit their goals. Melnikoff and Bailey interpret this evidence as a significant challenge to the morality dominance hypothesis because it shows that the valuation of morality is fundamentally conditional rather than unconditional. This evidence constitutes an important boundary condition to the valuation of morality, and warrants an amendment to the morality dominance hypothesis (see also Landy et al., 2018). However, perhaps the most important feature of these studies is the nature of the dependent variables. Melnikoff and Bailey (2018) focus their attention on participants' preferences as well as various explicit and implicit liking measures. Most research on the importance of morality in impression formation has focused on a different variable, namely global impressions. Of particular note in this regard is that Melnikoff and Bailey collected other measures, the results of which are much more consistent with our (and others') theorizing on morality dominance. For instance, in two studies, in addition to collecting preference ratings, Melnikoff and Bailey also asked participants which of the two individuals they would prefer to be friends with. This is arguably more similar to an overall impression measure than are the context-specific preference and liking measures. In Study 1, both the prosecutor and the defense attorney participants indicated a significantly greater desire to be friends with the merciful as opposed to the unmerciful juror. This difference was attenuated among the prosecutors, but still significant, and a complete reversal from who they preferred as a juror. Similarly, in Study 2, participants indicated a significantly greater preference to be friends with the honest spy, which was not moderated by which spy they were choosing – a result which again departs from the preference ratings. Desire for friendship was not measured in Studies 3–4, but one would assume that it would have shown a similar pattern.

In essence, people sometimes do appear to have situation- and role-specific “preferences” for immoral individuals, in line with Melnikoff and Bailey’s (2018) “goal-conditional” account. Nonetheless, even in these cases, they retain more positive overall impressions (as assessed by the friendship measure) of moral individuals, in line with the morality dominance hypothesis. Indeed, these results corroborate a speculation we had earlier made about one possible limit to morality dominance, namely that morality may not be valued by individuals who consider themselves immoral (Landy et al., 2016, pp. 1288–1289). Melnikoff and Bailey’s research substantiates this idea by showing that among individuals who themselves are pursuing immoral (or at least amoral) goals, such as infidelitous relationships, unfair allocations of resources, or partisan prosecutorial goals, immorality may be valued in others who are instrumental in serving those goals. However, since most people – even prisoners incarcerated for violent crimes – consider themselves to be highly moral (Sedikides et al., 2014), circumstances in which people consider their own goals to be immoral may be fairly rare (see also Allison et al., 1989; Brown, 2012). Melnikoff and Bailey’s (2018) data also help refine the framing of the morality dominance hypothesis. We had originally written that “positive morality traits are always positive in person perception, and negative morality traits are always negative” (Landy et al., 2016, p. 1274). This framing is clearly too broad. Instead, the existing data support two more moderate claims. First, whereas the valuation of competence and sociability traits depends on the target person’s morality, moral traits are valued unconditional of the other traits that a person possesses – in what might be described as a “trait-unconditional” manner. Second, moral traits exert an almost uniformly positive impact on global (rather than context-specific) evaluations of people and do so to a larger extent than do either competence or sociability traits. Another perspective on whether moral traits are ever viewed negatively comes from other research we conducted, in which we drew a distinction between “core goodness” traits that are valued unconditionally (e.g., honesty, kindness, trustworthiness), and “value commitment” traits that are valued in a conditional way (e.g., dedication, commitment, discipline; Piazza et al., 2014; see also Slote, 1983, and Gert, 2004, 2005, for earlier statements of this idea). Like competence and sociability traits, value commitment traits amplify the badness of immoral actors (and the goodness of moral actors). For example, a “dedicated Nazi” should be seen as worse than simply a “Nazi,” notwithstanding that dedication is typically a positive trait. However, unlike the amplification effects arising from competence or sociability traits, which result solely from actors’ increased effectiveness in pursuing their goals, value commitment traits amplify actors’ valuation of their goals (in addition to their effectiveness). A “dedicated Nazi” is not just a more competent Nazi, but also a more fervent one. Several studies corroborated this idea, using evaluations of moral character, rather than global impressions, as the primary dependent variable. When value commitment traits were attached to bad actors like terrorists or Nazis, they

worsened participants’ impressions of the actors’ moral character (Piazza et al., 2014, Study 1). In contrast, the same traits improved moral impressions of positive or neutral agents (though not always significantly). Interestingly, value commitment traits such as dedication and commitment worsened evaluations of immoral agents to a greater extent than did a competence trait, intelligence (Piazza et al., 2014, Study 2). Whereas competence increases the effectiveness of a bad agent, it speaks less directly to the agent’s values. Corroborating this idea, mediation analyses showed that perceptions of value endorsement (to the relevant cause) rather than effectiveness mediated the effect of commitment traits on moral character evaluations for bad agents (Piazza et al., 2014, Studies 3 and 4). In sum, core goodness traits such as honesty, kindness, and trustworthiness tend to be valued positively, unconditional on the other traits a person possesses, whereas value commitment traits are valued conditionally, in that they make someone with generally bad character seem even worse. This research therefore shows that certain types of moral traits in others are sometimes evaluated negatively. A critique of this research was recently published by Royzman and Hagan (2017). They argue that even “core goodness” traits can be negative in some cases. On their analysis, the reason why people consider a “dedicated Nazi” worse than a “kind Nazi” is not because of a fundamental distinction between value commitment and core goodness traits, but because people interpret these traits as having differential scope. Essentially, people interpret a “dedicated Nazi” as being dedicated to Nazi causes specifically, whereas they interpret a “kind Nazi” as being kind generally, including to victims of the Nazi regime. Royzman and Hagan showed that once this difference is equated, by stipulating that someone is kind to Nazis, “core goodness” traits are valued conditionally – that is, they amplify rather than diminish the negativity of bad actors. The authors concluded that there is therefore no fundamental distinction between core goodness and value commitment traits, and that the supposed distinction is an experimental artifact. This is a perceptive analysis, but it overlooks the fact that value commitment traits tend inherently to have a narrower scope of application than do core goodness traits. When you learn that a person is “dedicated,” it makes sense to ask to what they are dedicated. A person could be dedicated to all of their own interests to a roughly equal extent, but they cannot be equally dedicated to all possible human interests, since some will conflict. This is not true to the same extent for core goodness traits. When you learn that a person is “kind,” it is not so natural to wonder to whom this person is kind. Unlike dedication, which must attach to a limited set of interests, it is entirely possible for a person to be generally kind, including toward people to whom they had no prior connection or attachment, or toward two people whose goals are bitterly opposed to one another. Consequently, generalized kindness is logically coherent whereas generalized dedication is not. A person who is equally dedicated to both Nazi and anti-Nazi causes is inconsistent or even nonsensical, whereas this is not true of a person who is kind to both Nazis and their opponents. In sum, because kindness

and other core goodness traits are more general than value commitment traits,2 the difference that Royzman and Hagan explicate is best characterized not as an experimental artifact, but as a naturally occurring difference in meaning. Overall, then, while there are some circumstances under which some kinds of moral traits can sometimes be judged negatively (depending on what kind of judgment is elicited, and which kind of trait is studied), we maintain that, when forming overall impressions of a person, people generally consider moral traits to be unambiguously positive and desirable.

2 This is just as clear for other core goodness traits such as honesty or trustworthiness, which were not studied by Royzman and Hagan (2017).

2.3 Morality and Identity

In Section 2.2, we focused our attention on the relation between moral character and global impressions. We turn now to the relation between moral character and identity. People seem to treat a person's moral character as highly central to their identity. In one study, we asked participants to indicate how central each of 80 positively valenced traits was to a person's identity. Across traits, this judgment correlated significantly with those traits' judged relevance to morality, r(78) = 0.64, significantly more strongly than it did with their relevance to warmth, r(78) = 0.46 (Goodwin et al., 2014, Study 2). Strohminger and Nichols (2014) investigated this relationship more comprehensively. They asked participants to consider various possible changes that a person might undergo and to consider how much of the original person was still present after these changes. Their focus was on comparing moral changes with other sorts of change, including changes in nonmoral personality, autobiographical memory, desires, basic cognition, somatic states, and perceptual capacities. These changes were instantiated using a variety of specific trait descriptions, and they included both positive and negative changes (e.g., a person becoming more honest or a person becoming more evil). Across all studies, changes to a person's moral traits led to the greatest perceived identity change. Similar results have also been observed among children (Heiphetz et al., 2017; Heiphetz et al., 2018). The same conclusion was also supported in a naturalistic context (Strohminger & Nichols, 2015). Family members of individuals suffering from neurodegenerative diseases reported the degree of identity change that their relative had undergone, as well as the extent to which their relationship with this person had deteriorated. Three diseases were studied: frontotemporal dementia (FTD), Alzheimer's disease (AD), and amyotrophic lateral sclerosis (ALS). Of these three, FTD is typically reported to induce the greatest moral changes in people, including increased antisocial behavior, increased dishonesty, and reduced empathy. In contrast, ALS is associated with motor degeneration and tends
to produce the fewest moral changes, with AD somewhere in the middle in terms of moral change. Consistent with the idea that moral character is fundamental to identity, family members rated FTD sufferers as having undergone the largest changes in identity, closely followed by AD sufferers, and then ALS sufferers, though ratings of the overall impact on daily functioning did not differ across the three diseases. Moral changes were also most strongly associated with family members’ reports of relationship deterioration, with this link being mediated by perceived identity change. Thus, several lines of evidence support the notion that morality is fundamental to identity, but it is not yet known why this is. In our view, the most probable explanation is that morality is important to perceptions of identity for the same functional reason that it is important in impression formation: Knowing a person’s moral traits is essential to predicting how harmful or helpful they are likely to be (Goodwin et al., 2014; Landy et al., 2016). It is therefore wise to pay special attention to moral characteristics in others. This in turn may lead to the sense that morality is essential to personal identity (Strohminger & Nichols, 2014). This is not the only possible explanation, however. For instance, Strohminger and Nichols (2014) hypothesize that morality may also be important to perceptions of identity because it is a uniquely human attribute (unlike, say, memory; see also Haslam, 2006). To our knowledge, there has not yet been targeted investigation of the underlying explanatory basis for the link between morality and identity.

2.4 Inferences of Moral Character

Thus far, this review has primarily focused on how people process and use moral character information once they are in possession of it. To close, we consider the question: How do people infer moral character in the first place? One obvious answer is that people infer character from others' behavior. Indeed, perhaps because it is so obvious, no study that we know of has focused solely on establishing this point. Nonetheless, ample evidence exists to support it. For instance, in a study of impression updating, Reeder and Coovert (1986) showed that people readily draw global moral character inferences from single instances of moral or immoral behavior. Reeder and Spores (1983) made a similar demonstration in a study on the effect of situational demands on moral character inferences. Many other studies make a similar point, but we do not review them further here given how uncontroversial this point is. The two studies also established an effect of valence, such that immoral behaviors tend to promote stronger inferences about moral character than do moral behaviors. For instance, Reeder and Spores (1983) showed that people were more inclined to take situational constraints into account when inferring positive moral character from moral behaviors than when inferring negative moral character from immoral behaviors. Reeder and Coovert (1986) further showed that initially negative moral impressions are updated less by the
addition of new positive information than initially positive impressions are updated by the addition of new negative information. In a similar vein, other studies have shown that negative moral information is more influential on overall impressions than is positive moral information (Riskey & Birnbaum, 1974; Skowronski & Carlston, 1989). These effects likely do not reflect general negativity dominance (e.g., Baumeister et al., 2001; Rozin & Royzman, 2001), as there tends to be a positivity bias in the ability domain (Skowronski & Carlston, 1989). Instead, they likely reflect trait diagnosticity – moral traits and behaviors are generally expected in others, and so negative information is more informative, whereas the reverse is true in the ability domain (Reeder & Brewer, 1979; Skowronski & Carlston, 1989). More recent research has contested the accepted view of negativity dominance in the moral domain, arguing that impressions of negative moral character are inherently more uncertain and therefore more labile than impressions of positive moral character (Siegel et al., 2018). The difference may in part be methodological – whereas earlier studies provided descriptions of real-life behaviors that tended to be rather extreme and rare, Siegel et al.'s (2018) studies provided real-time evidence of more moderate behaviors enacted in a laboratory context, specifically, the decision to inflict mild electrical shocks on another person for money (see Crockett et al., 2021). People also draw inferences of moral character from other sources, chiefly a person's mental states. Just as judgments of blame hinge on intentionality (Malle et al., 2014), so too do judgments of moral character. For instance, people judge others based simply on their intention to commit various actions, even when those intentions are thwarted or not acted upon (Hirozawa et al., 2020; for related evidence on the role of intentions, see Martin & Cushman, 2015 and Martin et al., 2022, who examine "partner choice" rather than character). This is especially true for immoral rather than moral intentions (Hirozawa et al., 2020). Similarly, immoral desires, which are necessary though not sufficient components of intentions (Malle & Knobe, 1997), are taken to be indicative of poor character, at least among American Protestants (Cohen & Rozin, 2001). Character judgments are also heavily influenced by a person's reasons for acting, such that the same act performed for different reasons can lead to very different impressions of its author. For instance, an act of aggression that is performed for calculated, self-beneficial reasons leads to more negative person inferences than the very same act if it is performed reactively in response to provocation (Reeder et al., 2002). Here too, the influence of a person's reasons for acting on character judgments parallels the effect of reasons on judgments of blame (Malle et al., 2014). Other studies demonstrate the role of deliberative processes rather than reasons per se. For instance, Critcher et al. (2013) showed that the time a person spends processing a moral decision can influence judgments of their moral character. A person who quickly rejects an opportunity to do something immoral is evaluated more positively than a person who takes their time to
arrive at the same decision. Similarly, a person who quickly takes an opportunity to do something moral is evaluated more positively than a person who takes their time to make the same decision. The preceding studies concern mental features that typically, though not inevitably, precipitate actions – intentions, desires, reasons, and deliberations. But even mental states that occur after an action is performed can influence judgments of moral character. Gromet et al. (2016) showed that an actor who feels pleasure or indifference following an immoral act they have performed is judged to have a more negative moral character, and to be more evil, than a person who is upset following the act or whose emotional reaction was not described. Taken as a whole, these studies on the role of mental states support the view that moral character is inferred from information about a person's "moral cognitive machinery" (Critcher et al., 2020; Helzer & Critcher, 2018). Several other peripheral variables also influence judgments of moral character. For instance, people infer good moral character (specifically, trustworthiness) based on facial structure (Willis & Todorov, 2006), facial mimicry (Bocian et al., 2018), whether a person has endured incidental suffering (Schaumberg & Mullen, 2017), and whether a person makes choices that prioritize close others at the expense of the greater good (Hughes, 2017). Thus, while the role of actions and mental states suggests that, by and large, moral character judgments have a rational basis, other research suggests that moral character inferences may be tainted by normatively irrelevant factors. Finally, research also addresses which particular moral traits contribute most strongly to moral character judgments. Evidence points to trustworthiness as being particularly central. Trustworthiness was rated by US students as the most important characteristic for an ideal person to possess (Cottrell et al., 2007), and as the most important trait in a close friend or work partner by German students (Abele & Brack, 2013). Honesty and trustworthiness were also rated as the two most prototypic traits in a person with "good character" (Lapsley & Lasky, 2001). Compassion is also seen as quite central; several traits related to compassion were rated just below honesty and trustworthiness in prototypicality (Lapsley & Lasky, 2001). A similar emphasis on trustworthiness and, to a somewhat lesser extent, compassion, emerges from other studies that have called for participants to rate the prototypicality or necessity of various character traits for being a moral person (e.g., Aquino & Reed, 2002; Walker & Hennig, 2004; Walker & Pitts, 1998; see Landy & Uhlmann, 2018, for a review). Beyond these "core" traits, other research suggests that loyalty (Walker & Hennig, 2004) and fairness (Lapsley & Lasky, 2001) are also considered important. Similarly, traits like being hardworking (Amos et al., 2019; Celniker et al., 2022) and self-controlled (Berman & Small, 2018; Mooijman et al., 2018) also positively influence judgments of a person's character, as do traits such as bravery (Piazza et al., 2014). Thus, we know that judgments of character are multifaceted, with many traits contributing to them. However, as of now, little is known about how people integrate information about multiple relevant traits
to arrive at a holistic judgment of a person’s moral character. Investigating this question is an important direction for future work.

2.5 Conclusion

We have reviewed research on moral character from several different disciplines. Research on this topic has accumulated rapidly over the past 10 years, and we now have a solid basis on which to draw the following conclusions.

Moral character exists, despite earlier skepticism. While moral behavior is not entirely consistent from one situation to another, there is enough cross-situational consistency that one's moral character can be reliably measured, and detected by others.

Moral character is a uniquely important aspect of impression formation and person perception. People weigh moral character information more heavily than they do either competence or sociability information, which appears to be because of the uniquely important functional information that another person's moral character provides about their likely behavior toward the self. Moreover, moral character information is evaluated differently from either competence or sociability information – moral character information is valued independently of the presence of other trait information, whereas sociability and competence information is evaluated conditional on a person's morality. This does not mean that moral character information is preferred by all people under all circumstances. Indeed, recent challenges have helped refine our understanding of when and by whom moral character information might not be valued. Even in light of these challenges, however, it still seems accurate to say that moral character information is dominant in person perception.

Moral character information is also of particular relevance to judgments of personal identity. As both hypothetical and real-world data show, when a person's morality changes, they are more likely to be seen as a "different person" than when they change in other ways. The postulated reason for the prominence of moral character in identity judgments parallels that for impression formation – moral character information provides uniquely functional information for navigating the social world.

Evaluations of moral character are multifaceted, and respond to numerous kinds of information about a person. Integrating some sources of information (e.g., morally relevant behaviors and the mental states that precipitate or follow them) seems rational and normatively defensible, whereas integrating other sources (e.g., facial structure, incidental suffering) seems to reflect bias.

These conclusions are all well established by research, but they do not represent a complete picture of the role of moral character in human psychology. For instance, other research that we have not reviewed here indicates that moral character information can also play an important role in moral judgments of transgressions. This research suggests that people are not solely
focused on evaluating transgressions in isolation from their wider context. They instead appear to use information about the transgression, its eliciting circumstances, and the mental states lying behind it, to construct a mental model of the person who committed it (see, e.g., Uhlmann et al., 2015, for a review). Recent research also indicates that despite its importance in social cognition, people generally do not possess strong desires to improve their own moral character (Sun & Goodwin, 2020). There are surely many other ways that our understanding of moral character will deepen with further research. Moral character has always been present in the world, as the case of Edgar McGregor reminds us. But, for a long time, it has been curiously absent from psychological theorizing about personality and social cognition. Pizarro and Tannenbaum’s (2011) influential chapter was titled “Bringing Character Back,” and it served as a call to researchers to devote more attention to the role of moral character in social and moral evaluation. Eleven years later, their call has been answered: Character is back.

References Abele, A. E., & Brack, S. (2013). Preference for other persons’ traits is dependent on the kind of social relationship. Social Psychology, 44(2), 84–94. Abele, A. E., & Wojciszke, B. (2013). The Big Two in social judgment and behavior. Social Psychology, 44(2), 61–62. Adams, R. M. (2006). A theory of virtue. Oxford University Press. Allison, S. T., Messick, D. M., & Goethals, G. R. (1989). On being better but not smarter than others: The Muhammad Ali effect. Social Cognition, 7(3), 275–296. Amos, C., Zhang, L., & Read, D. (2019). Hardworking as a heuristic for moral character: Why we attribute moral values to those who work hard and its implications. Journal of Business Ethics, 158(4), 1047–1062. Annas, J. (2011). Intelligent virtue. Oxford University Press. Anscombe, G. E. M. (1958). Modern moral philosophy. Philosophy, 33(124), 1–19. Aquino, K., & Reed, A. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83(6), 1423–1440. Asch, S. (1946). Forming impressions of personality. Journal of Abnormal and Social Psychology, 41(3), 1230–1240. Ashton, M. C., & Lee, K. (2007). Empirical, theoretical, and practical advantages of the HEXACO model of personality structure. Personality and Social Psychology Review, 11(2), 150–166. Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370. Bergsieker, H. B., Leslie, L. M., Constantine, V. S., & Fiske, S. T. (2012). Stereotyping by omission: Eliminate the negative, accentuate the positive. Journal of Personality and Social Psychology, 102(6), 1214–1238. Berman, J. Z., & Small, D. A. (2018). Discipline and desire: On the relative importance of willpower and purity in signaling virtue. Journal of Experimental Social Psychology, 76, 220–230.

Bleidorn, W., & Denissen, J. J. A. (2015). Virtues in action – The new look of character traits. British Journal of Psychology, 106(4), 700–723. Bocian, K., Baryla, W., Kulesza, W. M., Schnall, S., & Wojciszke, B. (2018). The mere liking effect: Attitudinal influences on attributions of moral character. Journal of Experimental Social Psychology, 79, 9–20. Bollich, K. L., Doris, J. M., Vazire, S., Raison, C. L., Jackson, J. J., & Mehl, M. R. (2016). Eavesdropping on character: Assessing everyday moral behaviors. Journal of Research in Personality, 61, 15–21. Brambilla, M., Rusconi, P., Sacchi, S., & Cherubini, P. (2011). Looking for honesty: The primary role of morality (vs. sociability and competence) in information gathering. European Journal of Social Psychology, 41(2), 135–143. Brambilla, M., Sacchi, S., Rusconi, P., & Goodwin, G. P. (2021). The primacy of morality in impression development: Theory, research, and future directions. In B. Gawronski (Ed.), Advances in experimental social psychology (Vol. 64, pp. 187–262). Academic Press. Brown, J. D. (2012). Understanding the better than average effect: Motives (still) matter. Personality and Social Psychology Bulletin, 38(2), 209–219. Celniker, J., Gregory, A., Koo, H., Piff, P. K., Ditto, P. H., & Shariff, A. (2022). The moralization of unproductive effort. Journal of Experimental Psychology: General, 152(1), 60–79. Cohen, A. B., & Rozin, P. (2001). Religion and the morality of mentality. Journal of Personality and Social Psychology, 81(4), 697–710. Cohen, T. R., Panter, A. T., Turan, N., Morse, L., & Kim, Y. (2013). Agreement and similarity in self-other perceptions of moral character. Journal of Research in Personality, 47(6), 816–830. Cottrell, C. A., Neuberg, S. L., & Li, N. P. (2007). What do people desire in others? A sociofunctional perspective on the importance of different valued characteristics. Journal of Personality and Social Psychology, 92(2), 208–231. Critcher, C. R., Helzer, E. G., & Tannenbaum, D. (2020). Moral character evaluation: Testing another’s moral-cognitive machinery. Journal of Experimental Social Psychology, 87, Article 103906. Critcher, C. R., Inbar, Y., & Pizarro, D. A. (2013). How quick decisions illuminate moral character. Social Psychological and Personality Science, 4(3), 308–315. Crockett, M. J., Everett, J. A. C., Gill, M., & Siegel, J. Z. (2021). The relational logic of moral inference. In B. Gawronski (Ed.), Advances in experimental social psychology (Vol. 64, pp. 1–64). Academic Press. Cuddy, A. J. C., Fiske, S. T., & Glick, P. (2007). The BIAS map: Behaviors from intergroup affect and stereotypes. Journal of Personality and Social Psychology, 92(4), 631–648. Darley, J. M., & Latane, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8(4), 377–383. Davis, D. E., Hook, H. N., Worthington Jr. E. L., Van Tongeren, D. R., Gartner, A. L., Jennings II, D. J., & Emmons, R. A. (2011). Relational humility: Conceptualizing and measuring humility as a personality judgment. Journal of Personality Assessment, 93(3), 225–234. Doris, J. M. (1998). Persons, situations, and virtue ethics. Noûs, 32(4), 504–530. Doris, J. M. (2002). Lack of character: Personality and moral behavior. Cambridge University Press.

Fiske, S. T., Cuddy, A. J. C., & Glick, P. (2007). Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sciences, 11(2), 77–83. Fiske, S. T., Cuddy, A. J. C., Glick, P., & Xu, J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82(6), 878–902. Fleeson, W., Furr, R. M., Jayawickreme, E., Helzer, E. G., Hartley, A. G., & Meidl, P. (2015). Personality science and the foundations of character. In C. B. Miller, R. M. Furr, A. Knobel, & W. Fleeson (Eds.), Character: New directions from philosophy, psychology, and theology (pp. 41–71). Oxford University Press. Fleeson, W., Furr, R. M., Jayawickreme, E., Meindl, P., & Helzer, E. G. (2014). Character: The prospects for a personality-based perspective on morality. Social and Personality Psychology Compass, 8(4), 178–191. Foot, P. (1978). Virtues and vices. University of California Press. Gert, B. (2004). Common morality: Deciding what to do. Oxford University Press. Gert, B. (2005). Morality: Its nature and justification (revised ed.). Oxford University Press. Gert, B., & Gert, J. (2020). The definition of morality. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/fall2020/entries/ morality-definition/ Goodwin, G. P. (2015). Moral character in person perception. Current Directions in Psychological Science, 24(1), 38–44. Goodwin, G. P., Piazza, J., & Rozin, P. (2014). Moral character predominates in person perception and evaluation. Journal of Personality and Social Psychology, 106(1), 148–168. Gromet, D. M., Goodwin, G. P., & Goodman, R. A. (2016). Pleasure from another’s pain: The influence of a target’s hedonic states on attributions of immorality and evil. Personality and Social Psychology Bulletin, 42(8), 1077–1091. Harman, G. (1999). Moral philosophy meets social psychology: Virtue ethics and the fundamental attribution error. Proceedings of the Aristotelian Society, 99, 315–331. Hartley, A. G., Furr, R. M., Helzer, E. G., Jayawickreme, E., Velasquez, K. R., & Fleeson, W. (2016). Morality’s centrality to liking, respecting, and understanding others. Social Psychological and Personality Science, 7(7), 648–657. Hartshorne, H., & May, M. A. (1928). Studies in the nature of character: Vol. 1. Studies in deceit. Macmillan. Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10(3), 252–264. Heiphetz, L., Strohminger, N., Gelman, S., & Young, L. (2018). Who am I? The role of moral beliefs in children’s and adults’ understanding of identity. Journal of Experimental Social Psychology, 78, 210–219. Heiphetz, L., Strohminger, N., & Young, L. (2017). The role of moral beliefs, memories, and preferences in representations of identity. Cognitive Science, 41(3), 744–767. Helzer, E. G., & Critcher, C. R. (2018). What do we evaluate when we evaluate moral character? In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 99–107). Guilford Press.

Helzer, E. G., Furr, R. M., Hawkins, A., Barranti, M., Blackie, L. E., & Fleeson, W. (2014). Agreement on the perception of moral character. Personality and Social Psychology Bulletin, 40(12), 1698–1710. Helzer, E. G., Jayawickreme, E., & Furr, R. M. (2018). Moral character: Current insights and future directions. In V. Zeigler-Still & T. K. Shackelford (Eds.), The SAGE handbook of personality and individual differences: Vol. 2. Origins of individual and personality differences (pp. 278–300). SAGE Publications. Hirozawa, P. Y., Karasawa, M., & Matsuo, A. (2020). Intention matters to make you (im)moral: Positive-negative asymmetry in moral character evaluations. Journal of Social Psychology, 160(4), 401–415. Hughes, J. S. (2017). In a moral dilemma, choose the one you love: Impartial actors are seen as less moral than partial ones. British Journal of Social Psychology, 56(3), 561–577. Hursthouse, R. (1999). On virtue ethics. Oxford University Press. Hursthouse, R., & Pettigrove, G. (2018). Virtue ethics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/ win2018/entries/ethics-virtue/ Judd, C., James-Hawkins, L., Yzerbyt, V., & Kashima, Y. (2005). Fundamental dimensions of social judgment: Understanding the relations between judgments of competence and warmth. Journal of Personality and Social Psychology, 89(6), 899–913. Kervyn, N., Bergsieker, H. B., & Fiske, S. T. (2012). The innuendo effect: Hearing the positive but inferring the negative. Journal of Experimental Social Psychology, 48(1), 77–85. Kupperman, J. J. (1991). Character. Oxford University Press. Landy, J., Piazza, J., & Goodwin, G. P. (2016). When it’s bad to be friendly and smart: The desirability of sociability and competence depends on morality. Personality and Social Psychology Bulletin, 42(9), 1272–1290. Landy, J. F., Piazza, J., & Goodwin, G. P. (2018). Morality traits still dominate in forming impressions of others. Proceedings of the National Academy of Sciences, 115(25), Article E5636. Landy, J. F., & Uhlmann, E. L. (2018). Morality is personal. In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 121–132). Guilford Press. Lapsley, D. K., & Lasky, B. (2001). Prototypic moral character. Identity, 1(4), 345–363. Lee, K., & Ashton, M. C. (2004). Psychometric properties of the HEXACO personality inventory. Multivariate Behavioral Research, 39(2), 329–358. Louden, R. B. (1984). On some vices of virtue ethics. American Philosophical Quarterly, 21(3), 227–236. Malle, B. F., Guglielmo, S., & Monroe, A. E. (2014). A theory of blame. Psychological Inquiry, 25(2), 147–186. Malle, B. F., & Knobe, J. (1997). The folk concept of intentionality. Journal of Experimental Social Psychology, 33(2), 101–121. Martin, J. W., & Cushman, F. (2015). To punish or to leave: Distinct cognitive processes underlie partner control and partner choice behaviors. PLoS ONE, 10(4), Article e0125193. Martin, J. W., Leddy, K., Young, L., & McAuliffe, K. (2022). An earlier role for intent in children’s partner choice versus punishment. Journal of Experimental Psychology: General, 151(3), 597–612.

McCullough, M. E., Emmons, R. A., & Tsang, J.-A. (2002). The grateful disposition: A conceptual and empirical topography. Journal of Personality and Social Psychology, 82(1), 112–127. McDowell, J. (1979). Virtue and reason. The Monist, 62(3), 331–350. Meindl, P., Jayawickreme, E., Furr, R. M., & Fleeson, W. (2015). A foundation beam for studying morality from a personological point of view: Are individual differences in moral behaviors and thoughts consistent? Journal of Research in Personality, 59, 81–92. Melnikoff, D. E., & Bailey, A. H. (2018). Preferences for moral vs. immoral traits are conditional. Proceedings of the National Academy of Sciences, 115(4), E592–E600. Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67(4), 371–378. Milgram, S. (1974). Obedience to authority: An experimental view. Harper & Row. Miller, C. (2003). Social psychology and virtue ethics. Journal of Ethics, 7(4), 365–392. Mooijman, M., Meindl, P., Oyserman, D., Monterosso, J., Dehghani, M., Doris, J. M., & Graham, J. (2018). Resisting temptation for the good of the group: Binding moral values and the moralization of self-control. Journal of Personality and Social Psychology, 115(3), 585–599. Monin, B., Sawyer, P. J., & Marquez, M. J. (2008). The rejection of moral rebels: Resenting those who do the right thing. Journal of Personality and Social Psychology, 95(1), 76–93. Moshagen, M., Hilbig, B. E., & Zettler, I. (2018). The dark core of personality. Psychological Review, 125(5), 656–688. Nicholson, I. (1998). Gordon Allport, character, and the “culture of personality,” 1897–1937. History of Psychology, 1(1), 52–68. Peabody, D. (1984). Personality dimensions through trait inferences. Journal of Personality and Social Psychology, 46(2), 384–403. Piazza, J., Goodwin, G. P., & Rozin, P, & Royzman, E. B. (2014). When a virtue is not a virtue: Conditional virtues in moral evaluation. Social Cognition, 32(6), 528–558. Pizarro, D. A., & Tannebaum, D. (2011). Bringing character back: How the motivation to evaluate character influences judgments of moral blame. In M. Mikulincer & P. R. Shaver (Eds.), The social psychology of morality: Exploring the causes of good and evil (pp. 91–108). American Psychological Association. Reeder, G. D., & Brewer, M. B. (1979). A schematic model of dispositional attribution in interpersonal perception. Psychological Review, 86(1), 61–79. Reeder, G. D., & Coovert, M. D. (1986). Revising an impression of morality. Social Cognition, 4(1), 1–17. Reeder, G. D., Kumar, S., Hesson-McInnis, M. S., & Trafimow, D. (2002). Inferences about the morality of an aggressor: The role of perceived motive. Journal of Personality and Social Psychology, 83(4), 789–803. Reeder, G. D., & Spores, J. M. (1983). The attribution of morality. Journal of Personality and Social Psychology, 44(4), 736–745. Riskey, D. R., & Birnbaum, M. H. (1974). Compensatory effects in moral judgment: Two rights don’t make up for a wrong. Journal of Experimental Psychology, 103(1), 171–173. Rosenberg, S., Nelson, C., & Vivekananthan, P. S. (1968). A multidimensional approach to the structure of personality impressions. Journal of Personality and Social Psychology, 9(4), 283–294.

Rosenberg, S., & Olshan, K. (1970). Evaluative and descriptive aspects in personality perception. Journal of Personality and Social Psychology, 16(4), 619–626. Royzman, E. B. & Hagan, J. P. (2017). The shadow and the tree: Inference and transformation of content in psychology of moral judgment. In J.-F. Bonnefon & B. Tremoliere (Eds.), Moral inferences (pp. 56–74). Routledge. Rozin, P., & Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion. Personality and Social Psychology Review, 5(4), 296–320. Schaumberg, R. L., & Mullen, E. (2017). From incidental harms to moral elevation: The positive effect of experiencing unintentional, uncontrollable, and unavoidable harms on perceived moral character. Journal of Experimental Social Psychology, 73, 86–96. Sedikides, C., Meek, R., Alicke, M. D., & Taylor, S. (2014). Behind bars but above the bar: Prisoners consider themselves more prosocial than non-prisoners. British Journal of Social Psychology, 53(2), 396–403. Siegel, J. Z., Mathys, C., Rutledge, R. B., & Crockett, M. J. (2018). Beliefs about bad people are volatile. Nature Human Behaviour, 2(10), 750–756. Skowronski, J. J., & Carlston, D. E. (1989). Negativity and extremity biases in impression formation: A review of explanations. Psychological Bulletin, 105(1), 131–142. Slote, M. A. (1983). Goods and virtues. Clarendon. Strohminger, N., & Nichols, S. (2014). The essential moral self. Cognition, 141(1), 159–171. Strohminger, N., & Nichols, S. (2015). Neurodegeneration and identity. Psychological Science, 26(9), 1468–1479. Sun, J., & Goodwin, G. P. (2020). Do people want to be more moral? Psychological Science, 31(3), 243–257. Swanton, C. (2003). Virtue ethics: A pluralistic view. Oxford University Press. Uhlmann, E. L., Pizarro, D. A., & Diermeier, D. (2015). A person-centered approach to moral judgment. Perspectives on Psychological Science, 10(1), 72–81. Walker, L. J., & Hennig, K. H. (2004). Differing conceptions of moral exemplarity: Just, brave, and caring. Journal of Personality and Social Psychology, 86(4), 629–647. Walker, L. J., & Pitts, R. C. (1998). Naturalistic conceptions of moral maturity. Developmental Psychology, 34(3), 403–419. Willis, J., & Todorov, A. (2006). First impressions: Making up your mind after a 100-ms exposure to a face. Psychological Science, 17(7), 592–598. Wojciszke, B., Bazinska, R., & Jaworski, M. (1998). On the dominance of moral categories in impression formation. Personality and Social Psychology Bulletin, 24(12), 1251–1263. Wojciszke, B., Dowhyluk, M., & Jaworski, M. (1998). Moral and competence-related traits: How do they differ? Polish Psychological Bulletin, 29(4), 283–294. Wood, A. M., Maltby, J., Steward, N., & Joseph, S. (2008). Conceptualizing gratitude and appreciation as a unitary personality trait. Personality and Individual Differences, 44(3), 621–632.

3 Moral Motivation

William Ratoff and Adina L. Roskies

Hugo is racing down College Street on an uncomfortably hot afternoon, late for a meeting with students. He skipped lunch and is now experiencing hunger pangs. Suddenly, he spies a small unattended child, whose mother has just popped into the post office, enjoying a voluminous ice cream sundae. He experiences a strong urge to snatch the sundae from the unsuspecting child and eat it himself. However, almost simultaneously, Hugo judges that it would be morally wrong for him to do that. He experiences this moral judgment as exerting psychic pressure on him to abstain from stealing the sundae. After a momentary internal struggle, Hugo’s better judgment wins out: He shoots past the child and on to his meeting. As the vignette illustrates, it is part of our commonsense conception of the mind that there is an intimate connection between moral judgment and motivation: Such judgments seem to exert motivational pressure on their host subject to act as they recommend. In this chapter, we critically review the literature on moral motivation. Philosophers and psychologists alike have been puzzled by the phenomenon of moral motivation: How can a moral judgment exert motivational force? The nature of moral motivation has long been seen as a “hinge issue” around which core debates in moral philosophy – over the nature of moral judgment and the reality of objective moral facts, etc. – turn. Here we sketch the contours of this pivotal issue in moral psychology and catalogue and define the various key theoretical positions. We then turn our attention to evaluating the relevance of certain empirical results that, some philosophers have argued, bear on these positions.

3.1 What Are Motivational States?

Before we investigate the nature of moral motivation, we ought first to say something about the nature of motivation in general. Paradigm examples of motivational states include desires, intentions, and emotions such as fear. But what unites these mental states as distinctively motivational? What feature do they have in common, in virtue of which they all count as being a motivational state and that explains their motivational character? We propose to (partially) answer this question by reviewing some of the folk-psychological commitments that are broadly agreed by philosophers to
characterize the class of motivational mental states. The first such commitment is that motivational states are intimately related to action – yet distinct from it. If you intentionally perform action F, then you must have had some motivation to F. But this motivation to F, by itself, is insufficient for your actually performing action F. After all, you can be motivated to F yet fail to do F for a variety of reasons – for example, because you were more strongly motivated to perform action G that was incompatible with your doing F. Second, motivational mental states often appear to produce action only in tandem with the right background beliefs (Smith, 1994). My desire to drink soda, in tandem with my belief that I can drink soda if I walk to the soda machine and insert a dollar into it, can prompt me to do just that. Likewise, my intention to drink from the bottle of soda I am holding, together with my belief that I can drink from that bottle if I raise it to my lips, can cause me to raise the bottle to my lips. As these cases suggest, a background belief (of the right sort) appears to be necessary for a motivational mental state to produce action. If I had no beliefs about how I might acquire soda, then my desire for soda wouldn’t prompt me to perform any particular action. And such beliefs play an important role in determining which action a motivational state will bring about. After all, had I believed that I could acquire soda by praying to the Carbonated Beverage Gods, then my desire for soda would have caused me to feverishly pray rather than walk to the soda machine. In general, then, motivational mental states appear to occupy a certain (coarse-grained) functional role in your mental economy: They can produce action in conjunction with the right (means–end) belief.1 Third, motivational mental states play a particular functional role, distinct from that played by means–end beliefs, in the production of action: In particular, they set the target for action and provide the impetus, or “push,” that generates it. In contrast, means–end beliefs, although necessary for the generation of action, play a mere “coordinating” or “guiding” role: They pick out the causal relations among various actions, that relate means to ends (Dretske, 1988). Such beliefs are not the driving source of action; that is the role of the motivational states. Rather, their role is to guide motivational states toward a successful realization of their aim. So, although motivational mental states and means–end beliefs are each necessary and jointly sufficient (in the right circumstances) for the production of action, they play very different roles in the psychic generation of overt behavior. The former provides “the aim and the impetus,” the latter the guidance. Fourth, it is widely held that motivational states are subject to different norms than those governing beliefs and other cognitive mental states. Belief, it is broadly agreed, is governed by epistemic norms alone (Adler, 2002; Parfit, 2011; Shah, 2006; Way, 2016). This is the doctrine known as evidentialism. Epistemic norms include requirements of theoretical rationality – such as the 1
[Footnote 1: Granting, of course, that certain background conditions are satisfied – for example, that the agent hasn't suddenly suffered a total muscle paralysis, etc.]
prohibition against believing contradictory propositions – and considerations that count as evidence in favor of one proposition or another. So the only factors relevant to whether or not you should believe the proposition that the Moon is made of cheese are epistemic norms – such as our decisive evidence from scientific inquiry that the Moon is made only of noncheese substances. Practical considerations, such as your self-interest, are neither here nor there. For example, the fact that an eccentric billionaire has promised you ten million dollars if you believe by next Tuesday that the Moon is made of cheese is not a reason for you to believe that proposition – although it could certainly be a reason for you to take steps to ensure that you believe by next Tuesday that the Moon is made of cheese, perhaps through experimental neurosurgery or hypothetical “belief pills.” Of course, there are dissenters from this evidentialist orthodoxy: Pragmatists about belief hold that practical considerations can count (sometimes) as reasons for belief (Hieronymi, 2005; Leary, 2017a). But the standard evidentialist view has it that belief is subject to epistemic norms alone. Now, whereas beliefs and other cognitive states are governed by epistemic norms alone, motivational mental states are also governed by distinctively practical norms. By way of illustration, suppose that I have much to gain from ingratiating myself to the host of the party I am attending tonight. Given this, I have strong reason to ingratiate myself to her – perhaps through engaging her in charming small talk. Consequently, I am warranted in forming the intention to attempt to ingratiate myself to my host. This case demonstrates how intention, unlike belief, is governed by practical considerations: In light of my strong reasons to F, I am justified in forming an intention to (attempt to) F. It is also highly plausible that intention is governed, in addition, by certain epistemic norms. Suppose, for example, that I am trapped down a well. If I had good evidence that I could take flight by flapping my arms, then it would be rational for me to form the intention to do so. After all, I could remove myself from my current predicament by flapping my arms. However, very plausibly, you cannot rationally intend to do something that you cannot rationally believe you can possibly do. Given my evidence, I cannot rationally believe that I can possibly take flight by flapping my arms. Consequently, I cannot rationally intend, given my evidence, to so take flight. Thus, your intentions are governed by an epistemic norm of consistency with your evidence, in addition to distinctively practical norms concerning what you have reason to do (Bratman, 1987; Holton, 2009; Marusic & Schwenkler, 2018; Ross, 2009; Setiya, 2008; Velleman, 1989). Motivational mental states beyond intention, such as desire, are also – according to some philosophers (Parfit, 2011; Scanlon, 2013) – governed by practical norms. For example, in light of our decisive moral reasons to avoid killing others with our actions, we ought to form the desire to avoid killing others. This is how a fully rational agent, it is claimed, would respond to these reasons. Of course, there are dissenters from this view. For example, those who endorse a Humean view of reasons hold that our (ultimate) desires are not subject to reason or rational criticism. Rather, our (ultimate) desires are the source of our reasons for action. You have a reason to F, for the Humean, just
when you have some ultimate desire that would be satisfied (at least, somewhat) if you performed action F (Schroeder, 2007). Nevertheless, on both views, there is an intimate relation between reasons for action and desires to so act. This contrasts with the case of belief and other nonmotivational cognitive mental states. These mental states, by the lights of common sense (Stahl et al., 2016), are subject to epistemic norms alone. But motivational mental states, as we have just seen, should be understood too through reference to their connections to distinctively practical norms. So, it is not just their characteristic functional role that distinguishes the motivational mental states from the nonmotivational cognitive ones, but also the nature of the norms to which they are subject or related. This is a key aspect of our folk-psychological understanding of the motivational, and the mental more broadly. Let us summarize our theoretical position on the nature of the motivational mental states. Motivational mental states can produce action in tandem with the right means–end beliefs; play a certain functional role in the production of action (they provide “the aim and the impetus”); and are governed by distinctively practical norms (in addition, perhaps, to epistemic norms). Of course, the functional roles constitutive of, and norms governing, particular motivational mental states – the desire that p, the intention to F, etc. – can be articulated in a more fine-grained manner. For example, the functional role constitutive of desire might be thought to be fully expressed by the role assigned to it by normative decision theory (Lewis, 1988, 1996) or a completed cognitive neuroscience (Schroeder, 2004). And the functional and normative nature of intention has also been more finely articulated (e.g., Bratman, 1987). But the features we have catalogued here capture, we believe, what is distinctive of motivational states qua motivational states.

3.2 The Problem of Moral Motivation

Let us now turn to the nature of moral motivation. The locus classicus of the debate over the nature of moral motivation is a line of thought widely attributed to David Hume in his (1739) Treatise of Human Nature.2 We propose – taking inspiration from Smith (1994) – to formulate this dialectic as an inconsistent triad (i.e., a trio of jointly inconsistent propositions) that we dub "Hume's problem." Despite being jointly inconsistent, each of the three propositions is found highly plausible by philosophers. Each expresses a central organizing principle in moral philosophy. Moreover, each proposition can be empirically investigated. The three doctrines in question are the Humean theory of motivation; cognitivism about moral judgment; and moral judgment internalism. But, given their joint inconsistency, one must be given up. And the choice you make here determines, to a significant extent, the theoretical options available to you in a host of central debates in moral philosophy.

[Footnote 2: For example, by Björnsson et al. (2015) or Parfit (2011).]

Hume’s problem is formulated as follows: (1) (2) (3)

Beliefs cannot causally suffice, by themselves, for motivation. Moral judgments are beliefs. Moral judgments causally suffice, by themselves, for motivation.
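Put schematically, in an ad hoc notation on which nothing in the literature hangs, the triad can be displayed as follows. Let MJ be an agent's judgment that morality requires her to F, let m range over motivational states, let Belief(x) say that x is a belief, and let Suffices(x, m) say that x causally suffices, by itself (in a rational agent, on the conditional reading), for m:

\[
\begin{aligned}
&(1)\ \text{Humean theory:} && \forall x\,[\mathrm{Belief}(x) \rightarrow \neg\exists m\,\mathrm{Suffices}(x, m)] \\
&(2)\ \text{Cognitivism:} && \mathrm{Belief}(\mathrm{MJ}) \\
&(3)\ \text{Internalism:} && \exists m\,\mathrm{Suffices}(\mathrm{MJ}, m)
\end{aligned}
\]

The notation does no philosophical work of its own; it simply makes the shape of the conflict easier to survey.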

Proposition (1) is the statement of the Humean theory of motivation. Proposition (2) expresses cognitivism about moral judgment. And proposition (3) articulates moral judgment internalism. Before we outline the inconsistency Hume finds between these doctrines, let us first characterize each of these views and review the reasons for endorsing them. The Humean theory of motivation is the doctrine that beliefs alone cannot causally suffice for motivation. Rather, desire is necessary too: Motivation and action are generated only by a belief and a desire working in tandem (Schueler, 2009; Sinhababu, 2009; Smith, 1987, 1994). Desires can be brought about, or changed, by some chain of reasoning only if a desire features among the premises of that thinking (Sinhababu, 2009). Very briefly, this principle of philosophical psychology enjoys the status of orthodoxy on the grounds that it seems to be part of our (empirically very successful) folk-psychological theory of the mind: Our folk-psychological grip on the mental domain suggests that beliefs and motivational states bear no logically necessary connections to one another. After all, Hume noted, it seems that having any one set of beliefs regarding the ways things are, rather than some other set of beliefs, doesn’t in itself place any restrictions on how one is fundamentally motivated to act. Beliefs by themselves don’t appear to cause, or necessitate, particular desires or intentions, etc. Of course, my desire for a thirst-quenching drink might combine with my belief that I can get a drink from the refrigerator to cause me to acquire the intention to walk to the fridge. But this is consistent with beliefs alone having no causal or necessary connection to desires or to other elements of motivation. And this latter claim is all the Humean is affirming. In this way then, our commonsense grasp of the mental supports the Humean’s contention that beliefs cannot causally suffice, by themselves, for motivation. Hence, in advance of our theoretical commitments pushing us this way or that, we ought to endorse the Humean theory of motivation. It enjoys enough prima facie warrant to constitute the presumptive view in this region of philosophical psychology. The second doctrine making up “Hume’s problem” is cognitivism about moral judgment. This is simply the view that moral judgments are beliefs – in particular, beliefs whose contents are propositions concerning moral requirements or reasons. Examples of such judgments are my belief that it would be wrong for me to eat meat or your belief that you are morally required to give to charity.3 Cognitivism contrasts with noncognitivism about moral judgment. 3
[Footnote 3: Throughout this chapter, we shall be following the norm in moral philosophy of understanding the term "moral judgment" to refer to first-personal moral judgments – that is, one's judgments concerning what one would be morally required to do or morally forbidden from doing. These are the species of moral judgment that principally concern us in debates over the motivational power of moral judgments.]
This is the doctrine that moral judgments are not beliefs, but rather certain (complexes of ) noncognitive mental states, such as desires, sentiments, or states of approbation or disapprobation, etc. The phenomenon of morality, for the noncognitivist, bottoms out then, not in a realm of moral facts, but rather in the fact that we happen to be motivated to behave in certain – pro-social; transgressor-punishing; norm-endorsing, etc. – ways (Blackburn, 1998; Gibbard, 1990). The central importance of the debate between the cognitivist and the noncognitivist to moral philosophy can be appreciated when it is observed that cognitivism is a commitment of moral realism, the view that there are (objective) facts about what morality requires of us. After all, if there are moral facts, then moral judgments must be the mental states that aim to correctly represent these facts. Since beliefs are the kind of mental state that aims to correctly represent – or fit – the facts (Anscombe, 1957; Smith, 1987, 1994), the moral realist must hold that moral judgments are a variety of belief. To judge that morality requires you to F, is just to believe that morality requires you to F. Conversely, the truth of noncognitivism seems to entail the truth of moral antirealism, the doctrine that there are no facts of the matter about what morality requires of us. This opens the way for a potent antirealist argumentative strategy: simply show that cognitivism is false and, from there, derive the truth of moral antirealism (Ayer, 1936). The third, and final, doctrine making up Hume’s inconsistent triad is moral judgment internalism. This is the view that moral judgments can causally suffice, by themselves, for motivation (Prinz, 2015). More precisely, it is the doctrine that your judgment that morality requires you to do F causally suffices, by itself, for your being motivated – to some degree – to do F. In other words, that your moral judgment here can incline or push you – somewhat, at least – toward doing F. It contrasts with moral judgment externalism, the view that your judgment that morality requires you to do F does not causally suffice, by itself, for the presence of any motivation in you toward doing F. Rather, moral motivation can only result from moral judgment in conjunction with some background mental state. Philosophers who endorse internalism include Blackburn (1998), Copp (2018), Darwall (1983), Dreier (2015), Gibbard (1990), Mackie (1977), and Smith (1994). Moral judgment internalism looks to enjoy intuitive support. Arguably, the first-person phenomenology of making a moral judgment suggests that such judgments can causally suffice, by themselves, for the presence of motivations to act as they recommend. Suppose, for example, that I am in dire financial straits: An investment has gone sour and I am overleveraged on my mortgage. While filling out my taxes for the year, I realize I could salvage my financial situation somewhat by misrepresenting my income to the Federal Government. I would likely get away with it, I muse, and the negative impact on others would be negligible. I then experience the urge to cheat on my taxes. However, I suppress this urge, in part, by reminding myself that it would be morally wrong for me to cheat on my taxes. Introspectively, it certainly seems like my first-person moral judgments are always accompanied by some motivation in me to act in accord
with them. I experience them as exerting psychic pressure on me to act as they recommend. In addition, moral judgments are often cited as playing the role of a motivational state in the explanation of action. For example, suppose that I ask an eminent historian why certain Polish partisans sheltered Jews during World War II. She replies: “Because they judged that morality required them to save the lives of the innocent and they believed that they could save the lives of these innocent people by sheltering them.” Here moral judgments are depicted as playing the role of a motivational mental state in the production of action: In tandem with certain means–end beliefs, moral judgments causally suffice for action. Examples such as these are widely agreed to provide prima facie support for internalism (Shafer-Landau, 2003). The internalist should say more about what precisely she means when she says that a moral judgment can causally suffice, by itself, for motivation. Some contemporary internalists flesh their doctrine out in the following way: Internalism, they maintain, is the view that moral judgments causally suffice, by themselves, for motivation in a rational agent. This is a version of “conditional internalism,” internalism defeasible under certain conditions, that is currently the most prominent variety of internalism endorsed in the contemporary philosophical literature (Korsgaard, 1986, 1996; Smith, 1994; van Roojen, 2018; Wallace, 2006; Wedgwood, 2007). It is a step back from unconditional internalism, the view that moral judgments are necessarily motivating, that characterized the earlier literature on internalism. This new weaker version of internalism was motivated by the recognition that being in a psychologically abnormal condition, such as apathy or depression, one incompatible with full rationality, can render one’s moral judgments motivationally inert.4 Externalism should now be understood, in contrast, as the doctrine that moral judgments do not causally suffice, by themselves, for motivation in a rational agent. Rather, if moral judgments do causally suffice for appropriate motivation in a rational agent, then it must be in virtue of said judgments interacting with some or other background attitude(s). We are now in a position to understand Hume’s problem and its force. The alleged inconsistency between the Humean theory of motivation, cognitivism, and internalism can be rationally reconstructed in the following manner: Suppose that you judge that you are morally required to F. Granting (rationalist conditional) internalism – the doctrine that your moral judgments causally suffice, by themselves, for the presence of motivation to act in accord with these judgments in a rational agent – it follows that you must be in a motivational 4
[Footnote 4: An agent S is fully rational if and only if (1) S is fully coherent (i.e., possesses no conflicting attitudes) and (2) S is appropriately responsive to all her reasons. Full rationality is consequently a very difficult state for anyone to occupy. Very plausibly, apathy and depression are going to be inconsistent with full rationality. After all, even when apathetic, you still have good reasons to get out and about, reasons that you are (irrationally) not responding to appropriately. Similarly, when you are depressed, you typically have good reasons to feel things (joy) or do things (get out of bed), reasons to which you are failing to appropriately respond.]
mental state that can incline, or push, you toward doing F. But the Humean theory of motivation has it that no belief can causally suffice, by itself, for the presence of any motivation whatsoever, even under conditions of full rationality. This means that your moral judgment cannot be a belief, and must rather itself be a motivational state of one sort or another – most plausibly, some kind of desire. In this way then, we have deduced the falsity of cognitivism about moral judgment, the doctrine that moral judgments are beliefs, and the truth of noncognitivism, from the conjunction of internalism and the Humean theory of motivation. Of course, a stout-hearted cognitivist will not just roll over in defeat here. Rather, she will resist this dialectic by rejecting either internalism or the Humean theory of motivation (or both). The first group of cognitivists who reject internalism are externalists. For externalists, moral judgments alone do not causally suffice for motivation. Moral judgments are, by themselves, motivationally inert. Merely judging that you are morally required to do F does not, by itself, provide any motive to do F. Rather, moral judgments can only bring about moral motivation in conjunction with the right background desire – the desire, say, to do the morally right thing, or the desire to avoid harm, provided that avoiding harm is in that instance the morally right thing to do, etc. (Boyd, 1988; Parfit, 2011; Railton, 1986). The second group of cognitivists, those who deny the Humean theory of motivation, are known as “anti-Humeans.” They hold that beliefs alone can produce motivation. It is not the case, for anti-Humeans, that motivation is only ever brought about by a belief if it is working in tandem with an appropriate desire. Consequently, on this view, your moral judgments can causally suffice for motivation, and intentional action, all consistent with these moral judgments themselves being nothing but beliefs (McNaughton, 1988; Nagel, 1970; Platts, 1991; Shafer-Landau, 2003). As we have illustrated, the nature of moral motivation is a “hinge issue” around which central debates in moral psychology and moral philosophy turn. Philosophers have long jockeyed their intuitions for and against the doctrines of internalism, cognitivism, and the Humean theory of motivation, hoping to establish their preferred resolution to Hume’s problem. We now turn to reviewing and evaluating claims made by various philosophers that certain empirical observations might bear upon these matters.
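Before we do, the reductio just rehearsed, together with the three ways out of it, can be compressed into the same ad hoc notation introduced above (a summary device only; no argument below depends on it):

\[
\begin{aligned}
&1.\ \exists m\,\mathrm{Suffices}(\mathrm{MJ}, m) && \text{premise (3), internalism} \\
&2.\ \mathrm{Belief}(\mathrm{MJ}) && \text{premise (2), cognitivism} \\
&3.\ \exists x\,[\mathrm{Belief}(x) \wedge \exists m\,\mathrm{Suffices}(x, m)] && \text{from 1 and 2} \\
&4.\ \forall x\,[\mathrm{Belief}(x) \rightarrow \neg\exists m\,\mathrm{Suffices}(x, m)] && \text{premise (1), Humean theory} \\
&5.\ \bot && \text{from 3 and 4}
\end{aligned}
\]

The noncognitivist escapes by rejecting line 2, the externalist by rejecting line 1, and the anti-Humean by rejecting line 4, which are the three responses surveyed in this section.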

3.3 Empirical Psychology and Moral Motivation

Philosophers have recently begun arguing that certain empirical findings from the psychological and brain sciences tell in favor of one or other doctrine in philosophical moral psychology. In the rest of this chapter, we review and critically discuss a sample of these claims from the literature. We shall focus our attention first on claims that empirical results settle the debate between internalists and externalists (or, at least, militate heavily in
favor of one over the other), before turning to similar claims regarding the disagreement between cognitivists and noncognitivists, and between Humeans and anti-Humeans about motivation. Philosophers have argued that the phenomenon of clinical psychopathy is pertinent to the internalist/externalist debate (Schroeder et al., 2010). Traditionally, internalists and externalists have coaxed our intuitions over the logical possibility of amoralists: hypothetical individuals who make moral judgments without being motivated in the slightest to act as they recommend, despite being seemingly rational (Brink, 1997; Shafer-Landau, 2003). If amoralists are logically possible, then internalism stands refuted. After all, the mere possibility of an amoralist would entail that it is not the case that moral judgments, by their nature, enjoy a necessary conceptual connection to motivation in a rational agent. Externalists have therefore long appealed to the seeming possibility – or conceivability – of amoralists to bolster their view. However, such conceivability arguments are problematic, both because some doubt that the conceivability of some state of affairs p is good evidence that p is possible (Putnam, 1975), and also because people’s intuition about what is conceivable differ. Given this, an actual example of a real-life amoralist would be far more convincing evidence of their possibility. On first examination, psychopaths appear to be a case of real-life amoralists. Although such people are – by and large – cognitively normal, they manifest little guilt, empathy, or remorse for morally wrong actions. Psychopaths are often perfectly intelligent, seem rational, and appear able to make appropriate moral judgments about a wide range of cases. Nevertheless, they frequently have a history of chronic antisocial behavior behind them – including, but not limited to, lying, stealing, torturing, and killing. And, even more disturbingly, they can engage in these actions without emotional cost. In short, psychopaths look cognitively equipped to make appropriate moral judgments, but are seemingly indifferent to the deliverances of such judgments or the dreadful consequences of their actions. This looks like good empirical evidence for externalism (Kelly et al., 2007). However, internalists disagree. Some philosophers have argued that the existence of psychopaths poses no problem for internalism (Nichols, 2004; Prinz, 2007). How so? Well, these thinkers argue that psychopaths don’t really understand morality or even grasp moral concepts at all. As Prinz (2007, p. 43) puts it: “Psychopaths seem to comprehend morality, but they really don’t. They use moral terms in a way that deviates strikingly from the way non-psychopaths use those terms. These deviations suggest that they do not possess moral concepts.” In other words, some internalists have sought to render the existence of psychopaths consistent with their view by denying that psychopaths really make moral judgments. After all, if psychopaths don’t even make moral judgments in the first place, then the fact that they seem not to be motivated to act in accord with the norms of commonsense morality – including norms that they profess to recognize – is no obstacle for internalism. In this way, the internalist can continue to hold that moral judgments can causally suffice, by themselves,
for motivation (and do so suffice in a rational agent), while admitting the existence of individuals with the psychic and behavioral profile characteristic of psychopaths. Of course, the position that a subset of intelligent adult humans don’t grasp moral concepts is a heavy lift. The internalist had better have some principled reason to advocate for it, independent of her desire to preserve her theory. Fortunately for the internalist, a reason is available. Nichols (2004) and Prinz (2007) argue that a grasp of moral concepts presupposes being able to distinguish the requirements of morality from the requirements of convention. After all, very plausibly, a grip on any arbitrary concept F requires being able to reliably distinguish instances of Fs from instances of non-Fs. You don’t really understand the meaning of a concept if you are systematically confused about its extension under conditions of full information. These philosophers then appeal to empirical evidence that, in tandem with the reasoning described earlier, supports the conclusion that psychopaths don’t understand the moral/conventional distinction. First, Blair (1995) found that incarcerated adult psychopaths do not reliably distinguish moral from conventional wrongs. They did not treat moral and conventional wrongs significantly differently and, unlike a control group of nonpsychopathic prisoners, they tended to ignore the victim’s welfare when explaining why some action was morally wrong. Second, Blair (1997) administered the moral/conventional wrongs test on children with psychopathic tendencies. They found that these children, unlike control children, tended to treat all wrongs as merely conventional. Morality, for these children, seemed no different from etiquette or conventions about which side of the road one should drive upon. These results are all the more striking when one learns that healthy children have already begun to master the moral/conventional distinction by the time they are three years old (Nucci, 2001; Smetana, 1981; Turiel, 1983). Taken together, these results appear to strongly support the conclusion that psychopaths lack a (full or proper) understanding of moral concepts. Prinz (2007, p. 44) sums up this picture of psychopathy as “psychopaths can give lip service to morality, but their comprehension is superficial at best.” If this is correct, then the argument for externalism from psychopathy can be defused, in the way described earlier, through appeal to the claim that psychopaths do not really make moral judgments, a proposition that is itself supported by reference to empirical evidence that psychopaths do not (properly) grasp moral concepts. However, the empirical evidence concerning whether psychopaths understand the moral/conventional distinction is inconsistent. For example, Aharoni et al. (2014) found that psychopaths correctly distinguished moral from conventional transgressions. This study employed considerably more participants (139) than Blair’s 1995 study did (20–40), and its result also cohere with other similar studies (Aharoni et al., 2012). Aharoni et al. (2014, p. 179) conclude thusly: “The observed pattern of results comports with the alternative view that psychopathic individuals ‘know right from wrong but don’t care.’”
If these studies are correct, psychopaths continue to pose a problem for internalism.5 Another group that seems unmoved by moral judgment yet understands moral concepts are patients with ventromedial prefrontal cortex (vmPFC) lesions. Some have argued that they are real-life cases of amoralists: Damasio et al. (1990) describe the pattern of behavioral and psychic deficits manifested by such patients as constituting an “acquired sociopathy.” Roskies (2003) argued that vmPFC patients constitute good empirical evidence for unconditional externalism about moral judgment and, because they possessed normal moral concepts prior to their injury, provide reason to believe that their cognitive grasp of morality remains intact. However, the moral judgments of vmPFC patients, according to Roskies, have lost their motivational punch. On Roskies’ analysis, the distinctive psychopathology of vmPFC patients reveals that moral judgments do not causally suffice, by themselves, for the presence of appropriate motivation. Disconnected from motivational systems by vmPFC lesions, their moral judgments fail to lead to motivation and action. Let’s get clearer on the nature of the standard deficits induced by a vmPFC lesion. First, such patients appear cognitively normal on a wide range of standard psychological tests, including those measuring intelligence or domain-general reasoning abilities and those probing whether their knowledge of the world has been damaged. In particular, the moral reasoning of vmPFC patients appears to be unimpaired under experimental conditions: They perform at a normal level on Kohlberg’s moral reasoning scale (Saver & Damasio, 1991) and make normal moral judgments in a variety of hypothetical scenarios (Koenigs et al., 2007).6 Second, vmPFC patients have a profound difficulty in acting in accord with social norms or, indeed, in accord with considerations of basic prudence at all. This is best demonstrated through a concrete example. The following case study of patient EVR, reported by Damasio et al. (1990, pp. 91–92), vividly illustrates this: By age 35, in 1975, EVR was a successful professional, happily married, and the father of two. He led an impeccable social life, and was a role model to younger siblings. In that year, an orbitofrontal meningioma was diagnosed and, in order to achieve its successful surgical resection, a bilateral excision of orbital and lower mesial cortices was necessary . . . EVR’s social conduct was profoundly affected by his brain injury. Over a brief period of time, he entered
disastrous business ventures (one of which led to a predictable bankruptcy), and was divorced twice (the second marriage, which was to a prostitute, only lasted 6 months). He has been unable to hold any paying job since the time of the surgery, and his plans for future activity are defective.

[Footnote 5: Of course, conditional internalists are free to maintain that psychopaths fail to be motivated appropriately by their moral judgments because they are, for one reason or another, irrational. However, it is difficult to see what this irrationality could consist in, given that psychopaths appear cognitively normal in all other respects.]

[Footnote 6: However, there are exceptions. The results of Koenigs et al. (2007) indicate that the moral judgments of vmPFC patients with acquired sociopathy differ in certain domains. In particular, they make moral judgments that are statistically significantly different from those of normal subjects in situations that Greene et al. (2004) classify as "up close and personal." The judgments they make are "more utilitarian" in character than those normal subjects make.]

EVR has a clear deficit in acting prudentially. Roskies (2003) argues that vmPFC patients like EVR also exhibit a parallel failure to abide by certain norms of commonsense morality – such as those prohibiting breaking one’s promises or reneging on one’s responsibilities, etc. – to a degree sufficient to warrant the description “acquired sociopathy.” On these grounds, Roskies concludes that vmPFC patients suffer from a moral failure. However, since the performance of such patients on moral reasoning tests is (mostly) in the normal range, she infers that this failure consists not in a deficit in making appropriate moral judgments, but rather in a deficit in associating value with those judgments, so that the patients’ moral judgments have lost their normal motivational punch. In other words, vmPFC patients, according to Roskies, fail to be motivated to act in accord with the requirements of morality, despite (being capable of ) knowing that what they are doing is wrong. In this way then, Roskies argues that empirical observations of brain-damaged patients support externalism about moral judgment. Moral judgments, by themselves, are not sufficient for generating moral motivation. Rather, they only move us when appropriately coupled to separate (and separable) motivational systems. A number of philosophers have counters to Roskies’ arguments and alternate interpretations of these empirical observations. For example, Cholbi (2006) holds that vmPFC patients lack the moral beliefs that Roskies attributes to them. Consequently, he concludes, such patients pose no trouble for internalism. And Smith (2007) takes vmPFC patients’ failure to be motivated by their moral judgments only to show that their brain damage has rendered them systematically irrational. Leary (2017b) argues that vmPFC patients make “weaker,” or less confident, normative and moral judgments than normal subjects. As a result, she maintains, they are less motivated by these judgments than normal people – though, crucially, still somewhat motivated, which allows their decision making to be overruled by desires for greater or more immediate rewards. In these ways the internalist can seek to render her view consistent with the existence of neuropsychological patients with “acquired sociopathy.” This debate is ongoing and settled consensus on the significance, if any, of vmPFC patients for the internalist/externalist dispute has not yet been reached. Much of the traditional philosophical debate concerning the internalism/ externalism dispute, as we have seen, has turned upon the logical possibility of amoralists. In recent years, some have worried that the intuitions of philosophers engaged in this debate may have been “corrupted” by their theoretical commitments. Perhaps commitment to externalism affects whether one judges amoralists to be conceivable, and therefore logically possible, whereas commitment to internalism drives judgments of inconceivability. Hoping to make concrete progress in face of this seeming stalemate, experimental philosophers have decided to consult the intuitions of nonphilosophers – “the folk” – who
almost certainly lack meta-ethical views and thus should be less subject to theoretical confirmation bias. Their reasoning here goes like this: If a substantial majority of nonphilosophers are ready to attribute moral judgments to moral subjects who lack the corresponding appropriate motivation, then the best explanation of this – ceteris paribus – is that “ordinary people” operate with an externalist conception of moral judgment and that the intuitions of internalist philosophers have been corrupted by their theoretical commitments. Likewise, for externalism and externalist philosophers, if the folk are unwilling to attribute moral judgments under such circumstances. What do the data tell us? The first such experimental philosophy study (Nichols, 2002) sought to empirically investigate the popular conditional version of internalism, introduced earlier, according to which a moral judgment only causally suffices, by itself, for appropriate motivation in a rational agent, and an irrational agent can be wholly unmoved by her moral judgments. In order to test whether moral judgments necessarily produce corresponding motivations under conditions of full rationality, Nichols presented the following vignette to “philosophically unsophisticated undergraduates”: John is a psychopathic criminal. He is an adult of normal intelligence, but has no emotional reaction to hurting other people. John has hurt and indeed killed other people when he has wanted to steal their money. He says that he knows hurting others is wrong, but that he just doesn’t care if he does things that are wrong. (Nichols, 2004, p. 74)

After reading this scenario, subjects (N = 26) were asked whether John really understands that hurting others is morally wrong. Nichols' results go like this: 85 percent of the subjects responded "Yes" and 15 percent responded "No." These results suggest that ordinary people are mostly inclined to attribute moral understanding (and thus presumably moral beliefs) to John, despite his apparent rationality and lack of appropriate motivation. Nichols (2002, 2004) takes this as evidence that the folk operate with some externalist conception of moral judgment, in which moral judgments can fail to produce corresponding motivation in a fully rational agent, one incompatible with the truth of conditional internalism (see also Strandberg & Björklund, 2013). Nichols' study, however, has been criticized on various grounds. First, as Joyce (2008) observes, Nichols' vignette does not make it explicit that John is practically rational. Given this, it is consistent with the data at hand that (at least) some of these subjects are operating with a conditional internalist conception of moral judgment, like the one advocated by Smith (1994), and simply believe that John is irrational. Having failed to control for this interpretation, Nichols' study does not serve as evidence against conditional internalism. Second, Nichols' vignette leaves too much implicit in other respects too. For example, it doesn't rule out the possibility that John is somewhat appropriately motivated by his moral judgments, but this is overridden by stronger motivations to the contrary. And this is all that any plausible version of internalism entails. Third, the results of Nichols' study have failed to replicate. Björnsson
et al. (2015) ran a word-for-word duplicate of Nichols' study, this time with 93 participants, and found that only 48 percent (rather than 85 percent) of subjects answered "Yes" to the question probing whether John understood that hurting others is morally wrong, with 52 percent denying that John understood this. Taken together, these criticisms suggest that further experimental inquiry is needed. In response to this, Björnsson et al. (2015) report a number of studies that they interpret as suggesting that a majority of ordinary people operate with some internalist conception of moral judgment. The vignettes they use are far longer, more detailed, and more explicit than the one used by Nichols (2002). Consequently, they do not suffer from the problems described earlier. The most compelling result they report comes from a comparison of four different studies they conducted. All feature an agent Anna who can correctly classify actions as morally right or wrong but who reliably fails to act in accord with the requirements of morality. What varies between the studies is the explanation that is proffered for Anna's actions. In one study ("Inner Struggle"), Anna was depicted as being motivated to act morally, but this motivation was then trumped by a stronger motivation to the contrary: to perform an action that she classified as wrong. Here 80 percent of subjects attributed to Anna the moral belief that her action was wrong. In the second such study ("Listlessness"), Anna is presented as performing the same morally wrong action that she classifies as being wrong. However, here she is described as experiencing no motivation whatsoever to act in accord with her judgment. But this is explained as being the result of her clinical depression, which has recently set in and left her listless and bereft of her previous zest for life, as well as of her prosocial and other-regarding motivations. Here 70 percent of tested subjects were willing to attribute to Anna the moral belief that her action was wrong. In the third study ("Psychopath"), Anna is again depicted as performing an action that she classifies as morally wrong, and as experiencing no motivation whatsoever to refrain from doing it. However, here the explanation the vignette proffers for her failure to be so motivated is her clinical psychopathy. Now only 46 percent of test subjects attribute to Anna the belief that her action was morally wrong. Lastly, in the fourth such study ("No Reason"), Anna is again presented as doing something that she classifies as being morally wrong, but as having no motivation at all to abstain from so acting. Here, however, no explanation or reason is given for the absence of Anna's expected motivation. In this case, a mere 36 percent of subjects are willing to attribute to Anna the belief that her action was morally wrong. What's the significance of all this? Well, taken together, Björnsson et al. (2015) suggest that most tested subjects operate with an internalist conception of moral judgment: In the absence of a corresponding appropriate motivation, a rational agent cannot count as holding a moral belief. After all, the best explanation of the drop in "Yes" answers between the "Inner Struggle" condition (80 percent), where Anna has some motivation to act as morality requires, and the "No Reason" (36 percent) or "Psychopath" (46 percent) conditions, in which she has no such
motivation, is that most tested subjects regard moral judgments as entailing the existence of an appropriate motivation to act as they recommend in a rational agent. This is further supported by the fact that 70 percent of subjects were willing to attribute a moral belief to Anna in the “Listlessness” condition, in which Anna is presented as having no motivation to act in accord with morality due to her clinical depression, a mental state that is plausibly incompatible with full rationality. However, there is still reason to be skeptical over whether experimental philosophy has settled the internalist/externalist dispute. First, results are not univocal over whether the folk operate with an internalist or an externalist conception of moral judgment (Björnsson et al., 2015; Strandberg & Björklund, 2013). Second, and more importantly, the case has not yet been convincingly made that the folk’s conceptions of things – such as their conception of moral judgment – reliably carves at the joints of nature (Williamson, 2007). We have not been given good reason to think there is a straightforward link between what the folk think about internalism and externalism and the truth of these doctrines. As in the internalism/externalism debate, empirical evidence has been marshaled by partisans in the cognitivism/noncognitivism debate. In recent years, there have been plenty of psychological and neuroimaging experiments investigating the relationship between making a moral judgment and experiencing certain emotions. On the face of it, if it can be demonstrated that (part of ) what it is to make a moral judgment is to undergo a certain emotion (of admiration or anger, say), then that would constitute good evidence for noncognitivism about moral judgment – the view that moral judgments are (complexes of ) noncognitive states, such as desires and feelings, and not beliefs. After all, perhaps, emotions are themselves noncognitive states, composed (at least, in part) out of paradigm noncognitive states such as desires, inclinations, and aversions, etc. (Prinz, 2015). Of course, any such evidence would be theoryladen: The fact, if it is a fact, that moral judgments are constituted by emotional states only counts as evidence for noncognitivism on the assumption that emotions themselves are noncognitive states. Cognitivists about emotion – such as Nussbaum (2001) and Solomon (1976) – would (clearly) reject this proposition. These philosophers hold that emotions are a species of cognitive state – for example, that what it is to experience guilt is to judge that you have engaged in a wrongdoing. Nevertheless, the dominant view in philosophical psychology is that emotions are noncognitive states (Prinz, 2007). Consequently, for the purposes of this discussion, we shall assume that noncognitivism about emotion is correct and thus that any evidence that moral judgments are constituted by emotions further counts as evidence for noncognitivism. The evidence to be reviewed here in favor of noncognitivism also serves as evidence for the Humean theory of motivation. After all, the chief reason to disbelieve this Humean philosophical psychology comes from the case of moral motivation. Anti-Humeans contend that your moral beliefs – and your normative beliefs more generally – can motivate you to action in the absence of any
desire. However, if moral and practical normative judgments are really noncognitive states, such as desires, as the noncognitivist contends, then even the case of moral motivation will be consistent with Humean psychology: Your moral judgments can motivate you because they are nothing but (complexes of) desires, sentiments, or emotions, etc. In which case, the only grounds for disbelieving the Humean theory of motivation will have been neutralized. So, although we shall present all the following evidence as pertaining to the debate between the cognitivist and the noncognitivist, the reader should bear in mind that this evidence also speaks to the issue dividing Humeans and anti-Humeans about motivation. There is now a significant amount of neuroimaging evidence suggesting that making a moral judgment co-occurs with experiencing an emotion. Indeed, every neuroimaging study investigating moral judgment seems to implicate brain areas known to be involved with emotion in moral cognition (Greene & Haidt, 2002; Prinz, 2015). For example, Heekeren et al. (2003) instructed subjects to judge whether sentences are "morally incorrect" (such as "S steals R's car") or "semantically incorrect" (such as "S drinks the newspaper"). When subjects identified a sentence as being "morally incorrect," their brains activated significantly more in areas associated with emotions relative to when they were identifying a sentence as "semantically incorrect." Similarly, Moll et al. (2002) had subjects make "right" or "wrong" classifications about so-called moral sentences, such as: "They hung up an innocent person," and "factual sentences," such as: "Stones are made of water." Again, areas of the brain associated with emotions were significantly more active when subjects were making judgments about "moral sentences" relative to when they were making judgments about "factual sentences." This conclusion – that areas of the brain underpinning emotions are implicated in tasks inducing moral cognition – is supported by a growing number of neuroimaging studies (Berthoz et al., 2002; Greene et al., 2001; Moll et al., 2002; Sanfey et al., 2003). The brain structures implicated by these studies include the anterior cingulate cortex, the insula, the orbitofrontal cortex, the temporal pole, and the medial frontal gyrus – all familiar players from emotion studies (Phan et al., 2002). There is also a growing body of behavioral evidence that emotions have a causal influence on moral judgment. One prominent current in this literature is evidence that experiencing negative emotions looks to lead subjects to make more "morally critical" moral judgments than they would otherwise have made. For example, in one important study, Wheatley and Haidt (2005) hypnotized subjects to feel a pang of disgust whenever they heard the neutral words "often" or "take." Afterwards, they were asked to morally evaluate the protagonist of various stories, some of which contained one or other of these two trigger words. For example, they might hear about a congressman who "takes bribes" or "is often bribed." Wheatley and Haidt found that the strength of subjects' wrongness evaluations increased when the story contained one of these neutral trigger words, relative to evaluations of protagonists in morally equivalent scenarios. In other words, the experience of disgust, induced by a morally
irrelevant trigger word, is observed to cause an increase in the strength of subjects’ moral denunciations. Furthermore, the effect remains when subjects are asked to evaluate the conduct of protagonists in morally neutral scenarios. For example, subjects report finding a student who is described as “often picking interesting topics in school discussions” – a morally neutral action – as morally suspect, even though they can’t explain why (“It just seems like he’s up to something. . .”). This result suggests that the experience of disgust, induced by a trigger word, can cause a subject to negatively morally evaluate a person who is described in ways that would not warrant such judgment. In a similar study, Schnall et al. (2008) instructed subjects to morally evaluate the conduct of protagonists in described scenarios. For example, “Frank’s dog was killed by a car in front of his house. So he cut up the body and cooked it and ate it for dinner. How wrong was that?” Subjects were either sitting at a clean, tidy desk or a filthy, messy desk (featuring such disgust-inducing items as a used tissue, a greasy pizza box, a crusty drinking cup, and a chewed pencil, etc.) while performing this task. Schnall and colleagues found that subjects who were sat at the filthy desk morally evaluated the protagonists of the described scenarios more harshly than those who sat at the clean desk. Again this suggests that simply experiencing the emotion of disgust while reading the vignettes causes the subject to form more morally negative judgments of the described actors.7 One natural explanation of these results is noncognitivism about moral judgment. For the noncognitivist, moral judgments cooccur in the mind with emotions, as the neuroimaging results described earlier suggest, because moral judgments just are constituted by certain (complexes of ) noncognitive states – namely, the moral emotions of disapprobation, anger, admiration, guilt, etc. According to the noncognitivist, negative emotions induce more critical moral judgments, as the catalogued behavioral results described earlier indicate, precisely because such judgments are nothing over and above these emotions. In this way then, research in empirical moral psychology can be marshaled in favor of a noncognitivist philosophical psychology. Of course, cognitivists about moral judgment push back here. They argue, for example, that these empirical results establish at most that there are causal relations between moral judgments and the emotions, and not that one is constituted by the other. For example, a cognitivist externalist who further holds that human subjects normally care deeply about morality can easily affirm that moral judgments cause emotions. After all, romantic music, gloomy weather, or high stakes sporting events – things many people care about – all cause emotions, without them (or our representations of them) being
[Footnote 7: That said, see Landy and Goodwin (2015) for a meta-analysis that shows that the effects of disgust on moral judgments are very weak and conditional upon certain background conditions obtaining. There is also compelling evidence against there being an effect of emotion on moral judgments (see, e.g., Barger & Derryberry, 2013; Gamez-Djokic & Molden, 2016; Gawronski et al., 2018).]
constituted by emotions. Given that, we should expect that our judgments about morality – something we generally care deeply about – should incite our passions. Indeed, there is empirical evidence that at least some moral judgments precede associated emotional experiences (Cusimano et al., 2017), findings that are consistent with moral judgments causing said emotions.

But what about (negative) emotions causing moral judgments? How can the cognitivist explain this? One option is that the experience of negative emotions draws our attention to morally relevant features of a situation (Prinz, 2007). This could certainly explain the results of Schnall et al. (2008) and some of those reported by Wheatley and Haidt (2005). However, it flounders in the face of Wheatley and Haidt's finding, recorded in the same paper and briefly catalogued earlier in this section, that subjects who are hypnotized, such that they experience pangs of disgust upon hearing a neutral trigger word ("often"), negatively morally evaluate even protagonists who are not described as engaging in any morally wrong behaviors. There are no described morally wrong features at all in the scenarios in question. Consequently, it cannot be the case that these subjects are having their attention drawn, by their emotion, to features of the described situation that warrant moral condemnation: There are no such features. At first glance, the cognitivist's explanation appears inadequate.

However, cognitivists can explain these empirical results too. Very plausibly, the perception of a moral wrong, our cognitivist can hold, warrants not just the belief that a moral wrong occurred but also the (moral) emotion of disgust. One common way of expressing disapprobation toward a wrongdoer is to say something like "I'm disgusted by your actions, Jeremy." Now, granting that we frequently experience (warranted) disgust upon learning of some morally wrong deed, and not when learning of morally neutral or praiseworthy actions, it follows that our experiencing disgust at someone's action will be a reliable indicator, by our lights, that said action is morally wrong in one way or another. In this way, then, the cognitivist can explain why a subject's disgust at a described agent's morally neutral action, unknowingly induced by a trigger word, can rationally lead her to form the belief that said agent is acting immorally.

This account also allows the cognitivist to explain the results of various "moral dumbfounding" experiments: experiments that show that people make moral judgments they cannot explain or rationalize. Haidt et al. (2000) asked subjects to morally evaluate cases in which agents are described as engaging in harm-free acts such as consensual incestuous sex or cannibalism. One of the vignettes goes something like this: "Andy and Arianna are adult siblings who decide, after much reflection, to have consensual sex. They use contraception, both really enjoy it, and agree to keep it a secret. Did they do something morally wrong?" A total of 80 percent of subjects judged that their behavior was wrong, but they had great difficulty explaining why. Furthermore, subjects were presented with a decisive counterargument to any justification they gave. For example, some worried that they ran the risk of having deformed inbred
children – but these subjects were then reminded that Andy and Arianna used contraception (or even knew themselves to be infertile). Others worried about effects on the community – but were then reminded that Andy and Arianna kept it a secret known only to themselves. Subjects tended to concede that these counterarguments were successful, but only 17 percent revised their moral judgment. The rest doubled down on their moral judgments and emotions: "Incest is plain nasty"; "It's disgusting"; "It's just wrong," etc. Again, this might be thought to constitute good evidence for noncognitivism about moral judgment: Our moral attitudes seem to bottom out in emotions of disgust, etc.

However, the cognitivist can still explain this through appeal to the proposition that an agent's experience of disgust at some behavior is a reliable indicator, by the lights of that agent, of its being morally wrong. This is consistent with both Andy and Arianna's incestuous sex really being morally wrong and also with the subjects incorrectly believing it to be so. The confidence of cognitivists can also be bolstered by the fact that Royzman et al. (2015) conducted a study attempting to replicate the moral dumbfounding effect. They found that subjects who seemed to exhibit the moral dumbfounding effect nevertheless tended to believe that the incestuous relationship would still cause harm to Andy and Arianna in the future and thus that their relationship was morally wrong on these grounds. (For further evidence that it is beliefs about harm, and not feelings of disgust, that predict moral judgments, see Gray and Schein, 2016, who report that beliefs about harm predict moral judgments 30 times better than feelings of disgust do.) Furthermore, some subjects rejected the notion that wrongs entail harms, and rather held that incest was intrinsically or fundamentally morally wrong, in much the same way as harm is fundamentally morally wrong, and not wrong in virtue of some prior feature. Royzman et al. interpret their results as providing support for the cognitivist thesis that our moral attitudes bottom out in beliefs about morally pertinent features of things, and not in emotions of disgust, etc.

As before, even though particular results may prima facie seem to support one position or the other, the empirical results can be argued to be consistent with the opposing position. Thus, the empirical work has not provided definitive evidence for either cognitivism or noncognitivism.
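The cognitivist's "reliable indicator" move can be made concrete with a simple Bayesian illustration (an illustrative formalization with invented numbers, not one offered in the chapter). If disgust arises far more often in response to wrong actions than to neutral ones, then an episode of disgust licenses a large upward revision of one's credence that something wrong has occurred:

\[
P(\text{wrong} \mid \text{disgust}) = \frac{P(\text{disgust} \mid \text{wrong})\,P(\text{wrong})}{P(\text{disgust} \mid \text{wrong})\,P(\text{wrong}) + P(\text{disgust} \mid \text{not wrong})\,P(\text{not wrong})} = \frac{0.8 \times 0.2}{0.8 \times 0.2 + 0.05 \times 0.8} = 0.8.
\]

On this picture, hypnotically induced disgust misleads precisely because it mimics a signal that is normally reliable.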

3.4 Concluding Remarks

This highly circumscribed review of the philosophy and cognitive science of moral motivation attempts to trace the convoluted lines of argumentation surrounding the major questions of internalism, Humeanism, and cognitivism. As we have illustrated, the nature of moral motivation is a "hinge issue" around which organizing debates in moral psychology and moral philosophy turn. Indeed, the disputes that we focused on – between the internalist and externalist, cognitivist and noncognitivist, Humean and anti-Humean – feature
among the central issues in moral philosophy. There are other debates in moral psychology that connect to, or turn upon, the nature of moral motivation that we could not discuss here. For example, the nature of moral motivation matters to the debate over the possibility of altruism between the psychological egoist – who holds that all motivation to act is ultimately self-regarding – and the psychological altruist – who maintains that at least some motivation is ultimately other-regarding, or that we sometimes act for the sake of others as ends in themselves. We commend to the reader the other chapters in this collection (Chapters 12 and 13), which take up these and other matters.

References Adler, J. (2002). Belief’s own ethics. MIT Press. Aharoni, E., Sinnott-Armstrong, W., & Kiehl, K. A. (2012). Can psychopathic offenders discern moral wrongs? A new look at the moral/conventional distinction. Journal of Abnormal Psychology, 121(2), 484–497. Aharoni, E., Sinnott-Armstrong, W., & Kiehl, K. A. (2014). What’s wrong? Moral understanding in psychopathic offenders. Journal of Research in Personality, 53, 175–181. Anscombe, G. E. (1957). Intention. Harvard University Press. Ayer, A. (1936). Language, truth, and logic. Ryerson Press. Barger, B., & Derryberry, W. P. (2013). Do negative mood states impact moral reasoning? Journal of Moral Education, 42(4), 443–459. Berthoz, S., Artiges, E., Van De Moortele, P.-F., Poline, J.-B., Rouquette, S., Consoli, S. M., & Martinot, J.-L. (2002). Effect of impaired recognition and expression of emotions on frontocingulate cortices: An fMRI study of men with alexithymia. American Journal of Psychiatry, 159(6), 961–967. Björnsson, G., Eriksson, J., Strandberg, C., Olidner, R., & Björklund, F. (2015). Motivational internalism and folk intuitions. Philosophical Psychology, 28(5), 715–734. Blackburn, S. (1998). Ruling passions: A theory of practical reasoning. Oxford University Press. Blair, R. (1995). A cognitive developmental approach to morality: Investigating the psychopath. Cognition, 57, 1–29. Blair, R. (1997). Moral reasoning and the child with psychopathic tendencies. Personality and Individual Differences, 26, 731–739. Boyd, R. (1988). How to be a moral realist. In G. Sayre-McCord (Ed.), Essays on moral realism (pp. 181–228). Cornell University Press. Bratman, M. (1987). Intentions, plans, and practical reason. Harvard University Press. Brink, D. (1997). Moral motivation. Ethics, 108(1), 4–32. Cholbi, M. (2006). Belief attribution and the falsification of motive internalism. Philosophical Psychology, 19(5), 607–616. Copp, D. (2018). Realist-expressivism and the fundamental role of normative belief. Philosophical Studies, 175(6), 1333–1356. Cusimano, C., Thapa, S., & Malle, B. F. (2017). Judgment before emotion: People access moral evaluations faster than affective states. In G. Gunzelmann,
A. Howes, T. Tenbrink, & E. J. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 1848–1853). Cognitive Science Society. Damasio, A., Tranel, D., & Damasio, H. (1990). Individuals with sociopathic behavior caused by frontal damage fail to respond autonomically to social stimuli. Behavioral Brain Research, 41, 91–94. Darwall, S. (1983). Impartial reason. Cornell University Press. Dreier, J. (2015). Another world. In R. N. Smith & M. Johnson (Eds.), Passions and projections: Themes from the philosophy of Simon Blackburn (pp. 155–171). Oxford University Press. Dretske, F. (1988). Explaining behavior: Reasons in a world of causes. MIT Press. Gamez-Djokic, M., & Molden, D. (2016). Beyond affective influences on deontological moral judgment: The role of motivations for prevention in the moral condemnation of harm. Personality and Social Psychology Bulletin, 42(11), 1522–1537. Gawronski, B., Conway, P., Armstrong, J., Friesdorf, R., & Hütter, M. (2018). Effects of incidental emotions on moral dilemma judgments: An analysis using the CNI model. Emotion, 18(7), 989–1008. Gibbard, A. (1990). Wise choices, apt feelings: A theory of normative judgment. Harvard University Press. Gray, K., & Schein, C. (2016). No absolutism here: Harm predicts moral judgment 30 better than disgust–commentary on Scott, Inbar, & Rozin (2016). Perspectives on Psychological Science, 11(3), 325–329. Greene, J. D., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6(12), 517–523. Greene, J. D., Nystrom, L., Engell, A., Darley, J., & Cohen, J. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44, 389–400. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), Article 5537. Haidt, J., Björklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason [Unpublished manuscript. https://pdfs.semanticscholar.org/ d415/e7fa2c2df922dac194441516a509ba5eb7ec.pdf]. University of Virginia. Heekeren, H. R., Wartenburger, I., Schmidt, H., Schwintowski, H.-P., & Villringer, A. (2003). An fMRI study of simple ethical decision-making. NeuroReport, 14(9), 1215–1219. Hieronymi, P. (2005). The wrong kind of reason. Journal of Philosophy, 102(9), 437–457. Holton, R. (2009). Willing, wanting, waiting. Oxford University Press. Joyce, R. (2008). What neuroscience can (and cannot) contribute to metaethics. In W. Sinnot-Armstrong (Ed.), Moral psychology (Vol. 3, pp. 371–394). MIT Press. Kelly, D., Stich, S., Haley, K., Eng, S., & Fessler, D. (2007). Harm, affect, and the moral-conventional distinction. Mind & Language, 22, 117–131. Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the prefrontal cortex increases utilitarian moral judgments. Nature, 446, 908–911. Korsgaard, C. M. (1986). Skepticism about practical reason. Journal of Philosophy, 83(1), 5–25. Korsgaard, C. M. (1996). The sources of normativity. Cambridge University Press.

Landy, J. F., & Goodwin, G. P. (2015). Does incidental disgust amplify moral judgment? A meta-analytic review of experimental evidence. Perspectives on Psychological Science, 10(4), 518–536. Leary, S. (2017a). In defense of practical reasons for belief. Australasian Journal of Philosophy, 95(3), 529–542. Leary, S. (2017b). Defending internalists from acquired sociopaths. Philosophical Psychology, 30(7), 878–895. Lewis, D. (1988). Desire as belief. Mind, 97, 323–332. Lewis, D. (1996). Desire as belief II. Mind, 105, 303–313. Mackie, J. L. (1977). Ethics: Inventing right and wrong. Penguin Books. Marusic, B., & Schwenkler, J. (2018). Intending is believing: A defense of strong cognitivism. Analytic Philosophy, 59(3), 309–340. McNaughton, D. (1988). Moral vision: An introduction to ethics. Basil Blackwell. Moll, J., de Oliveira-Souza, R., Bramati, I. E., & Grafman, J. (2002). Functional networks in emotional moral and nonmoral social judgments. Neuroimage, 16(3, Part A), 696–703. Nagel, T. (1970). The possibility of altruism. Clarendon Press. Nichols, S. (2002). How psychopaths threaten moral rationalism: Is it irrational to be amoral? The Monist, 85(2), 285–304. Nichols, S. (2004). Sentimental rules: On the natural foundations of moral judgment. Oxford University Press. Nucci, L. P. (2001). Education in the moral domain. Cambridge University Press. Nussbaum, M. (2001). Upheavals of thought: The intelligence of emotions. Cambridge University Press. Parfit, D. (2011). On what matters (Vol. 2). Oxford University Press. Phan, K., Wager, T., Taylor, S., & Liberzon, I. (2002). Functional neuroanatomy of emotion: A meta-analysis of emotion activation studies in PET and fMRI. NeuroImage, 16(2), 331–348. Platts, M. (1991). Moral realities: An essay in philosophical psychology. Routledge. Prinz, J. (2007). The emotional construction of morals. Oxford University Press. Prinz, J. (2015). An empirical case for motivational internalism. In C. Strandberg, F. Björklund, G. Björnsson, J. Eriksson, & R. F. Olinder (Eds.), Motivational internalism (pp. 61–84). Oxford University Press. Putnam, H. (1975). The meaning of “meaning.” In H. Putnam, Philosophical papers (pp. 215–271). Cambridge University Press. Railton, P. (1986). Moral realism. Philosophical Review, 95(2), 163–207. Roskies, A. (2003). Are ethical judgments intrinsically motivational? Lessons from ‘acquired sociopathy.’ Philosophical Psychology, 16(1), 51–65. Ross, J. (2009). How to be a cognitivist about practical reason. Oxford Studies in MetaEthics, 4, 243–281. Royzman, E. B., Kim, K., & Leeman, R. F. (2015). The curious tale of Julie and Mark: Unraveling the moral dumbfounding effect. Judgment and Decision Making, 10(4), 296–313. Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755–1758. Saver, J., & Damasio, A. (1991). Preserved access and processing of social knowledge in a patient with acquired sociopathy due to ventromedial frontal damage. Neuropsychologia, 29, 1241–1249.

Scanlon, T. (2013). Being realistic about reasons. Oxford University Press. Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34(8), 1096–1109. Schroeder, M. (2007). Slaves of the passions. Oxford University Press. Schroeder, T. (2004). Three faces of desire. Oxford University Press. Schroeder, T., Roskies, A., & Nichols, S. (2010). Moral motivation. In J. Doris and The Moral Psychology Research Group (Eds.), The moral psychology handbook (pp. 72–110). Oxford University Press. Schueler, G. F. (2009). The Humean theory of motivation rejected. Philosophy and Phenomenological Research, 78, 103–122. Setiya, K. (2008). Practical knowledge. Ethics, 118(3), 388–409. Shafer-Landau, R. (2003). Moral realism: A defense. Oxford University Press. Shah, N. (2006). A new argument for evidentialism. Philosophical Quarterly, 56(225), 481–498. Sinhababu, N. (2009). The Humean theory of motivation reformulated and defended. Philosophical Review, 118, 465–500. Smetana, J. G. (1981). Preschool children’s conceptions of moral and social rules. Child Development, 52, 1333–1336. Smith, M. (1987). The Humean theory of motivation. Mind, 96, 36–61. Smith, M. (1994). The moral problem. Wiley-Blackwell. Smith, M. (2007). The truth about internalism. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 3. The neuroscience of morality: Emotion, brain disorders, and development (pp. 207–215). MIT Press. Solomon, R. C. (1976). The passions: The myth and nature of human emotion. Anchor. Stahl, T., Zaal, M. P., & Skitka, L. J. (2016). Moralized rationality: Relying on logic and evidence in the formation and evaluation of belief can be seen as a moral issue. PLoS ONE, 11(11), Article e0166332. Strandberg, C., & Björklund, F. (2013). Is moral internalism supported by folk intuitions? Philosophical Psychology, 26(3), 319–335. Turiel, E. (1983). The development of social knowledge: Morality and convention. Cambridge University Press. van Roojen, M. (2018). Moral cognitivism vs. non-cognitivism. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy (Fall 2018 ed.). https://plato.stanford.edu/ archives/fall2018/entries/moral-cognitivism. Velleman, J. D. (1989). Practical reflection. Chicago University Press. Wallace, R. J. (2006). Moral motivation. In J. Dreier (Ed.), Contemporary debates in moral theory (pp. 182–195). Blackwell. Way, J. (2016). Two arguments for evidentialism. Philosophical Quarterly, 66(265), 805–818. Wedgwood, R. (2007). The nature of normativity. Clarendon Press. Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10), 780–784. Williamson, T. (2007). The philosophy of philosophy. Blackwell.


4 Norms: Inference and Interventions

Giulia Andrighetto and Eva Vriens

Social norms, or allusions to norms, have shaped collective behaviors and religious practices in human societies for at least 10–12 millennia (Norenzayan et al., 2016). They are so ingrained in repeated social interactions that some argue that in their most basic form (without the cognitive capacities required to understand reasons for action), social norms even shape behaviors in ape societies (Andrews, 2020; von Rohr et al., 2011). In human societies, social norms are informal, unwritten social rules that govern many (standard) interactions and practices, both good and bad, ranging from disapproval of spitting and littering on the streets, to painful initiation rites, or to how we informally solve disputes. In our complex societies, many norms are at play at the same time, relevant in different contexts; they may sometimes complement and at other times contradict each other (Tankard & Paluck, 2016). They are a key mechanism for the maintenance of social order (Nyborg et al., 2016), and the emergence and spread of new social norms have contributed in a crucial way to, for example, the reduction of smoking in public places (Nyborg & Rege, 2003), the increase in eco-friendly behavior (Jachimowicz et al., 2018; Sparkman & Walton, 2017), and the change in food consumption and eating behavior (Higgs et al., 2019).

Social norms have been studied extensively to explain collective human behavior in several disciplines, including philosophy (Bicchieri, 2006; Elster, 1989; Sripada & Stich, 2006; Tummolini et al., 2013; Ullmann-Margalit, 1977), sociology (Hechter & Opp, 2001; Horne & Mollborn, 2020; Przepiorka et al., 2022; Szekely et al., 2021), social and moral psychology (Cialdini et al., 1990; Heyes, 2023; Lapinski, 2005; Lindenberg & Steg, 2007; Miller & Prentice, 1994), cultural psychology (Gelfand et al., 2011), law (Posner, 1999), economics (Binmore, 2010; Fehr & Schurtenberger, 2018; Gintis, 2010; Nyborg et al., 2016), political science (Giuliano & Nunn, 2021; Ostrom, 2000), anthropology (Chudek & Henrich, 2011; Ensminger & Henrich, 2014), (evolutionary) biology (Gavrilets & Richerson, 2017; Richerson & Boyd, 2005; Strimling et al., 2018; Tverskoi et al., 2023), computational social sciences (Centola et al., 2018; Conte et al., 2014), and robotics (Malle & Scheutz, 2019). This cross-disciplinary interest in social norms resulted in a large variety of definitions and several, sometimes conflicting, theories about what norms are.

One distinction is whether social norms are an individual or collective construct. Theories of norms as individual constructs consider them as the beliefs of an individual about what is common (what people do in situation X) and approved
(the extent to which people approve of those who do Y in situation X) in a given group or society. Theories of social norms as collective constructs see them as external, meso- or macro-level forces influencing people's decisions. As pointed out by Legros and Cislaghi (2020), both approaches have their benefits. Using norms as individual constructs has advantages when identifying, for example, behavioral-change interventions; whereas theories that define norms as collective constructs will be helpful to researchers investigating how norms operate and evolve over time at the population level.

Scholars have also debated how social norms relate to similar concepts such as legal and moral norms. Unlike legal norms, social norms are not codified into laws and do not have specified enforcers (e.g., the police); rather, they are socially required. Social norms and legal norms can stand in opposition, but they can also be consistent, and in this case their combined effect is stronger in guiding people's actions. There is less consensus around the distinction between social and moral norms. Turiel (1983), for example, distinguishes "moral" from "conventional/social" norms in the way people assess the validity of norms. According to this theory, moral norms are mainly about harm, rights, and injustice. Their transgression is deemed more serious than the transgression of conventions, and they are considered valid independent of context. By contrast, conventional norms can have pretty much any content. Their transgression is deemed less serious, and their validity depends on context. Bicchieri and Elster instead do not distinguish between social and moral norms based on their content. Bicchieri (2006) differentiates them on the basis of the preference that supports compliance. Moral norms are those that are followed unconditionally, whereas social norms are followed conditionally upon the satisfaction of normative and empirical expectations about other people. Elster (2015) focuses on the emotional mechanisms underlying compliance with norms. The emotion sustaining social norms is shame, while the one sustaining moral norms is guilt. Malle (2023) argues that social and moral norms differ merely in their locations on certain dimensions, including how community-specific and context-specific they are, and how strong the norm is (i.e., their deontic force). While in theory a distinction can be traced, we believe that the lines between moral and social norms might be difficult to draw in practice, and specific cases might represent a mixture of different types.

In this chapter, when we talk about social norms, the content of the norm may tap into moral domains. However, we choose the label "social" because our interest is in how people perceive a norm based on what they infer about others (both in their behaviors and beliefs) and how their idea about a social norm may be changed through interventions that target these inferences. In line with Bicchieri (2006), this approach to norms is social in the sense that norm interventions may target people's expectations about others, not one's personal convictions or beliefs. That is, if people are motivated to comply with the social norm not only by personal normative beliefs or internalized values, but also by a preference to do what others do and think should be done, norm-based interventions may generate a change in
behavior by modifying people’s social expectations – which is arguably much easier than changing their internalized norms and personal values (Nyborg et al., 2016, Paluck, 2009). In this chapter, we present the main recent conceptual and methodological advances in the study of social norms, highlighting developments in the definition and measurement of social norms, in insights about internal processes of norm inference, and methods of social norm enforcement. We proceed to present a two-step approach of social norm diagnosis and interventions that integrates norm enforcement, how norms can be measured, and how they can be changed. We advocate for such an integrated approach because without understanding how a social norm is inferred (i.e., the features that humans use to perceive social norms and create social expectations) we cannot tell whether a social norm is salient, how (the perception of ) this norm evolves over time, or whether it causally influences decision making. As a consequence, any intervention to strengthen social norms would target perceptions of the norm blindly and risks either being ineffective or having undesired and unforeseen consequences on the behavior this intervention aims to change.

4.1 Social Norms: From Definition to Measurement

Social norms are largely studied independently across different disciplines. While interdisciplinary collaborations are increasing, there is no broadly shared consensus about the features that characterize social norms. Here we do not attempt an exhaustive review of all characteristics of norms (for a map of the literature, see Legros & Cislaghi, 2020). Instead, we focus our discussion on definitions that make it possible to explicitly measure social norms empirically.

Social norms researchers across disciplines generally agree that social norms capture shared expectations about how people ought to behave (Cialdini et al., 1990; Elster, 2015; Fehr & Schurtenberger, 2018; Horne & Mollborn, 2020; Legros & Cislaghi, 2020; Ostrom, 2000). Moreover, social norms comprise not only expectations or evaluations about a behavior but also shared beliefs that violations are punished and sanctions are socially accepted. Norms can be prescriptive or proscriptive: Prescriptions encourage or dictate a specific behavior, while proscriptions forbid the behavior (Horne & Mollborn, 2020). Other classifications of (collective) behavior are habits (e.g., brushing teeth in the morning), recurrent behavioral patterns (e.g., people putting up umbrellas simultaneously when it rains), and moral norms (e.g., do not kill others). Unlike social norms, these concepts explain behavior in terms of unconditional, nonsocial motives. Instead, social norms exist when people have expectations about how others evaluate behaviors and about others' reactions if these behaviors are not observed (Horne & Mollborn, 2020). While this implies that people follow social norms conditional on what others do and believe, that does not mean that social norms cannot be internalized and become values that are followed as an
end in themselves, without the need for external enforcement (Etzioni, 2000; Horne & Mollborn, 2020; Malle, 2023; Villatoro et al., 2015).

Social norms have a descriptive component (how most people tend to act) and an injunctive component (standards or guides about what others approve or disapprove of) (Cialdini & Trost, 1998). Ultimately, they depend on empirical and/or normative expectations about a relevant population – the reference group (Bicchieri, 2006, 2017). Early social norms research considered these components as two different types of social norms that are often complementary but may also be in conflict with each other (Cialdini et al., 1990). Nowadays, they are often considered to go hand in hand – either by saying that a norm exists only if both empirical expectations and normative expectations are present, or by seeing empirical expectations as the perception of compliance with what is pre- or proscribed by the normative expectations. Hence, social norms are multilayered constructs capturing both descriptive and injunctive components and reflecting both collective (social structures) and individual aspects (beliefs and expectations).

But how can we know if a social norm exists? The decomposition of aggregate social norms into individual-level social expectations has been a major step forward for empirical research on norms. These micro-level definitions enable explicit diagnosis of the existence of norms and their causal effect on behavior (Andrighetto & Vriens, 2022). The most widely applied micro-level translation of social norms comes from Bicchieri (2006). She defines social norms as behavioral rules that people comply with conditional on the double expectation that (i) a critical part of a population does so as well (empirical expectations) and (ii) a critical part of the population believes that people ought to conform to this rule and is willing to sanction transgressions (normative expectations). Empirically measuring whether there is a shared consensus in these expectations enables assessments of whether an established behavioral pattern represents a social norm (Bicchieri, 2006; Krupka & Weber, 2013).

Methodological innovations, such as large-scale, interactive behavioral experiments, agent-based simulations, and computational models, have also taken advantage of these new, measurable definitions of social norms and have opened up possibilities to measure and predict not just the presence of social norms, but also their evolution over time through constant feedback processes between macro-level social norms and micro-level social expectations and decision making (Conte et al., 2014). This resulted in an explosion of studies in a large range of domains that study, for instance, the evolution of social and cultural norms (Gelfand, 2018; Gelfand et al., 2011), the sudden shift toward widespread social norm creation, erosion, or change (Bicchieri et al., 2022; Bicchieri & Funcke, 2018; Centola et al., 2018; Przepiorka et al., 2022; Szekely et al., 2021), and the effectiveness of norm-based interventions to engineer desired behavioral change (Constantino et al., 2022; McAlaney et al., 2010; Prentice & Paluck, 2020).
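To see how such a micro-level definition can be operationalized, consider the minimal sketch below. It is illustrative only: the survey items, the 0/1 coding, and the 50 percent cutoff standing in for the "critical part" of the population are assumptions made for this example, not measurement standards from the literature.

# Illustrative check of Bicchieri-style conditions for a social norm, using
# survey responses from one reference group. The 0.5 cutoff standing in for
# the "critical part" of the population is an assumption for this example.

def diagnose_pattern(empirical, normative, critical_share=0.5):
    """empirical: 0/1 answers to "do you expect most others to comply?"
    normative: 0/1 answers to "do you expect others to think one ought to comply?"
    """
    emp = sum(empirical) / len(empirical)
    norm = sum(normative) / len(normative)
    if emp >= critical_share and norm >= critical_share:
        return "social norm: empirical and normative expectations are both widespread"
    if emp >= critical_share:
        return "descriptive regularity: empirical expectations only"
    return "no shared social expectations detected"

# Toy data: 8 of 10 respondents hold empirical expectations, 7 of 10 normative ones.
print(diagnose_pattern([1] * 8 + [0] * 2, [1] * 7 + [0] * 3))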

4.2 Social Norm Inference

If we follow the definition that people comply with social norms conditional on specific social expectations, the question remains how people create and formulate these social expectations. An understanding of such social norm inference requires a recognition of how macro-level properties of norms are translated into micro-level interpretations. In general, there is little attention to the complexities that come with analyzing how macro-level emergent phenomena can act back upon micro-level beliefs and actions and possibly change them (but see Castelfranchi, 1998; Conte et al., 2014 for an analysis of this downward process referred to as "immergence"). Understanding how macro-level social norms are inferred on the micro level would also help to better detail how micro-level beliefs and decisions transform into macro-level outcomes and where and how interventions should be targeted to strengthen social norms and their effect on macro-level behavioral outcomes.

One interpretation of how social norms are inferred follows directly from micro-level definitions of social norms such as the one of Bicchieri (2006). It states that a social norm can be inferred conditional on the empirical and normative expectations that a critical subset of the population complies with the norm and thinks that one should. However, for norms "in the wild," the application of this interpretation of social norms is far from straightforward. It matters greatly who is considered to be the relevant population for a norm and how many of them are needed to form a critical subset. Moreover, even if one were able to give a total or an average number of others, this information by itself may not be sufficient: Whether this is enough to follow the norm will depend greatly on whether the remaining others all follow another norm (competition) or follow a large variety of different behavioral strategies.

To illustrate, think of a social norm of sustainable consumption (such as purchasing environmentally safe products, recycling household waste, eating locally produced food, and buying clothes made from organic materials; Pristl et al., 2021). Sustainable consumption is a way to reduce the emission of greenhouse gases, serving the global goal of limiting increases in the average temperature and reducing the risk of dangerous climate change. Yet while sustainable consumption serves a global goal, if social norms were to play a role in people's motivation to engage in more sustainable consumption patterns and people did indeed form social expectations about the behavior and beliefs of others, these expectations are unlikely to be formed on the basis of what people all over the world do and believe. Still, even if we agree that the relevant social expectations are formed locally, whether the population reference group is composed of people living in the same country, in the same city, or of people with the same political preferences significantly changes the empirical and normative expectations. Finally, this sustainable consumption norm is likely contrasted by another (dominant) norm of unsustainable consumption. It matters not only how many people are expected to follow the sustainable norm, but also how dominant the alternative is, or how many alternatives there are.

Hence, for social norms in the wild there is a large heterogeneity in what people use as their relevant social reference category (Bicchieri, 2017). This is based on the idea that when social norms play a role, people act on the basis of "social proof": They determine appropriate behavior for themselves in a situation based on how others, especially similar others, behave (Cialdini et al., 1999). Social proof is an ongoing, dynamic process guided by the behaviors and opinions people observe and generate within their social groups (van Kleef et al., 2019).

Heterogeneity comes into play not just with regard to who people take as their social reference category. There is diversity also with respect to the critical part needed for a norm to be considered salient. Some people might consider a social norm salient only if they expect a large majority to follow this norm or think that people should. Others (e.g., those with personal normative beliefs that align with the social norm) might perceive a norm as salient already when only a small group of people complies with it. That is, the critical part of the reference population does not necessarily need to be a majority for people to comply with a social norm. The threshold at which the mass of people already complying with a certain norm is considered large enough differs from person to person (Centola et al., 2018; Granovetter, 1978).

Another source of complexity in social norm inference lies in how social information about others is obtained. As a general rule, people have incomplete information and sometimes even biased views on how big the share of the population is that follows a certain social rule (Tankard & Paluck, 2016). This has to do with how they perceive inputs from their environment to create their social expectations. In particular, individuals infer social norms by observing other individuals' public behavior (Andrighetto & Castelfranchi, 2013). If they observe others complying, this brings the norm "into focus" (Cialdini et al., 1990). Acts of compliance with the norm remind people of the social norm, demonstrate that the norm is relevant to the immediate situation or context, and as such promote compliance and spread. How easy it is to get these cues from the environment depends on the social norm in question. During the COVID-19 pandemic, for instance, it was easier to infer the social norm of mask wearing than that of vaccination. Mask wearing is a publicly visible behavior, so seeing many others masking in public transport increases the salience of the norm of mask wearing and the probability that an individual will choose to do so as well. Vaccination, by contrast, is a private decision, so other than official statistics about vaccination rates people had few social cues about vaccination acceptance in their direct environment (Vriens et al., 2023).

Several factors may facilitate and, in some cases, even amplify the recognition of the social norm. For example, certain individuals, called social referents (Over & Carpenter, 2012; Paluck et al., 2016; Rogers, 2003), are particularly influential over others' perceptions of social norms. The salience of social referents derives from their personal connections to the perceiver, and their number of connections throughout the group (Paluck & Shepherd, 2012). However, social referents are not necessarily people in central network positions
or people with many connections in a social network (e.g., leaders, social influencers, or people high in status). In fact, since social norms require multiple social cues to be picked up or to generate the expectations that they are broadly followed, they could be considered an example of complex contagion (Centola & Macy, 2007). Hence, depending on the context or the type of social norm, the social referents might rather be located in peripheral, clustered network positions (Centola, 2021a, 2021b).

Social cues do not come only from other people; sometimes cues can also be provided by the environment. Cialdini et al. (1990) conducted a series of studies in which they show the importance of observing other individuals' norm-compliant behavior in making social norms focal. In these experiments, they manipulated whether participants were in an environment where littering was normative or not, and then manipulated norm salience by having a confederate litter in front of people passing by (the study participants). In environments with large amounts of trash on the ground (suggesting that littering is normative in that environment), the confederate's littering in front of the participant increased the participant's own likelihood of engaging in (norm-consistent) littering. The same effect did not occur when the environment was clean, for this environment signaled that littering is counter-normative. It thus seems that minimal cues suffice to make people perceive norms (e.g., "do not litter") as irrelevant or less salient, at least for that specific context. Keizer et al. (2008) likewise found in several field experiments that when the environment provides cues that certain social norms are violated (e.g., observing graffiti or wrongly parked vehicles), people were more likely to violate other norms or rules.

When it comes to social norm inference, there may thus be an asymmetry bias between observing compliance and violation. While relatively high compliance rates are needed before people are ready to follow a social norm, minimal signs of violations are enough to demotivate compliance – or to motivate people to defect or free ride instead (Bicchieri et al., 2022; Diekmann et al., 2015; Dimant, 2019). Hence, while there is to date hardly any systematic research on which (linguistic and nonlinguistic) features people use to infer the existence of a social norm and whether they perceive it as salient and strong, norm inference is unlikely to be linearly related to the extent of compliance. Rather than merely considering the average or relative compliance rate as an indicator for the presence of a norm, people also strongly consider violations, the (absence of) punishment of violations, and the distribution of alternative norms before deciding whether the social norm of interest is salient to them in a particular context.

Incomplete social information and asymmetric influence of observing compliance and violation make inaccuracies in the inference of social norms unavoidable. In some cases, however, inaccuracies are more extreme and the perception of the salient social norm may go against the actual behavior and/or normative beliefs of the majority of the population. One such situation is when a social norm suffers from false consensus bias. False consensus bias occurs when people overestimate how many others behave and think similarly to them
(Ross et al., 1977). A norm that suffers from false consensus bias could appear stronger or, alternatively, weaker than it really is. False consensus is a general tendency across all social groups but tends to be strong particularly for minority groups holding counter-normative views. For instance, climate change deniers in Australia were found to largely overestimate the number of climate change deniers (Leviston et al., 2013). Similarly, following the 2020 US election, the minority of Trump voters that did not support the outcome of the election wrongly perceived that only a minority of Trump voters did support the outcome and the political norm of a peaceful transfer of power (Weinschenk et al., 2021). In Italy, vaccine-hesitant people more strongly misperceived and underestimated the degree of vaccine acceptance during the COVID-19 vaccination campaign (Vriens et al., 2023). Another common bias in norm (mis)perception is pluralistic ignorance, where people have incorrect normative expectations (Miller & McFarland, 1987). In situations of pluralistic ignorance, people privately reject a social norm (their personal normative beliefs go against it), but they conform to the norm because they wrongly believe that most others do support it. Examples of pluralistic ignorance are high rates of alcohol consumption across college students despite most of them privately preferring to drink less (Prentice & Miller, 1993; Schroeder & Prentice, 1998), the low participation of women in the workforce despite the vast majority of men supporting women working outside the home (Bursztyn et al., 2020), the large spread of fake news sharing on social media platforms due to the misperception that they express the opinions of many (Castioni et al., 2022), and people concerned about climate change silencing themselves because they wrongly believe that most others do not share their climate concerns (Geiger & Swim, 2016). In conclusion, many factors influence how people (correctly or incorrectly) infer the existence of a social norm. They form social expectations about the prevalence as well as the appropriateness of a social norm based on social cues in their network or in the broader environment. While a norm is more likely to be internalized and to guide behavior the higher the (observed or perceived) compliance rate, the impact of observing norm violations, overlapping or competing norms, and misperceptions of the spread or appropriateness of a social norm should not be overlooked. Research on norm inference should consider indicators of both prevalence and strength and acknowledge population-wide heterogeneity.
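Person-to-person differences in these compliance thresholds can be illustrated with a toy Granovetter-style simulation (not a reproduction of any cited study; the threshold distributions and the seed share below are invented). Each agent complies once the share of compliers they observe meets their personal threshold, so the same small seed group can cascade to full compliance or stall, depending on how the thresholds are distributed:

# Toy Granovetter-style threshold dynamics: each agent complies once the
# observed share of compliers reaches the agent's personal threshold.
# Thresholds and the seed share are invented for illustration.

def cascade(thresholds, seed_share):
    """thresholds: per-agent adoption thresholds in [0, 1].
    seed_share: initial share of unconditional compliers.
    Returns the final share of compliers."""
    share = seed_share
    while True:
        new_share = max(seed_share,
                        sum(t <= share for t in thresholds) / len(thresholds))
        if new_share == share:
            return share
        share = new_share

uniform = [i / 100 for i in range(100)]   # thresholds spread evenly from 0.00 to 0.99
clustered = [0.4] * 100                   # everyone waits until 40 percent comply
print(cascade(uniform, 0.05))     # cascades to full compliance (1.0)
print(cascade(clustered, 0.05))   # stalls at the seed share (0.05)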

4.3 Social Norm Enforcement

So far, we have discussed norm inference from indirect cues, but social norms can also be enforced explicitly to increase their perceived salience (see Malle, Chapter 15 in this volume). Examples of social norm enforcement are punishment (either by peers or institutions), social institutions transmitting normative signals, or providing information about the opinions or behaviors
of others. Peer punishment, particularly, is inherent to the concept of social norms. When a strong social norm is in place, people do not only expect others to comply but also that violations of the norm will be punished (Coleman, 1990). These expectations are sometimes interpreted as part of normative expectations (Bicchieri, 2006; Cialdini et al., 1990). Others refer to “meta norms” of norm enforcement to describe the type and severity of appropriate social sanctions for a given norm violation (Axelrod, 1986; Eriksson et al., 2021; Horne & Mollborn, 2020). Regardless of the classification, social sanctions are seen as a means to strengthen the social norm and promote norm compliance (Fehr & Gächter, 2002; Nikiforakis, 2008). As such, sanctions are communicative acts through which norms are transmitted and enforced (Andrighetto et al., 2013). Being sanctioned (or observing someone else being sanctioned) for violating a norm is a strong indicator of a social norm. Sanctions can take many different forms. Direct sanctions would include, for instance, verbal or physical confrontation or imposing monetary fines. Norm violators may also be sanctioned indirectly, for instance through gossip, by distancing oneself from the deviant individual, or by ostracizing the deviant individual from the social group (Balafoutas et al., 2014; Feinberg et al., 2014; Molho et al., 2020). In the light of norm enforcement, the main motivation of sanctioning is to deter future transgressions (Fehr & Gächter, 2000; Van Miltenburg et al., 2014). However, when people engage in sanctioning it is often also an emotional response, for example resulting from anger, perceived injustice, unfairness, or disgust (Chaurand & Brauer, 2008; Molho et al., 2020). Given these alternative motivations, sanctioning alone may not be a clear enough signal to make the sanctioned party interpret their own behavior as a violation of a social norm. It may be necessary to combine the sanctioning act with normative information such as “one should not behave in this way” or “you shouldn’t have done it” to highlight the legitimacy of the sanctioning behavior (Andrighetto et al., 2013; Janssen et al., 2010; Masclet et al., 2003; Ostrom et al., 1992; Villatoro et al., 2014; Xiao & Houser, 2011). If, for example, those who punish a norm violation are people who engage in virtuous behavior (Faillo et al., 2013), people whose personal interests are not directly involved (as in the case of third-party punishment; Fehr & Fischbacher, 2004; Rabellino et al., 2016), or people who organize through group punishment (Boyd et al., 2010; Villatoro et al., 2014), it is more probable that these punishing acts will be interpreted as a reaction to a norm violation rather than an idiosyncratic, or personally motivated, action. While sanctions may have spillover effects to others observing that norm violations will be punished, their effect is mostly local – directed at changing the behavior of a specific (norm-violating) individual. A way to more broadly focus people’s attention on the existence of a social norm is via signals transmitted by social institutions that govern, educate, or organize a reference group and their social interactions, such as governments, schools, and the mass media (Cullis et al., 2012; Getzels & Guba, 1957; Silverblatt, 2004). An institution’s policy can signal which behaviors or opinions are common or desirable in a group.
Through new laws or prohibitions, institutions can also signal or promote social norm change (Andrighetto & Vriens, 2022). For example, smoking bans in public places can alter perceptions of the social norms, making people believe that smoking is becoming less common and less socially accepted (Procter-Scherdtel & Collins, 2013). Additionally, governments passing laws allowing same-sex marriages signal social norms promoting equal rights and condemning antigay biases (Ofosu et al., 2019). Institutions could also directly manipulate what behavior people observe. In online forums, for instance, censoring hate speech messages was found to significantly reduce hate speech of users in general (Álvarez-Benjumea & Winter, 2018).

Finally, receiving direct information about the opinions or behaviors of people in a group has a strong impact in changing perceptions of group norms. In the simplest form that means informing people that most others are engaging in a certain behavior or that most others approve of this behavior, in order to encourage them to do the same (Prentice & Paluck, 2020). People were found to be more likely to pay taxes, for instance, if they believe that other citizens, and in particular their friends, pay their taxes as well (Traxler, 2010). General summary information has also been used to reduce unhealthy behaviors such as binge drinking and drug use, settings where the norm suffers from pluralistic ignorance (Schroeder & Prentice, 1998). Likewise, descriptive norm messages that a large majority of the population plans to take a vaccine against COVID-19 have been found to increase vaccination intentions (Sinclair & Agerström, 2021).

It is important to highlight potential limitations of providing summary information about others. This information can backfire if individuals are outperforming the norm or already perceive the norm to be more widely complied with than the summary information dictates. For example, Schultz and colleagues provided information about neighbors' electricity use to reduce overall electricity use in a neighborhood. Contrary to their expectation, their norm-based intervention backfired because the low-electricity users increased their electricity consumption after learning that their neighbors used more electricity (Schultz et al., 2007). Bicchieri and colleagues found that providing summary information about the behavior of anonymous strangers backfires when the information reveals that noncompliance is not uncommon, as examples of deviance dominate examples of compliance. Instead, when participants were given summary information indicating that those who comply are similar to them (social proximity), participants complied with the social norm and the erosion of compliance was limited (Bicchieri et al., 2022). These findings highlight the importance of accounting for the broader social context when examining social norm compliance and how to promote or enforce it.
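In the Schultz et al. (2007) study, adding an injunctive cue of approval for households already below the average eliminated this boomerang effect. The sketch below illustrates that conditioning logic only; the message wording and the consumption figures are invented for the example.

# Illustrative tailoring of normative feedback to avoid the boomerang effect:
# households already below the group average receive an approval cue instead
# of a bare descriptive message. Wording and data are invented.

def feedback_message(own_kwh, neighborhood_mean_kwh):
    if own_kwh > neighborhood_mean_kwh:
        return (f"You used {own_kwh} kWh; the neighborhood average was "
                f"{neighborhood_mean_kwh} kWh. Most of your neighbors use less.")
    # Below-average users: add an injunctive cue so the descriptive
    # information does not license higher consumption.
    return (f"You used {own_kwh} kWh; the neighborhood average was "
            f"{neighborhood_mean_kwh} kWh. Great - keep it up! :)")

print(feedback_message(720, 610))
print(feedback_message(480, 610))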

4.4 A Toolkit for Social Norm Diagnosis and Interventions

Because perceptions determine how individuals infer a social norm, interventions designed to change these perceptions may promote norm
compliance and generate behavioral change (Tankard & Paluck, 2016). However, without knowledge of how a social norm is perceived, interventions that aim to change these perceptions may well be ineffective or even have undesired consequences. To increase effectiveness, social norm interventions should be designed and implemented following a two-step approach. In the first stage (diagnosis), it is established whether a social norm is in place for the target behavior (e.g., it rests on empirical and normative expectations), how strongly this norm is perceived, and whether it suffers from biases or misperceptions. In the second stage (intervention), the diagnosed norm is altered or strengthened (Constantino et al., 2022; McAlaney et al., 2010; Schimmelpfennig et al., 2021).

An accurate diagnosis of the target behavior, which implies measuring personal behaviors, personal normative views, and associated empirical and normative expectations (Bicchieri et al., 2014), makes it possible to identify whether the behavior represents a social norm. If the behavior to change is indeed guided by a social norm, it is important to target both normative and empirical expectations. If, instead, the behavior appears to follow social conventions, it may be sufficient to target empirical expectations only. If there are no or weak social expectations related to the target behavior, norm interventions may be ineffective, because the behavior is not driven by social motivations. Yet even if the target behavior is socially driven, providing the wrong social information may cause norm-based interventions to backfire, for instance because noncompliers radicalize further or because compliers become less willing to comply. Hence, when designing a norm-based intervention it is important to know what social expectations are in place, what social reference group they are based on, whether the expectations are accurate, and whether they are broadly shared.
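This two-step logic can be summarized as a simple decision rule. The function below is schematic shorthand for the recommendations above; the diagnosis labels and message categories are illustrative assumptions, not validated criteria.

# Schematic decision logic for the two-step (diagnose, then intervene) approach.
# The labels are illustrative shorthand, not validated instruments.

def choose_intervention(diagnosis, expectations_accurate):
    """diagnosis: "social norm", "convention", or "no shared expectations"
    (e.g., the outcome of a prior diagnosis step).
    expectations_accurate: whether elicited expectations match actual behavior and beliefs."""
    if diagnosis == "no shared expectations":
        return "norm-based messages likely ineffective: behavior is not socially driven"
    if not expectations_accurate:
        return "norm-correction messages: disclose true compliance rates and beliefs"
    if diagnosis == "social norm":
        return "target both empirical and normative expectations"
    return "target empirical expectations only"

print(choose_intervention("convention", True))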

4.4.1 Social Norm Diagnosis

When diagnosing a social norm, the first step is to define the norm and its scope. Social norms may be very concrete and provide behavioral instructions in a specific context, such as "all adolescents should get vaccinated against HPV" or "always turn off heating when you are not at home" (Sparkman et al., 2020). Norms can also be abstract and cover a broad range of social interactions, for example "people should share their resources equally" or "one should do no harm" (Lindenberg, 2008). Moreover, social norms can be stable or dynamic in time (Andrighetto & Vriens, 2022). If a social norm is expected to be stable, a single measurement would suffice for diagnosis. However, if the norm is expected to be changing (e.g., because the norm of interest is still emerging or appears to be eroding), norm diagnosis requires repeated measurements to identify the internal evolution of norm recognition, strength, and compliance (see Szekely et al., 2021 for an example of how norm emergence and change is tracked over time in an online behavioral experiment and Vriens et al., 2023 for an example of how the creation of a social norm of vaccination was tracked throughout the COVID-19 vaccination campaign).

Second, one should identify the population for which the social norm applies and their relevant social reference category. The population of this reference category should be large enough for the behavioral rule to represent a norm (e.g., not just one's family), yet small enough to make it possible for an individual to obtain social cues and formulate social expectations. Moreover, the reference category should identify a population that the individual associates with and therefore likely influences their beliefs and choices through processes of social proof (Cialdini et al., 1999; van Kleef et al., 2019). Note that depending on the context different subpopulations may serve as reference categories for different people.

Once the norm is defined and the reference category is identified, people's social expectations can be extracted. To extract social expectations, belief-elicitation methods that were originally designed for controlled laboratory settings may be applied. Bicchieri and Chavez (2010), for instance, use a two-step elicitation method that recovers both first-order and second-order beliefs for the action sets available to proposers in the Ultimatum Game (UG). First, responders in the UG judge the fairness of each proposer action available in the UG (first-order beliefs). Then, both proposers and responders are paid to guess how responders judged the fairness of these actions (normative expectations or second-order beliefs). The more the normative beliefs of the two player types align, the more social expectations are shared and the stronger the social norm. Similarly, Krupka and Weber (2013) measured social norms in a Dictator Game (DG). They ask participants to rate all actions available in the DG in terms of the estimated social appropriateness as judged by the other participants. Specifically, they are asked to guess the modal social appropriateness of each action as reported by all others. The more the social appropriateness of an action is jointly recognized, or collectively perceived, by all participants, the stronger the social norm in place. For social norms in the wild, an application of these belief-elicitation methods requires mapping all (relevant) behavioral rules or strategies in place to find which rule is collectively considered the most appropriate and thus potentially signals the social norm.

Both elicitation methods measure normative expectations by asking people what they think the modal or average participant believes. However, to agree that a social norm is in place it is not just the average expectations that matter. Social norm diagnosis also involves diagnosing whether there are alternative (complementing or conflicting) social rules in place and how salient the rule that forms the social norm is with respect to its alternatives. In the method of Krupka and Weber (2013), for instance, the action with the highest average appropriateness can be interpreted as the social norm. However, the bigger the relative appropriateness of this strategy compared to the other available strategies, the stronger the norm. Szekely et al. (2021), likewise, propose a measure of social norm strength that includes the specificity, the consistency, and the accuracy of a given norm across a range of possible behaviors. They consider a social norm to be stronger when the range of appropriate actions is smaller (specificity), when the social expectations of all group members align more – that is, there is
agreement about what people expect others to do and to believe (consistency), and when the social expectations more accurately represent actual behavior and personal normative beliefs (accuracy). Such norm strength measurements give insight into the macro-level role of norms. Individual heterogeneity in the sensitivity to (strong) social norms may be revealed through measures like the rule-following task (Kimbrough & Vostroknutov, 2018). The rule-following task has people decide how to distribute a fixed number of balls between a yellow and a blue basket. The rule is to put the balls in the blue basket, but people earn more by putting the balls in the yellow basket. The more balls people put in the blue basket, the more sensitive they are to social norms. Taken together, the belief-elicitation methods and the rule-following task provide insight into the presence of a norm, its strength, and the propensity of people to follow it (Gross & Vostroknutov, 2022).

By using one or several of these measures, a social norm diagnosis can reveal which aspects of a social norm (or the social expectations about this norm) an intervention should target to strengthen the social norm and/or promote behavioral change. For instance, if the reported behavior or personal normative beliefs differ from the empirical or normative expectations, people do not accurately infer the social norm. This might happen, for instance, when the social norm guides decisions made privately, meaning that people observe limited social cues in their environment. Alternatively, private normative beliefs may have changed over time, such that they no longer align with the collective idea about the prevailing social norm. Additionally, the diagnosis might indicate that there is no single social norm in place. Different subgroups may comply with different norms or people may report several (complementing or conflicting) norms in place at the same time. Finally, the diagnosis might reveal that social expectations are low overall, but that certain individuals or subgroups might comply with the social rule anyway, for instance because of a strong conviction in their own personal beliefs. Each of these diagnoses would require different norm-based interventions to strengthen or change the norm and realize the desired behavioral change.
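As a toy illustration of how elicited appropriateness ratings can be turned into a norm diagnosis, the sketch below reads the highest-rated action as the candidate norm and uses its margin over the runner-up as a crude strength indicator. The rating scale, the data, and the margin-based summary are assumptions made for this example; the published Krupka and Weber (2013) procedure additionally pays participants for matching others' responses.

# Illustrative scoring of appropriateness ratings in the spirit of
# Krupka & Weber (2013). ratings[action] holds participants' ratings on a
# -1 (very inappropriate) to +1 (very appropriate) scale; data are invented.

def identify_norm(ratings):
    means = {action: sum(r) / len(r) for action, r in ratings.items()}
    ranked = sorted(means.items(), key=lambda kv: kv[1], reverse=True)
    (norm, top), (_, runner_up) = ranked[0], ranked[1]
    return norm, top, top - runner_up   # candidate norm, its mean, crude strength

ratings = {
    "keep 10 / give 0": [-1, -1, -0.5, -1],
    "keep 8 / give 2":  [-0.5, 0, -0.5, 0],
    "keep 5 / give 5":  [1, 1, 0.5, 1],
}
norm, mean_appr, strength = identify_norm(ratings)
print(norm, round(mean_appr, 2), round(strength, 2))  # keep 5 / give 5  0.88  1.12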

4.4.2 Informed Norm-Based Interventions Once social norms and the social groups in which they exist have been measured and diagnosed, this information can be used to design norm-based interventions that promote lasting and sustainable behavioral change. One way to do so is through belief-manipulation protocols incorporated in behavioral and vignette experiments. Belief-manipulation protocols present subjects with social information to change their social expectations, assuming that this will spill over into a change in behavior. Bicchieri and Xiao (2009), for instance, manipulated the social expectations of participants playing a DG. In the DG, dictators have to decide how to divide $10 between themselves and another participant. Social expectations were manipulated by presenting the dictators with (true) information about how other dictators in previous experiments with the same DG
decided to split the $10 or how they thought people should split the $10. If individual behavior is indeed conditional on social expectations, a successful manipulation of social expectations should result in behavioral change. To test that the change was actually due to a change in expectations, Bicchieri and Xiao (2009) then asked dictators about their expectations regarding the behavior and beliefs of other dictators in the present game.

Similar methods that focus on what people typically do or believe should be done have also been used in the field as norm-based interventions to promote behavioral change. The advantage of having a norm diagnosis step before the intervention is that the norm intervention can report the true behavior and normative beliefs in place (including a disclosure about how this social information is obtained), which makes it more likely that the social information is perceived as plausible and relevant (McAlaney et al., 2010). For example, presenting information about what the majority does (the descriptive norm) and what the majority thinks should be done (the injunctive norm) has been found to promote energy conservation (Jachimowicz et al., 2018). Interventions that disclose true compliance rates or normative beliefs are powerful in correcting inaccuracies in social expectations, such as those classified as false consensus bias (Ross et al., 1977) or pluralistic ignorance (Miller & McFarland, 1987). Norm correction strategies have been widely used and proven effective in reducing a variety of harmful behaviors such as alcohol use, tobacco use, and drunk driving (Cislaghi & Berkowitz, 2021; Perkins & Berkowitz, 1986), as healthy attitudes and behaviors tend to be underestimated, while unhealthy ones are overestimated (Berkowitz, 2010).

However, presenting the target population with accurate, credible summary information about the behavior and beliefs of their reference group is not effective if the expectations are highly accurate already, if the expectations support other (undesirable) norms, or if compliance rates are low. In fact, access to this information could even be counterproductive and lead people to perceive that no norm, a weak norm, or a dominant undesirable norm is in place. When the diagnosis reveals several conflicting norms, for instance, interventions may draw attention to the inappropriateness of violating the target norm. This can divert people’s attention from the descriptive information (the norm violations observed) and make them focus on a normative message. In the context of littering, drawing attention to how much litter is on the ground may lead people to litter, while drawing attention to an anti-littering ad can reduce littering (Cialdini et al., 1990). Another strategy is not to focus on the normative message but on (the meta norm of) punishment. Interventions could target people’s willingness to punish norm violations, for instance by informing them about the appropriateness of punishment. Being sanctioned or observing someone else being sanctioned for violating a norm not only modifies the incentives associated with norm compliance and norm violation (or the incentives associated with following the competing norm) but also has an indirect effect in communicating the existence of a social norm. This way, interventions on the meta norm may, by promoting peer
punishment, increase the strength of normative expectations around a certain behavior and as a consequence increase norm compliance. Or, as suggested by Cislaghi and Berkowitz (2021), when there are several, even conflicting, social norms sustaining the practice that needs to be changed, norm transformation strategies can be effective. This class of interventions, rather than trying to correct misperceptions, focuses on changing the beliefs and behaviors of a core group of change actors through a series of facilitated conversations. This group of people eventually becomes large and committed enough to enact the new practice that will lead to a change in the existing social norms.

Alternatively, if the norm diagnosis reveals that a social norm is weak but developing, interventions can inform people about this trend. For instance, while most people still use fuel vehicles (hinting at a dominant norm around unsustainable behavior), the number of people who use electric vehicles is increasing. In such situations, a trending norm intervention that stresses the increase in people who desire change and either already comply with the new norm (because they bought an electric vehicle) or plan to comply (as soon as they need a new vehicle) may help to accelerate the emergence of a new (sustainable) social norm and the corresponding behavioral change (Mortensen et al., 2019; Sparkman et al., 2020; Sparkman & Walton, 2017).

Finally, norm-based interventions do not always need to be addressed to the entire reference population but may also target the most influential individuals within a group, that is, the “social referents” whom others look to when inferring the group norm and whose behavior they copy. The actions and communications of social referents are often more influential than messages from other members of the reference group. If an intervention can change the opinions and behaviors of the social referents, it is likely to spill over to the other group members. In a field experiment conducted by Paluck et al. (2016), social referents in American high schools were identified using social network analysis. These social referent students were trained to model anti-harassment behaviors and assigned to an intervention that encouraged their public stance against conflict at school. Compared with control schools, disciplinary reports of student conflict at schools involved in the treatment were reduced by 30 percent over one year, and the perception that harassment was not considered socially appropriate by other students at their school became more salient.
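As a rough illustration of this last strategy, the sketch below (Python with the networkx library; the nomination data and the use of in-degree centrality are illustrative assumptions, not the exact procedure of Paluck et al., 2016) selects the most-nominated members of a peer network as candidate social referents for an intervention.

import networkx as nx

# Hypothetical directed nomination network: an edge A -> B means that
# student A named student B as someone they spend time with.
nominations = [
    ("ana", "ben"), ("carl", "ben"), ("dee", "ben"),
    ("ben", "eva"), ("carl", "eva"), ("dee", "fay"),
    ("eva", "ana"), ("fay", "carl"),
]
G = nx.DiGraph(nominations)

# Treat in-degree centrality (the share of peers who nominate a student) as
# a proxy for being a "social referent" others look to when inferring norms.
centrality = nx.in_degree_centrality(G)

# Pick the top k students as candidate referents to recruit for the intervention.
k = 2
referents = sorted(centrality, key=centrality.get, reverse=True)[:k]
print("Candidate social referents:", referents)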

4.5 Conclusion Social norms, as a topic of inquiry, have gained significant attention from a variety of perspectives in recent years. Moving out of their traditional heartlands in sociology, philosophy, and psychology, social norms are now called upon to explain (human) behavior by researchers from a multiplicity of disciplines, including anthropology, biology, complex systems, and computer science. Scientists have accumulated a vast body of experimental evidence that
makes it hard to deny that norms are a major driver of human decision making (Fehr & Schurtenberger, 2018; Gelfand et al., 2011; Nyborg et al., 2016). Clear and operational definitions of what norms are have been formulated (Bicchieri, 2006; Horne & Mollborn, 2020) and theoretical models have been developed to explain processes of norm emergence and change (Centola et al., 2018; Strimling et al., 2018). Methodologically, experimental belief-elicitation instruments and computational models have enabled explicit diagnosis of the presence and evolution of norms (Gross & Vostroknutov, 2022; Krupka & Weber, 2013) and a better understanding of the causal feedback dynamics between norms and behavior (Andreoni et al., 2021; Szekely et al., 2021).

In this chapter we have discussed some of these recent conceptual and methodological advances in the study of social norms, with a particular focus on norm inference. Norm inference is the process through which people learn the social norms in place and how strong they are. In many applications of social norm research, the elicitation of social norms or expectations about social norms relies on modal or average (expectations of) compliance. The nuances in norm inference that result from conflicting and incomplete information and the consequences thereof for the decision to comply with or violate a norm have received less attention. However, we claim that understanding exactly how norms are inferred is crucial to understanding how they can be enforced and changed. Knowing what information people use to infer norms makes it possible to design interventions to effectively change them. We presented a two-step approach of social norm diagnosis and interventions that first identifies whether a social norm exists, how strong the norm is perceived to be, and whether it suffers from biases and misperceptions. Based on this information, in a second step, interventions are designed to test whether (weak) social norms can be strengthened or socially undesirable norms abandoned. Taken together, the careful attention to macro-to-micro mechanisms of norm inference and micro-to-macro interventions on norm perceptions and behavior represents an integrated framework for understanding the dynamics of social norms and behavioral change.

References Álvarez-Benjumea, A., & Winter, F. (2018). Normative change and culture of hate: An experiment in online environments. European Sociological Review, 34(3), 223–237. Andreoni, J., Nikiforakis, N., & Siegenthaler, S. (2021). Predicting social tipping and norm change in controlled experiments. Proceedings of the National Academy of Sciences, 118(16), Article e201489118. Andrews, K. (2020). Naïve normativity: The social foundation of moral cognition. Journal of the American Philosophical Association, 6(1), 36–56. Andrighetto, G., Brandts, J., Conte, R., Sabater-Mir, J., Solaz, H., & Villatoro, D. (2013). Punish and voice: Punishment enhances cooperation when combined with norm-signalling. PLoS ONE, 8(6), 1–8.
Andrighetto, G., & Castelfranchi, C. (2013). Norm compliance: The prescriptive power of normative actions. PARADIGMI, 139–150. Andrighetto, G., & Vriens, E. (2022). A research agenda for the study of social norm change. Philosophical Transactions of the Royal Society A, 380, Article 20200411. Axelrod, R. (1986). An evolutionary approach to norms. The American Political Science Review, 80(4), 1095–1111. Balafoutas, L., Nikiforakis, N., & Rockenbach, B. (2014). Direct and indirect punishment among strangers in the field. Proceedings of the National Academy of Sciences of the United States of America, 111(45), 15924–15927. Berkowitz, A. D. (2010). Fostering health norms to prevent violence and abuse: The social norms approach. In K. Kaufman (Ed.), The prevention of sexual violence: A practitioner’s sourcebook (pp. 147–171). NEARI Press. Bicchieri, C. (2006). The grammar of society: The nature and dynamics of social norms. Cambridge University Press. Bicchieri, C. (2017). Norms in the wild: How to diagnose, measure, and change social norms. Oxford University Press. Bicchieri, C., & Chavez, A. (2010). Behaving as expected: Public information and fairness norms. Journal of Behavioral Decision Making, 23(2), 161–178. Bicchieri, C., Dimant, E., Gächter, S., & Nosenzo, D. (2022). Social proximity and the erosion of norm compliance. Games and Economic Behavior, 132, 59–72. Bicchieri, C., & Funcke, A. (2018). Norm change: Trendsetters and social structure. Social Research, 85(1), 1–21. Bicchieri, C., Lindemans, J. W., & Jiang, T. (2014). A structured approach to a diagnostic of collective practices. Frontiers in Psychology, 5, 1–13. Bicchieri, C., & Xiao, E. (2009). Do the right thing but only if others do so. Journal of Behavioral Decision Making, 22, 291–208. Binmore, K. (2010). Social norms or social preferences?. Mind & Society, 9, 139–157. Boyd, R., Gintis, H., & Bowles, S. (2010). Coordinated punishment of defectors sustains cooperation and can proliferate when rare. Science, 328(5978), 617–620. Bursztyn, L., González, A. L., & Yanagizawa-Drott, D. (2020). Misperceived social norms: Women working outside the home in Saudi Arabia. American Economic Review, 110(10), 2997–3029. Castelfranchi, C. (1998). Simulating with cognitive agents: The importance of cognitive emergence. In J. S. Sichman, R. Conte, & N. Gilbert (Eds.), Multi-agent systems and agent-based simulation: First international workshop MABS ’98 (pp. 26–44). Springer. Castioni, P., Andrighetto, G., Gallotti, R., Polizzi, E., & De Domenico, M. (2022). The voice of few, the opinions of many: Evidence of social biases in Twitter COVID-19 fake news sharing. Royal Society Open Science, 9(220716), 1–12. Centola, D. (2021a). Change: How to make big things happen. John Murray. Centola, D. (2021b). Influencers, backfire effects, and the power of the periphery. In M. L. Small & B. L. Perry (Eds.), Personal networks (1st ed., pp. 73–86). Cambridge University Press. Centola, D., Becker, J., Brackbill, D., & Baronchelli, A. (2018). Experimental evidence for tipping points in social convention. Science, 360(6393), 1116–1119. Centola, D., & Macy, M. (2007). Complex contagions and the weakness of long ties. American Journal of Sociology, 113(3), 702–734.
Chaurand, N., & Brauer, M. (2008). What determines social control? People’s reactions to counternormative behaviors in urban environments. Journal of Applied Social Psychology, 38(7), 1689–1715. Chudek, M., & Henrich. J. (2011). Culture–gene coevolution, norm-psychology and the emergence of human prosociality. Trends in Cognitive Sciences 15(5), 218–226. Cialdini, R. B., Reno, R. R., & Kallgren, C. A. (1990). A focus theory of normative conduct: Recycling the concept of norms to reduce littering in public places. Journal of Personality and Social Psychology, 58(6), 1015–1026. Cialdini, R. B., & Trost, M. R. (1998). Social influence: Social norms, conformity and compliance. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), Handbook of social psychology (4th ed., pp. 151–192). McGraw-Hill. Cialdini, R. B., Wosinska, W., Barrett, D. W., Butner, J., & Gornik-Durose, M. (1999). Compliance with a request in two cultures: The differential influence of social proof and commitment/consistency on collectivists and individualists. Personality and Social Psychology Bulletin, 25(10), 1242–1253. Cislaghi, B., & Berkowitz, A. D. (2021). The evolution of social norms interventions for health promotion: Distinguishing norms correction and norms transformation. Journal of Global Health, 11, Article 03065. Coleman, J. S. (1990). Foundations of social theory. Harvard University Press. Constantino, S. M., Sparkman, G., Kraft-Todd, G. T., Bicchieri, C., Centola, D., ShellDuncan, B., Vogt, S., & Weber, E. U. (2022). Scaling up change: A critical review and practical guide to harnessing social norms for climate action. Psychological Science in the Public Interest, 23(2), 50–97. Conte, R., Andrighetto, G., & Campennì, M. (2014). Minding norms: Mechanisms and dynamics of social order in agent societies. Oxford University Press. Cullis, J., Jones, P., & Savoia, A. (2012). Social norms and tax compliance: Framing the decision to pay tax. The Journal of Socio-Economics, 41, 159–168. Diekmann, A., Przepiorka, W., & Rauhut, H. (2015). Lifting the veil of ignorance: An experiment on the contagiousness of norm violations. Rationality and Society, 27(3), 309–333. Dimant, E. (2019). Contagion of pro- and anti-social behavior among peers and the role of social proximity. Journal of Economic Psychology, 73, 66–88. Elster, J. (1989). Social norms and economic theory. Journal of Economic Perspectives, 3 (4), 99–117. Elster, J. (2015). Explaining social behavior: More nuts and bolts for the social sciences. Cambridge University Press. Ensminger, J., & Henrich, J. (Eds.). (2014). Experimenting with social norms: Fairness and punishment in cross-cultural perspective. Russell Sage Foundation. Eriksson, K., Strimling, P., Gelfand, M., Wu, J., Abernathy, J., Akotia, C. S., Aldashev, A., Andersson, P. A., Andrighetto, G., Anum, A., Arikan, G., Aycan, Z., Bagherian, F., Barrera, D., Basnight-Brown, D., Batkeyev, B., Belaus, A., Berezina, E., Björnstjerna, M., . . . Van Lange, P. A. M. (2021). Perceptions of the appropriate response to norm violation in 57 societies. Nature Communications, 12(1), 1–11. Etzioni, A. (2000). Toward a theory of public ritual. Sociological Theory, 18(1), 44–59. Faillo, M., Grieco, D., & Zarri, L. (2013). Legitimate punishment, feedback, and the enforcement of cooperation. Games and Economic Behavior, 77, 271–283.
Fehr, E., & Fischbacher, U. (2004). Third-party punishment and social norms. Evolution and Human Behavior, 25(2), 63–87. Fehr, E., & Gächter, S. (2000). Cooperation and punishment in public goods experiments. American Economic Review, 90, 980–994. Fehr, E., & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415(6868), 137–140. Fehr, E., & Schurtenberger, I. (2018). Normative foundations of human cooperation. Nature Human Behaviour, 2(7), 458–468. Feinberg, M., Willer, R., & Schultz, M. (2014). Gossip and ostracism promote cooperation in groups. Psychological Science, 25(3), 656–664. Gavrilets, S., & Richerson, P. J. (2017). Collective action and the evolution of social norm internalization. PNAS Proceedings of the National Academy of Sciences, 114(23), 6068–6073. Geiger, N., & Swim, J. K. (2016). Climate of silence: Pluralistic ignorance as a barrier to climate change discussion. Journal of Environmental Psychology, 47, 79–90. Gelfand, M. J. (2018). Rule makers, rule breakers: How culture wires our minds, shapes our nations and drive our differences. Robinson. Gelfand, M. J., Raver, J. L., Nishii, L., Leslie, L. M., Lun, J., Lim, B. C., Duan, L., Almaliach, A., Ang, S., Arnadottir, J., Aycan, Z., Boehnke, K., Boski, P., Cabecinhas, R., Chan, D., Chhokar, J., D’Amato, A., Ferrer, M., Fischlmayr, I. C., . . . Yamaguchi, S. (2011). Differences between tight and loose cultures: A 33-nation study. Science, 332(6033), 1100–1104. Getzels, J. W., & Guba, E. G. (1957). Social behavior and the administrative process. The School Review, 65(4), 423–441. Gintis, H. (2010). Social norms as choreography. Politics, Philosophy & Economics, 9(3), 251–264. Giuliano, P., & Nunn, N. (2021). Understanding cultural persistence and change. Review of Economic Studies, 88(4), 1541–1581. Granovetter, M. S. (1978). Threshold models of collective behavior. American Journal of Sociology, 83(6), 1420–1443. Gross, J., & Vostroknutov, A. (2022). Why do people follow social norms? Current Opinion in Psychology, 44, 1–6. Hechter, M., & Opp, K.-D. (Eds.). (2001). Social norms. Russell Sage Foundation. Heyes, C. (2023). Rethinking norm psychology. Perspectives on Psychological Science, 19(1), 12–38. Higgs, S., Liu, J., Collins, E. I. M., & Thomas, J. M. (2019). Using social norms to encourage healthier eating. Nutrition Bulletin, 44(1), 43–52. Horne, C., & Mollborn, S. (2020). Norms: An integrated framework. Annual Review of Sociology, 46, 467–487. Jachimowicz, J. M., Hauser, O. P., O’Brien, J. D., Sherman, E., & Galinsky, A. D. (2018). The critical role of second-order normative beliefs in predicting energy conservation. Nature Human Behaviour, 2(10), 757–764. Janssen, M. A., Holahan, R., Lee, A., & Ostrom, E. (2010). Lab experiments for the study of social-ecological systems. Science, 328(5978), 613–617. Keizer, K., Lindenberg, S., & Steg, L. (2008). The spreading of disorder. Science, 322(5908), 1681–1685. Kimbrough, E. O., & Vostroknutov, A. (2018). A portable method of eliciting respect for social norms. Economics Letters, 168, 147–150.
Krupka, E. L., & Weber, R. A. (2013). Identifying social norms using coordination games: Why does dictator game sharing vary? Journal of the European Economic Association, 11(3), 495–524. Lapinski, M. K. (2005). An explication of social norms. Communication Theory, 15(2), 127–147. Legros, S., & Cislaghi, B. (2020). Mapping the social-norms literature: An overview of reviews. Perspectives on Psychological Science, 15(1), 62–80. Leviston, Z., Walker, I., & Morwinski, S. (2013). Your opinion on climate change might not be as common as you think. Nature Climate Change, 3(4), 334–337. Lindenberg, S. M. (2008). Social norms: What happens when they become more abstract? In A. Diekmann, K. Eichner, P. Schmidt, & T. Voss (Eds.), Rational choice: Theoretische Analysen und empirische Resultate: Festschrift für KarlDieter Opp zum 70. Geburtstag [Theoretical analyses and empirical results: Festschrift for Karl-Dieter Opp for his 70th birthday] (pp. 63–81). VS Verlag für Sozialwissenschaften. Lindenberg, S., & Steg, L. (2007). Normative, gain and hedonic goal frames guiding environmental behavior. Journal of Social Issues, 63(1), 117–137. Malle, B. F. (2023). What are norms and how is norm compliance regulated? In M. Berg & E. Chang (Eds.), Motivation and morality: A biopsychosocial approach (pp. 46–75). American Psychological Association. Malle, B. F., & Scheutz, M. (2019). Learning how to behave: Moral competence for social robots. In O. Bendel (Ed.), Handbuch Maschinenethik [Handbook of machine ethics]. Springer. Masclet, D., Noussair, C., Tucker, S., & Villeval, M.-C. (2003). Monetary and nonmonetary punishment in the voluntary contributions mechanism. American Economic Review, 93(1), 366–380. McAlaney, J., Bewick, B. M., & Bauerle, J. (2010). Social norms guidebook: A guide to implementing the social norms approach in the UK. University of Bradford Press. Miller, D. T., & McFarland, C. (1987). Pluralistic ignorance: When similarity is interpreted as dissimilarity. Journal of Personality and Social Psychology, 53(2), 298–305. Miller, D. T., & Prentice, D. A. (1994). Collective errors and errors about the collective. Personality and Social Psychology Bulletin, 20(5), 541–550. Molho, C., Tybur, J. M., Van Lange, P. A. M., & Balliet, D. (2020). Direct and indirect punishment of norm violations in daily life. Nature Communications, 11(1), Article 3432. Mortensen, C. R., Neel, R., Cialdini, R. B., Jaeger, C. M., Jacobson, R. P., & Ringel, M. M. (2019). Trending norms: A lever for encouraging behaviors performed by the minority. Social Psychological and Personality Science, 10(2), 201–210. Nikiforakis, N. (2008). Punishment and counter-punishment in public good games: Can we really govern ourselves? Journal of Public Economics, 92(1–2), 91–112. Norenzayan, A., Shariff, A. F., Gervais, W. M., Willard, A. K., McNamara, R. A., Slingerland, E., & Henrich, J. (2016). The cultural evolution of prosocial religions. Behavioral and Brain Sciences, 39, Article e1. Nyborg, K., Anderies, J. M., Dannenberg, A., Lindahl, T., Schill, C., Schlüter, M., Adger, W. N., Arrow, K. J., Barrett, S., Carpenter, S., Chapin, F. S., Crépin, A. S., Daily, G., Ehrlich, P., Folke, C., Jager, W., Kautsky, N., Levin, S. A., Madsen, O. J., . . . De Zeeuw, A. (2016). Social norms as solutions. Science, 354(6308), 42–43.
Nyborg, K., & Rege, M. (2003). On social norms: The evolution of considerate smoking behavior. Journal of Economic Behavior & Organization, 52(3), 323–340. Ofosu, E. K., Chambers, M. K., Chen, J. M., & Hehman, E. (2019). Same-sex marriage legalization associated with reduced implicit and explicit antigay bias. Proceedings of the National Academy of Sciences, 116(18), 8846–8851. Ostrom, E. (2000). Collective action and the evolution of social norms. Journal of Economic Perspectives, 14(3), 137–158. Ostrom, E., Walker, J., & Gardner, R. (1992). Covenants with and without a sword: Selfgovernance is possible. American Political Science Review, 86(2), 404–417. Over, H., & Carpenter, M. (2012). Putting the social into social learning: Explaining both selectivity and fidelity in children’s copying behavior. Journal of Comparative Psychology, 126(2), 182–192. Paluck, E. L. (2009). What’s in a norm? Sources and processes of norm change. Journal of Personality and Social Psychology, 96(3), 594–600. Paluck, E. L., & Shepherd, H. S. (2012). The salience of social referents: A field experiment on collective norms and harassment behavior in a school social network. Journal of Personality and Social Psychology, 103(6), 899–915. Paluck, E. L., Shepherd, H., & Aronow, P. M. (2016). Changing climates of conflict: A social network experiment in 56 schools. Proceedings of the National Academy of Sciences, 113(3), 566–571. Perkins, H. W., & Berkowitz, A. D. (1986). Perceiving the community norms of alcohol use among students: Some research implications for campus alcohol education programming. International Journal of the Addictions, 21(9–10), 961–976. Posner, E. (1999). Law and social norms. Harvard University Press. Prentice, D., & Miller, D. T. (1993). Pluralistic ignorance and alcohol use on campus: Some consequences of misperceiving the social norm. Journal of Personality and Social Psychology, 64(2), 243–256. Prentice, D., & Paluck, E. L. (2020). Engineering social change using social norms: Lessons from the study of collective action. Current Opinion in Psychology, 35, 138–142. Pristl, A.-C., Kilian, S., & Mann, A. (2021). When does a social norm catch the worm? Disentangling social normative influences on sustainable consumption behaviour. Journal of Consumer Behaviour, 20(3), 635–654. Procter-Scherdtel, A., & Collins, D. (2013). Smoking restrictions on campus: Changes and challenges at three Canadian universities, 1970–2010. Health & Social Care in the Community, 21(1), 104–112. Przepiorka, W., Szekely, A., Andrighetto, G., Diekmann, A., & Tummolini, L. (2022). How norms emerge from conventions (and change). Socius, 8. https://doi.org/ 10.1177/23780231221124556 Rabellino, D., Morese, R., Ciaramidaro, A., Bara, B. G., & Bosco, F. M. (2016). Thirdparty punishment: Altruistic and anti-social behaviours in in-group and outgroup settings. Journal of Cognitive Psychology, 28(4), 486–495. Richerson, P., & Boyd, R. (2005). Not by genes alone: How culture transformed human evolution. University of Chicago Press. Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Simon & Schuster. Ross, L., Greene, D., & House, P. (1977). The “false consensus effect”: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13(3), 279–301.
Schimmelpfennig, R., Vogt, S., Ehret, S., & Efferson, C. (2021). Promotion of behavioural change for health in a heterogeneous population. Bulletin of the World Health Organization, 99(11), 819–827. Schroeder, C. M., & Prentice, D. A. (1998). Exposing pluralistic ignorance to reduce alcohol use among college students. Journal of Applied Social Psychology, 28(23), 2150–2180. Schultz, P. W., Nolan, J. M., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2007). The constructive, destructive, and reconstructive power of social norms. Psychological Science, 18(5), 429–434. Silverblatt, A. (2004). Media as social institution. American Behavioral Scientist, 48(1), 35–41. Sinclair, S., & Agerström, J. (2021). Do social norms influence young people’s willingness to take the COVID-19 vaccine? Health Communication, 38(1), 152–159. Sparkman, G., Howe, L., & Walton, G. (2020). How social norms are often a barrier to addressing climate change but can be part of the solution. Behavioural Public Policy, 5(4), 528–555. Sparkman, G., & Walton, G. M. (2017). Dynamic norms promote sustainable behavior, even if it is counternormative. Psychological Science, 28(11), 1663–1674. Sripada, C., & Stich, S. (2006). A framework for the psychology of norms. In P. Carruthers, S. Laurence, & S. P. Stich (Eds.), The innate mind: Vol. 2. Culture and cognition (pp. 285–310). Oxford University Press. Strimling, P., De Barra, M., & Eriksson, K. (2018). Asymmetries in punishment propensity may drive the civilizing process. Nature Human Behaviour, 2, 148–155. Szekely, A., Lipari, F., Antonioni, A., Paolucci, M., Sánchez, A., Tummolini, L., & Andrighetto, G. (2021). Evidence from a long-term experiment that collective risks change social norms and promote cooperation. Nature Communications, 12(1), Article 5452. Tankard, M. E., & Paluck, E. L. (2016). Norm perception as a vehicle for social change. Social Issues and Policy Review, 10(1), 181–211. Traxler, C. (2010). Social norms and conditional cooperative taxpayers. European Journal of Political Economy, 26(1), 89–103. Tummolini, L., Andrighetto, G., Castelfranchi, C., & Conte, R. (2013). A convention or (tacit) agreement betwixt us: On reliance and its normative consequences. Synthese, 190(4), 585–618. Turiel, E. (1983). The development of social knowledge: Morality and convention. Cambridge University Press. Tverskoi, D., Guido, A., Andrighetto, G., Sánchez, A., & Gavrilets, S. (2023). Disentangling material, social, and cognitive determinants of human behavior and beliefs. Humanities and Social Sciences Communications, 10, Article 236. Ullmann-Margalit, E. (1977). The emergence of norms. Oxford University Press. van Kleef, G. A., Gelfand, M. J., & Jetten, J. (2019). The dynamic nature of social norms: New perspectives on norm development, impact, violation, and enforcement. Journal of Experimental Social Psychology, 84, Article 103814. Van Miltenburg, N., Buskens, V., Barrera, D., & Raub, W. (2014). Implementing punishment and reward in the public goods game: The effect of individual and collective decision rules. International Journal of the Commons, 8(1), 47–78.
Villatoro, D., Andrighetto, G., Brandts, J., Nardin, L. G., Sabater-Mir, J., & Conte, R. (2014). The norm-signaling effects of group punishment: Combining agent-based simulation and laboratory experiments. Social Science Computer Review, 32(3), 334–353. Villatoro, D., Andrighetto, G., Conte, R., & Sabater-Mir, J. (2015). Self-policing through norm internalization: A cognitive solution to the tragedy of the digital commons in social networks. Journal of Artificial Societies and Social Simulation, 18, 1–28. von Rohr, C. R., Burkart, J. M., & van Schaik, C. P. (2011). Evolutionary precursors of social norms in chimpanzees: A new approach. Biology & Philosophy, 26(1), 1–30. Vriens, E., Tummolini, L., & Andrighetto, G. (2023). Vaccine-hesitant people misperceive the social norm of vaccination. PNAS Nexus, 2(5), Article pgad132. Weinschenk, A. C., Panagopoulos, C., & van der Linden, S. (2021). Democratic norms, social projection, and false consensus in the 2020 U.S. Presidential Election. Journal of Political Marketing, 20(3–4), 255–268. Xiao, E., & Houser, D. (2011). Punish in public. Journal of Public Economics, 95(7–8), 1006–1017.

5 Moral Dilemmas
Joanna Demaree-Cotton and Guy Kahane

The demands of morality can seem straightforward. Be kind to others. Do not lie. Do not murder. But moral life is not so simple. We are often confronted with difficult situations in which someone is going to get hurt no matter what we do, in which we cannot meet all of our obligations, in which loyalties come into conflict, in which we cannot help everyone who needs it, or in which we must compromise on important values. It is natural to describe such situations as moral dilemmas. This chapter is about the psychology of how we represent, process, and make decisions about what to do when moral life is difficult in this way. Our first aim is to provide some conceptual clarity on what exactly turns a choice situation into a moral dilemma. Our second aim is to critically survey existing psychological work, providing an overview of some important findings, while raising questions for future research.

5.1 Characterizing Moral Dilemmas In moral psychology, “moral dilemmas” are typically defined as any situation in which you are required to make a moral trade-off: where a gain in some moral value comes at the cost of some other.1 This definition has the advantage that it can be easily operationalized for experimental materials. However, this is far broader than the everyday sense of the term. Imagine you’ve promised to meet your friend for coffee. But on the way, you see a child collapse in the street. Do you simply continue on your way – potentially allowing the child to die but thereby ensuring you keep your promise? Or do you stop to help? This situation demands a moral trade-off. But, outside of the lab, few would call this a moral dilemma. If you later said: “A child lay dying in the road – but I did promise my friend a cappuccino . . . it was a real moral dilemma!” you’d sound ridiculous, even sinister, and out of touch with the demands of morality.

1 A great deal of current psychological research uses “moral dilemmas” to refer to variants of philosophical “runaway trolley scenarios,” also known as sacrificial dilemmas. As we discuss later, sacrificial dilemmas are obviously merely examples of dilemmas and cannot be assumed to be typical or representative.
Part of the reason we wouldn’t describe this trade-off as a moral dilemma is because it is relatively trivial to navigate, in at least two senses. Firstly, it’s epistemically easy to figure out what the right thing is to do. A child’s life is much, much weightier than a coffee date, so the obligation to help clearly outweighs the promise to your friend. Although the promise is what philosophers would call a pro tanto moral reason to keep walking, clearly the morally right thing all things considered is to help. Secondly, the trade-off is psychologically easy to manage, in the sense that choosing to help is not inherently emotionally aversive, and you shouldn’t be kept up at night ruminating guiltily over this choice. Indeed, it seems there would be something normatively amiss if you did find the choice psychologically difficult: Breaking a promise in this kind of situation just isn’t that big of a deal, morally speaking. Let’s call situations that demand moral trade-offs that are epistemically and psychologically straightforward to resolve “trivial moral trade-offs.” By contrast, colloquially the term “moral dilemma” is reserved for more difficult situations. Let’s call these “genuine” moral dilemmas or simply “moral dilemmas.” A genuine moral dilemma can arise if moral considerations that are sufficiently weighty pull in the direction of incompatible courses of action. In some such situations, it can be difficult to know which outweighs the other. In other situations, you might know what the right thing is to do, all things considered; nevertheless, the competing moral considerations are sufficiently weighty so that whatever you do, you have to do something that at least ordinarily would be very wrong, even if it’s not wrong here. This might be psychologically difficult to navigate even if it’s not epistemically difficult: It may be uncomfortable, unpleasant, or even highly distressing, and give rise to regret or guilt. For instance, imagine you’re a soldier who is forced to leave injured civilians to die, because attending to them would put your fellow soldiers’ lives at risk and break direct orders. Even if you are sure you’re doing the right thing, it’s still distressing and heart wrenching (cf. Molendijk, 2018; Shortland & Alison, 2020); and, in normal circumstances outside of wartime, leaving people to die in the street would be a terrible thing to do. And this is what makes the situation a genuine moral dilemma even though you know what you ought to do, all things considered. To be precise, what makes the situation a genuine moral dilemma is not that you in fact experience this psychological conflict; rather, it’s the normative fact that, in some sense, you ought to. Insofar as you’re a good person, it’s appropriate to place heavy moral and emotional weight on the fact that your actions dictate the death of innocent civilians.2 In light of these considerations, we offer a definition of genuine moral dilemmas as situations in which:

(1) an agent’s moral reasons conflict, such that any course of action open to them has at least some strong moral reason against it;
(2) this conflict is such that there is some sense in which it is morally appropriate to feel motivationally conflicted about what to do, and to feel psychologically conflicted about any given course of action they end up choosing.

Thus, the definition of moral dilemmas is inherently normative: it makes reference to the moral reasons that actually apply in a situation, and to the psychological response that would be morally appropriate. Note that, in philosophical terms, the claim that sometimes there are strong reasons for and against a choice is not a claim about whether the agent engages in reasoning; it’s a claim about the demands of morality, not the agent’s psychology. So having to choose between innocent lives gives rise to strong conflicting reasons and is thus a moral dilemma even if the person deciding happens to be too heartless, jaded, or traumatized to actually experience it as such; a trivial trade-off is not a moral dilemma just because someone is irrationally wracked with unreasonable guilt. But our normative account also tells us something about what, descriptively, it is to experience something as a moral dilemma – it is to experience a kind of moral psychological conflict arising from having to make a choice when it seems to you that moral reasons conflict. Thus, we expect there to be a number of descriptive markers that distinguish such situations from other moral decisions, including a tendency to evoke uncertainty, feelings of moral conflict, as well as what the philosopher Bernard Williams (1965) called “moral residue” – lingering negative affect later associated with the decision taken (see Figure 5.1).

Thus, even if the notion of a moral dilemma is best understood in normative terms, for the purposes of psychological research it may suffice to study what most people in a certain population experience as a moral dilemma, or what would count as a moral dilemma relative to certain commonly accepted moral norms.

2 This is so even from a utilitarian point of view. The death of innocent civilians counts negatively in the utilitarian calculus. Moreover, this normally leads to worse consequences overall. Because of this, many utilitarians would say that at least an initial reluctance to make such choices will tend to make one the sort of person who maximizes good consequences, even if one ought to override this reluctance in specific situations.

[Figure 5.1 appears here: a flowchart running from the eliciting situation (strong, conflicting moral reasons), through appraisal (recognition of the dilemma; emotional, motivational, and cognitive feelings of conflict; moral uncertainty), to moral judgment, choice, and moral residue, with characteristic markers including lower confidence, higher reaction times, reluctance and difficulty implementing the action, guilt and affective regret (without cognitive regret*), distress and anxiety about the choice, and self-doubt.]

Figure 5.1 Simplified illustration of the components and characteristic psychological markers of genuine moral dilemmas. Although the arrows indicate the paradigmatic order in which these steps unfold, there may be recursive relationships between the components (e.g., if one’s choice leads one to revise one’s moral judgment in order to reduce cognitive dissonance). *Experiencing markers of moral residue in the absence of cognitive regret is a special marker of an ultimate moral dilemma.

There is a great deal of psychological research investigating how people respond to moral dilemmas. Most of this research in fact studies moral
judgments about hypothetical situations – which is rather different than actually facing a dilemma (Bostyn et al., 2018). Nevertheless, evidence suggests that at least some dilemmas employed by psychologists are experienced as genuine dilemmas: Moral dilemmas involving serious conflicts lead to subjective feelings of moral conflict, decision difficulty, longer reaction times, reduced confidence, and increased negative emotions (such as fear of making the wrong choice) compared to trivial trade-offs and straightforward moral decisions (Bialek & De Neys, 2016; Hanselmann & Tanner, 2008; Mandel & Vartanian, 2008), and they are associated with activity in brain regions involved in conflict detection and resolution (see Greene, 2008). The experience of moral conflict has cognitive and emotional components (Mata, 2019). Cognitive conflict involves feeling divided about what to do and feeling drawn to conflicting considerations and options (i.e., epistemic difficulty), and it predicts judgments that neither option is “totally acceptable” or “totally unacceptable” (Mata, 2019). Emotional conflict involves negative feelings and emotional discomfort (and is therefore a key component of what we termed psychological difficulty). In these studies, the extent of conflict between moral values is manipulated by the experimenters. But a study by Krosch et al. (2012) illustrates how experiences of moral conflict depend on subjective commitment to competing moral values or reasons. Participants were presented with dilemmas and asked which values they would use to guide their decision and what choice they would make. The more a participant also endorsed values that supported the opposite choice to the one they favored, the more difficult they found the choice, and the more they expected to worry that they had made a mistake. So far, we discussed the broad sense that psychologists give to “moral dilemma,” which includes trivial moral trade-offs, and provided a characterization of “genuine” moral dilemmas, which only include more difficult trade-offs. We should also mention that when philosophers use the term “moral dilemma,” they often refer to a much narrower category: situations in which all options are morally wrong. These are special kinds of genuine dilemmas. Let’s call these “ultimate” moral dilemmas (see Figure 5.2). Philosophers disagree about

whether there are ultimate moral dilemmas. Some philosophers say there are – that some situations tragically force you to do something wrong no matter what you do (Barcan Marcus, 1980; Hursthouse, 2001; Nussbaum, 2000). Others hold that it’s always possible to choose something that’s not wrong, that is, morally permissible (Conee, 1982). Consider the example of “Sophie’s choice,” where a mother is forced to choose one of her two children to be murdered, or else faces having them both murdered. On one view, it’s an ultimate dilemma: Part of the tragedy is that Sophie is forced to do something morally terrible, because it’s absolutely morally wrong to select one of your own children for death, even if the alternative is even worse and they would have died anyway. By contrast, other philosophers would argue that although there is a genuine moral conflict, it’s not an ultimate dilemma. According to this view, Sophie is doing the best she can under the circumstances, and so she isn’t doing anything wrong when she sends one child to their death – even if it feels that way.

While psychological research on so-called moral dilemmas sometimes includes cases like Sophie’s choice that some see as ultimate dilemmas, much of the psychological literature involves vignettes inspired by philosophical thought experiments – most famously, trolley problems – that would not normally be considered ultimate “moral dilemmas.” For instance, most philosophers think that if you face a choice between allowing a runaway train to kill five workmen and pulling a switch to divert it onto a side-track where unfortunately there is one workman, you have the opportunity to do something that’s morally right: pulling the switch. If you think that, then you don’t think that it’s an ultimate moral dilemma, where you’re condemned to act wrongly no matter what.3 We will consider whether there is evidence that nonphilosophers treat some cases as ultimate dilemmas later in the chapter.

Figure 5.2 We distinguish trivial trade-offs from genuine moral dilemmas. Ultimate dilemmas are a possible subset of genuine dilemmas.

5.2 The Psychological Source of Moral Dilemmas We have offered an account of moral dilemmas and of what generates them at the normative level – a conflict between weighty opposing moral reasons – as well as of their key experiential markers. We now turn to possible accounts of the underlying psychological machinery: Why are certain kinds of moral trade-offs experienced as moral dilemmas that are epistemically or just psychologically difficult to resolve, whereas others are trivial?

3 Why should anyone feel that Sophie’s choice is an ultimate dilemma but not feel the same way about trolley-style dilemmas? After all, both involve choices between human lives. One possibility is if you think that actively choosing someone to die is intrinsically wrong, especially if it’s your own child, but allowing someone to die (as is possible in some trolley-style dilemmas), especially a stranger, is not.

5.2.1 Emotion versus Reason and Conflicting Processes To put one influential answer in slogan form: Moral dilemmas arise when reason tells us to do one thing, but the heart tells us to do another (Greene, 2008). According to classic dual-process theory, “System 1” processes (which are fast, automatic, and emotion-driven) run in parallel to “System 2” processes (which involve slow, deliberative reasoning), and each process type produces independent inputs into moral judgment. In spite of disagreement about how exactly to characterize the dual processes, the idea is that moral dilemmas arise when dissociable psychological processes produce conflicting outputs. As Cushman and Greene argue: “When two such processes yield different answers to the same question, that question becomes a ‘dilemma.’ No matter which answer you choose, part of you walks away dissatisfied” (Cushman & Greene, 2012, p. 267). There is a wealth of psychological and neuroscientific evidence that dual processing plays a role in moral dilemmas. Famously, Greene and colleagues (see Greene, 2008, 2014) drew on philosophical thought experiments to create vignettes designed to pair emotionally evocative action types with positive consequences. The action generally involved sacrificing one person in order to obtain benefits for others; the types of situation depicted by these vignettes have come to be known as “sacrificial dilemmas.” In “difficult dilemmas,” the emotionally evocative action – the sacrifice – led to greater positive consequences, and thus had strong moral reasons in its favor. For instance, in Crying Baby, villagers are hiding when one of their babies begins to cry, alerting enemy soldiers to their location. The mother has to decide whether to smother and kill her baby to prevent all of their deaths at the hands of the soldiers (the “utilitarian” or “sacrificial” option) or whether to refrain from smothering the baby (the “deontological” or “nonsacrificial” option). By contrast, in a so-called easy dilemma (an oxymoron on our definition!) a new mother is deciding whether or not to kill her baby, simply because it’s unwanted. Greene and colleagues reported that “difficult” dilemmas activated neural areas associated with conflict and cognitive control (though see Baron & Gürcay, 2017, and Sauer, 2012, for criticism). In another study, they found that placing subjects under cognitive load selectively delayed pro-, but not nonsacrificial, judgments (see Greene, 2008). Such evidence is interpreted by Greene and others as showing that the psychological difficulty of moral dilemmas arises because of a conflict between the experience of automatic, emotional aversion to certain actions, and reasoned, deliberative recognition that this action will have the best outcome (for supporting evidence, see Greene, 2014; Patil et al., 2021) – thereby offering a purely psychological explanation of what, in the ethical debate, was characterized as a normative conflict between opposing moral principles. Greene’s dual-process model led to a proliferation of research examining dilemmas that pit emotionally evocative, harmful actions against some seemingly greater good. Indeed, it has led many to analyze all moral dilemmas in
terms of competing dual processes. For instance, Bartels and colleagues write that there is “an underlying agreement. . . [that] moral dilemmas exist because we have diverse psychological processes available for making moral judgments, and when two or more processes give divergent answers to the same problem, the result is that we feel ‘of two minds’” (Bartels et al., 2015, p. 491).

5.2.2 Sacred Values and Tragic Trade-offs Certainly, if our heads and our hearts are in agreement – or intuition and deliberation, or any other pair of processes – that makes for an easy decision, whereas conflict between processes requires resolution. But a focus on conflicts between values based in opposing processes may be an artifact of the “sacrificial” dilemmas that have dominated the literature, which have been designed precisely to pit reason-based processes against affective ones. But some kinds of moral conflict are especially difficult to resolve precisely because similar moral values are competing. Consider the research on “sacred values” and “tragic trade-offs” (Tetlock et al., 2000). A “sacred value” (or “protected value”; Baron & Spranca, 1997; see also Chapter 8 in this volume) is a moral value that is regarded, in some sense, as non-negotiable and absolute – such as human lives, justice, or protecting nature. Whether a value is sacred is measured by explicit high levels of agreement with statements such as “it’s something that we should not sacrifice, no matter the benefits”; “you can’t quantify it with money”; or, “it involves principles that we should defend under any circumstances” (Hanselmann & Tanner, 2008). “Taboo” trade-offs involve sacrificing a sacred value for a so-called secular value – for instance, denying someone life-saving treatment because it costs too much money – and are widely regarded as morally wrong. “Tragic” trade-offs, by contrast, pit “sacred” values against one another. Tragic trade-offs are genuine moral dilemmas in our sense; some may even be “ultimate” dilemmas. Take the “tragic trade-off” of a hospital director who must choose which of two ailing little boys will receive a life-saving liver transplant, when there is only one available (Tetlock et al., 2000). Such a case is difficult to resolve because serious and symmetrical moral reasons pull the agent in incompatible directions: Either way, the director is forced to deny a child a life-saving transplant. Tragic trade-offs can also arise when different sacred values are pitted against one another, such as protecting the environment versus ensuring safe working conditions. Participants find trade-offs between sacred values subjectively difficult (Hanselmann & Tanner, 2008), perceive agents willing to make them as untrustworthy and immoral (Everett et al., 2016; Uhlmann et al., 2013), and express moral outrage if others find such dilemmas easy to resolve (Tetlock et al., 2000). In this last respect, ordinary attitudes concur with arguments by certain ethicists that a virtuous person faced with a tragic dilemma acts only with great hesitation, reluctance, and regret (Hursthouse, 2001). It’s highly implausible that the feeling of moral conflict and difficulty to which such dilemmas give rise is due to a conflict between values rooted in
emotion, on the one hand, and values rooted in reason, on the other. In the liver transplant case, the very same value applies to two incompatible courses of action. The moral loss of allowing either little boy to die is presumably processed in the same way by the same mechanisms; consequently, both options are going to feel bad for similar reasons. The “sacred values” framework typically contrasts nonmoral values with sacred values, neglecting the possibility of nonsacred moral values. Yet, the possibility that not all moral values are seen as sacred is important if we are to use this framework to explain moral dilemmas, since not all moral conflicts give rise to genuine dilemmas. For instance, if promise-keeping is a moral but not a sacred value, while saving a life is a sacred value, that would explain why the coffee versus child emergency scenario is a trivial moral trade-off rather than a genuine moral dilemma. An alternative hypothesis would be that “sacredness” comes in degrees; there could be a hierarchy of sacred values (cf. Shortland & Alison, 2020) such that moral dilemmas arise when there are conflicts between values of sufficient, or sufficiently similar, sacredness. On this hypothesis, the coffee promise versus child emergency scenario would fail to be a genuine moral dilemma, not because promise-keeping isn’t perceived as “sacred,” but because everyday promises are lower on the hierarchy than the sacred value of saving lives.

5.2.3 Value Commensurability and Absolute Constraints How exactly should we understand the reluctance people have to violate sacred values? The language used to describe and measure sacred values – “absolute,” “unquantifiable,” “nonnegotiable” – might suggest that sacred values are regarded as (1) the subject of absolute moral constraints; (2) strictly incommensurable to each other, and also infinitely greater than nonsacred values. If people regard sacred values as the subject of absolute restrictions and/or incommensurability, this could explain why certain conflicts are especially difficult to resolve. Indeed, in philosophy, value incommensurability – the idea that some values can’t be compared and measured on a common scale (Chang, 1997) – is often taken to be intimately linked to ultimate moral dilemmas. For instance, you might think that the values of loyalty and justice are incommensurable: that they cannot be weighed against one another, so you can’t “make up” for disloyalty by making things more just. Similarly, maybe there’s just no fact of the matter as to how much happiness the death of an innocent would have to bring about for it to be “worth it.” Such incommensurability could explain the special tragedy of ultimate moral dilemmas: You are forced to make a sacrifice that cannot be made up for by any of your choice’s benefits, because these benefits cannot even be compared to the disvalue of the sacrifice (Nussbaum, 2000). In a different way, absolutism about moral prohibitions – that is, the idea that doing certain types of things is always morally wrong – could also give rise to ultimate moral dilemmas, as you may end up with a choice between two options, both of which are absolutely wrong to do.


Participants appear to endorse explicit claims of absolutism and incommensurability with regard to sacred values. Moral conflicts can therefore seem irresolvable. For instance, Shortland and Alison (2020) interviewed military veterans who described their experience of real-life dilemmas in combat settings. When conflicting moral values were both regarded as nonnegotiable, sacred values, this led to great epistemic and psychological difficulty, revealed by “redundant deliberations,” “looping” cognitions, and great difficulty reaching a resolution. These veterans appear to have struggled to resolve the conflict either because both options were perceived to be absolutely prohibited and/or because they struggled to make a meaningful comparison between the conflicting values.

However, other evidence suggests that people don’t treat conflicting values as incommensurable in the sense that they are unable to compare them. For instance, in many dilemmas people judge that it’s better for one person to die if this is necessary to save sufficiently many others (e.g., Nichols & Mallon, 2006). Furthermore, these kinds of comparative judgments can be seen without the need to resort to large numbers if the act of sacrificing is disambiguated from comparisons regarding the amount to be sacrificed: Even if people are absolutely committed to avoiding the act of, for example, causing a child to die, they would still make trade-off judgments when it’s impossible to avoid this act (Berman & Kupor, 2020). Thus, conflicts between sacred values may be better captured in terms of commitment to a kind of absolutism: the idea that the acts involved (e.g., allowing children to die; razing acres of rainforest) are always wrong.

Notice that the judgment that such choices involve facing an ultimate dilemma is compatible with the comparative moral judgments that are normally measured in research on moral dilemmas – for instance, that it’s better to choose option A rather than B, or that choosing A rather than B is the “right” choice. Just as it’s possible for someone to judge that they prefer Annie to Beth, and yet also believe that both Annie and Beth are bad people, it may be that in some dilemmas people judge that it’s right to choose option A over option B, and yet still judge that either way you do something wrong (Hursthouse, 2001). This fits the finding that, when given the option of doing so, people sometimes judge both options in sacrificial dilemmas as wrong (Kurzban et al., 2012, Study 1). If this is right, then one way of understanding the special difficulty of some sacrificial dilemmas would be not in terms of a conflict between reasoning-based values and emotion-based values, but as a special kind of tragic trade-off involving a conflict between sacred values that give rise to incompatible absolute obligations (e.g., when an absolute obligation to save lives conflicts with absolute prohibitions against intentional murder).

5.2.4 Automatic Aversion

We have argued that the values involved in moral conflict can, but needn't, have their source in different process-types or "systems." But we are still left with a
puzzle: Why do we feel motivationally conflicted even when we recognize it's the best choice under the circumstances? Evidence suggests that performing certain types of actions feels intrinsically wrong and unpleasant automatically, even if doing so is justified or even harmless. For instance, Cushman and colleagues showed that people have strong automatic aversions to performing pretend harmful actions, such as shooting someone with a fake gun, as indicated by physiological measures (Cushman et al., 2012). Automatic aversion is not sufficient for the experience of moral dilemmas (or pretend harmful actions would be considered wrong). Yet, combined with the perceived violation of moral requirements (cf. Nichols & Mallon, 2006) – perhaps requirements that are sacred or absolute – the automaticity of aversion explains the feelings of conflict characteristic of genuine dilemmas: If you are forced to perform a harmful action, whatever you do will automatically feel aversive, wrong, and distressing even if you know it's the right thing to do under the circumstances. (This could also contribute to persistent "moral residue"; see Section 5.4.)

While some theorists suggest that aversion to physically harming others is innate (Greene, 2008), learning may play a role in feeling automatic aversion to a wider range of actions. For instance, Graham and colleagues' moral foundations theory suggests that we have an innate propensity to feel aversion to violations of different moral foundations (including, for example, care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation) which are then developed depending on which values are emphasized in our culture (Graham et al., 2013). (Greene, 2014, allows that "deontological" aversion may be a product of cultural learning.) More recently, psychological and neuroscientific research has drawn on computational models to theorize how we might acquire moral aversion through reinforcement learning (Crockett, 2013; Cushman, 2013), where associating an action with harmful consequences or moral condemnation leads to automatic negative representations of that action in the future (see the illustrative sketch at the end of this section). Although much research has focused on physically harmful actions, such learning mechanisms could potentially explain aversion to a wider range of action types, like lying, stealing, showing disrespect, emotional harm, or purity violations (Cushman, 2013; Nichols, 2021). Indeed, it's possible that learning mechanisms could explain automatic aversion to the breaking of a moral rule as such. This would help explain the experience of moral dilemmas more broadly, wherever one is required to violate a moral rule whose breach is typically associated with severe consequences or condemnation.

As well as aversion to performing certain acts, we also experience automatic negative affective reactions to the harm that befalls others. So-called outcome aversion is linked to empathic concern and is associated with reluctance both to harm and to fail to prevent harm (Jack et al., 2014; Reynolds & Conway, 2018).
This suggests that the unpleasantness of moral dilemmas arises not just from an aversion to performing certain (e.g., violent) acts, but also from empathic aversion to the harm that would befall others as a consequence of our choice (see Chapter 11 in this volume for a detailed treatment of empathy).
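
The reinforcement-learning account mentioned above can be given a minimal computational sketch. The scenario, learning rate, and numerical values below are illustrative assumptions rather than parameters from Crockett (2013) or Cushman (2013); the sketch only shows how repeatedly pairing an action with harm or condemnation can cache a negative value on the action itself, which then generates aversion even in contexts where the action is harmless or justified.

```python
# Minimal sketch of model-free ("cached value") aversion learning.
# The learning rate, outcome history, and appraisal values are illustrative assumptions only.

def update_cached_value(value, outcome, learning_rate=0.2):
    """Rescorla-Wagner-style update: move the action's cached value toward the observed outcome."""
    return value + learning_rate * (outcome - value)

cached_value = 0.0  # initial value attached to the action (e.g., "push someone")

# Learning history: the action is repeatedly followed by harm or condemnation (negative outcomes).
for outcome in [-1.0, -1.0, -0.8, -1.0, -0.9, -1.0]:
    cached_value = update_cached_value(cached_value, outcome)

print(f"Cached value after learning: {cached_value:.2f}")  # clearly negative

# Later, the same action appears in a context where it is harmless or even optimal
# (e.g., pretend harm, or a sacrifice that saves five). The cached value is bound to the
# action itself, so it still produces aversion regardless of the new context.
expected_outcome_now = 0.5        # a context-sensitive, model-based appraisal of the situation
felt_aversion = -cached_value     # automatic negative reaction driven by the cached action value

print(f"Model-based appraisal of the current context: {expected_outcome_now:+.2f}")
print(f"Automatic aversion from the cached action value: {felt_aversion:+.2f}")
```

On this reading, the persistence of aversion in pretend-harm and sacrificial cases reflects the fact that the cached, action-bound value is insensitive to the current context.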

5.2.5 The Sources of Moral Dilemmas: Summary

Moral dilemmas arise whenever important values exert motivational pull in conflicting directions. Sometimes, a value based in System 1 conflicts with a value processed and prioritized by System 2 (Greene, 2014). However, further research suggests that moral dilemmas can arise even when there's no such inter-system competition. This could happen due to conflict between values that are both processed by System 2 (which might give rise to cognitive conflict, and thereby to emotional conflict); or due to conflict between two values that are both processed by System 1 (which might give rise to emotional conflict first, recognition of which gives rise to cognitive conflict). Nevertheless, the automaticity of the aversion felt toward performing bad actions, bringing about bad outcomes, violating sacred values or important moral rules, or simply doing something perceived as wrong, may be a key part of the explanation for why motivational conflict persists despite reflective beliefs that no better option is available, and it's in this more qualified respect that dual processes may be central to the experience of moral dilemmas.

5.3 How Do We Resolve Moral Dilemmas?

The question of how we resolve moral dilemmas involves three interrelated considerations. The first is content: What values or considerations do people bring to bear to resolve moral dilemmas? The second is process: What types of psychological processes are involved, and how do they interact? The third is the resolution that is reached, what we might call the verdict: What does the agent conclude about what they ought to do?

5.3.1 Content: Which Values Are Brought to Bear?

According to Greene's dual-process model (2008), these questions are intertwined: It says that the values one brings to bear, and thus one's all-things-considered resolution, depend critically on the psychological process used. On this view, moral dilemma resolution is the result of a battle between a default affect-driven response that promotes "deontological" aversion to certain options, and a reasoned response that promotes "utilitarian" preference for whichever action has the best consequences. If we cannot engage in reasoned resolution – because we lack the opportunity or capacity, or because our emotional responses are too strong – then System 1 emotional intuition dominates, and we simply pick whichever action feels least aversive.

Contrary to the dual-process model's claim of a strong link between intuitive resolutions and "deontological" prohibitions, we already saw that automatic, affect-driven responses can include aversion to bad outcomes. Thus, even if an agent responds on a purely intuitive basis, this doesn't always mean they will fail to prioritize better consequences. Indeed, the utilitarian solution to moral dilemmas is sometimes more intuitive than the deontological one (Kahane et al., 2012; for criticism, see Paxton et al., 2014; for a reply, see Kahane, 2014, p. 16). Additionally, "utilitarian" judgments can be made effortlessly, even under cognitive load, when the number of people to be saved is very large (Trémolière & Bonnefon, 2014); and the tendency to choose actions promoting the best consequences for distant others (impartial beneficence) is preserved when responding intuitively (Capraro et al., 2019). Indeed, Bago and De Neys (2019) used a two-response paradigm and found that the majority of participants endorsing the sacrificial ("utilitarian") option as their final judgment after reflection had already favored this option when initially required to make a quick, intuitive judgment.

It's similarly unclear that reasoned resolutions imply specific normative responses. Thus, which kinds of responses are intuitive, and which are deliberative, will vary for different contexts and individuals. While substantial evidence links deliberative reasoning to choosing the best consequences (e.g., Greene, 2008; Patil et al., 2021), reasoned resolutions are not limited to "utilitarian" cost-benefit analysis. Sometimes reasoning is used to override cooperative impulses to make more self-interested choices (Rand et al., 2012); in other contexts reasoning is used to promote deontological concerns over self-interested or utilitarian impulses (Knoch et al., 2006; see Kahane, 2015, note 8, for discussion). Reasoning can also involve considering other people's moral arguments (Paxton et al., 2012), or more domain-general techniques, such as consistency reasoning (Paxton & Greene, 2010), where one reflects on similar, less difficult cases in order to decide what ought to be done now. These reasoning techniques are unlikely to prioritize one kind of normative stance. Consistent with this claim, innovative paradigms that track participants' preferences over time (Gürcay & Baron, 2017; Parker & Finkbeiner, 2020) suggest that reasoning about sacrificial dilemmas can lead participants to change their verdict in different directions: Some participants move away from an initial "utilitarian" inclination toward a reasoned "deontological" solution, while other participants show the opposite trajectory. Thus, reasoning about moral dilemmas can include relatively sophisticated endorsement of "deontological" rights, duties, or rules (e.g., Cushman et al., 2006; Gamez-Djokic & Molden, 2016; Körner & Volk, 2014; see Holyoak & Powell, 2016, for further discussion of deontological principles in moral reasoning). For instance, Gamez-Djokic and Molden (2016) found that inducing a focus on goals relating to duties and responsibilities increased deontological judgments in sacrificial dilemmas. This increase was not
associated with empathic concern or affect; instead, it seemed to be explained by reasoning. Reasoning about deontological values can also concern “role-based” obligations, such as special duties toward close family and friends. Such values can play a role in resolving dilemmas where personal loyalties conflict with rules or the greater good (e.g., deciding whether to report one’s own brother to the police; Lee & Holyoak, 2020), or when multiple personal loyalties conflict (e.g., deciding whether to sacrifice one family member to save other family members; Kurzban et al., 2012). Religious values may also be brought to bear. For instance, belief that a moral rule is grounded in God’s moral authority is associated with the judgment that one mustn’t break that rule to promote better outcomes (Piazza & Landy, 2013), and the influence of such beliefs on dilemma judgments is reduced if religious participants cannot reflect because of time pressure or cognitive load (McPhetres et al., 2018). These examples of moral values that may feature in reason-based resolutions are surely not exhaustive (see, e.g., Graham et al., 2013). Thus reasoning processes are unlikely to be linked to a single specific type of normative resolution across dilemmas generally (Kahane, 2012). Instead, the difference between resolving moral dilemmas intuitively or via deliberation may be better characterized in terms of the number and complexity of moral considerations one brings to bear. This is the central claim of Landy and Royzman’s (2018) “moral myopia model,” according to which purely intuitive responses will consist in singularly attending to one salient aspect of a moral problem, while having motivation and opportunity to engage in reasoning will lead to more complex, integrative responses. In our view, intuitive responses are not literally “myopic” – even automatic intuitions are sensitive to the presence of moral conflict (Bialek & De Neys, 2016), and thus responsive to a range of moral factors. But it does seem that reasoning often allows us to bring to bear a greater number of factors and to evaluate their relevance in more complex ways, including factors that are not immediately salient or intuitive (Kahane, 2012). For instance, Moore et al. (2008) found that participants’ moral judgments regarding sacrificial moral dilemmas were sensitive to whether the one to be sacrificed would have died anyway, but only for participants with high working memory capacity – suggesting that reasoning was needed to give this factor weight in dilemma resolution. Beyond “utilitarian” and “deontological” values, perception of moral character may influence moral dilemma resolution. Those who make pro-sacrificial judgments in moral dilemmas are judged to be less moral, less trustworthy, less praiseworthy, less caring, more self-interested, and generally worse than those who choose the nonsacrificial resolution (e.g., Critcher et al., 2020; Everett et al., 2016). Consequently, people may bring concerns about moral character to bear when deciding how to resolve a moral dilemma. Recent studies support this prediction. Participants are more likely to reject the sacrificial option in sacrificial dilemmas (like the Crying Baby dilemma) when told they are being
assessed for emotional competence (Rom & Conway, 2018) or that they are being observed by others, and this tendency is associated with increased sensitivity to words concerning “warmth”-related personality traits (Lee et al., 2018). There are a number of (mutually compatible) ways that concerns about moral character could affect moral dilemma resolution. The first is instrumental: People just want to be perceived as good, and they self-interestedly moderate their choices to preserve their social reputation. The second is that reasoning about others’ perceptions is used as a heuristic: Refraining from acts that make you look like a bad person might be a reliable heuristic for in fact doing the right thing. A third possibility is that concerns about character play a more direct role in moral reasoning: People may choose actions on the basis of what kind of person they would be if they performed these acts. Consistent with the latter hypothesis, Reynolds et al. (2019) found that assessing sacrificial dilemmas in front of a mirror increased tendencies to reject sacrificial actions (actions that participants tend to associate with immoral character; Uhlmann et al., 2013), even though it did not increase self-reported concern about how others would evaluate them for their decision. To the extent that people consider how to act well or virtuously when comparing options, their reasoning would echo the philosophical tradition of virtue ethics (as opposed to deontology or utilitarianism), which grounds morality in what it is to be a good person, and according to which the right thing to do is what a virtuous person would characteristically do (e.g., to act honestly, kindly, courageously, etc.; Hursthouse, 2001). The distinctive psychological difficulties associated with moral dilemmas could therefore lie in the perception that one must do something seemingly inconsistent with virtue and, indeed, one’s own self-conception (Strohminger & Nichols, 2014), as moral dilemmas require you to do something that typically only a vicious (callous, dishonest, disloyal, etc.) person would do (Hursthouse, 2001, p. 74).

5.3.2 How Do We Weigh Competing Values?

On some views, the psychological conflict experienced in moral dilemmas often does not reflect conflict between moral values that are viewed as genuine; rather, it involves an immediate, "pre-potent" emotional response that effortful reflection reveals to be a kind of moral illusion, pulling us away from the moral values that we reflectively endorse (Greene, 2008). By contrast, much of the research we've reviewed so far suggests that most people do recognize genuine competing values in sacrificial dilemmas and in other contexts. Most people therefore experience many situations as genuine moral dilemmas in our sense – cases where conflicting values need to be weighed against each other.

Some philosophers have offered normative accounts of how conflicts between values can be resolved. On one especially simple model, different values or principles are given different fixed weights that can provide a ranking of "valuableness" that we can use to resolve moral dilemmas when those values conflict (e.g., Chang, 1997). In psychology, the so-called conflict model of moral dilemma resolution suggested by Gürcay and Baron (2017) can be read along such lines. Gürcay and Baron argue that agents weigh up the competing alternatives until an option reaches a threshold of support, leading to its endorsement. Reaching this threshold can be seen as an additive process that depends on the background weight the agent attaches to different values, and on the trade-off of those values required by the dilemma (a schematic illustration of this weighted-additive reading is sketched at the end of this section). This model sits easily with the view that no value is treated in a qualitatively different way from others (e.g., on the basis of having roots in emotional or nonemotional processing, or on the basis of being a "default" value). In this respect, such a model complements the view that the competing moral values at stake in moral dilemmas are commensurable – contrary to certain interpretations of "sacred values" discussed earlier – at least in the descriptive sense that we do add up and weigh sources of moral value and disvalue to form a single overall valuation. A natural way of interpreting this model is that people approach moral dilemmas already equipped with priorities or "weights" attached to different values that they simply apply to the situation at hand. For example, a "deontological" responder is one who places a larger negative weight on intentionally killing than the positive weight placed on saving a life. However, many philosophers have argued that the weight of different moral values and principles isn't fixed in this simple way but always depends on the context. We can't, then, come ready with a specific weight attached to a general value like "don't harm others" or "help people if you can." This idea is central to the pluralist moral system proposed by the philosopher W. D. Ross (1930). On Ross's framework, there are multiple "prima facie," nonabsolute moral duties. They don't have a fixed "weight," so there's no general rule that determines how to resolve conflicts between them; instead, which duty outweighs the other is determined according to the particular moral situation at hand, by exercising moral judgment – where this involves a kind of holistic pattern perception, not appealing to a higher-order rule or principle. In psychological terms, such an all-things-considered judgment could be seen as a System 1 intuition about a conflict between principled duties registered at the System 2 level. Similarly, according to Aristotelian virtue ethics, the virtuous person must exercise "practical wisdom" – a kind of intuitive moral expertise developed over time through habit, reflection, and experience – in order to determine what ought to be done (Hursthouse, 2001). For instance, although there's no principle that prioritizes, for example, honesty over kindness, the virtuous person can "tell" that they ought to tell a white lie when their embarrassed friend asks whether the spill on their dress is very noticeable. Although it's unclear what exactly this "practical wisdom" consists of, it might be interpreted as involving System 1 intuitions (particularly those developed by learning). On the other hand, according to virtue ethics, those who have not yet acquired the intuitive skill of the truly virtuous might have to resolve the conflict using
reasoning (System 2), by reflecting on, for example, what moral exemplars have done in similar situations. On these ethical views, determining the relative weights of competing values is primarily an intuitive process. What might this intuitive process look like in more detail? On one possible model, each moral factor is associated with an emotional response, and the factor that wins out is the one associated with the strongest emotion. For instance, we may simply endorse the option that feels the least emotionally aversive. Supporting this, many studies have found that the strength of emotional responses often predicts people’s judgments in moral dilemmas (see Miller & Cushman, 2013, for an overview). One interpretation of these findings is that participants decide to forego a greater good (e.g., saving more lives) because sacrificing someone evokes stronger negative emotions. However, other studies have found that the strength of various emotional responses has only limited correlation with overall judgments (Horne & Powell, 2016). This suggests that while emotions play a role in dilemma resolution, people tend to reason in a more complex way than simply choosing whichever option feels the least bad. Even when more complex emotions than aversion are taken into account, their strength appears to underdetermine judgment. Royzman et al. (2011) asked participants to evaluate dilemmas that forced a protagonist to choose between committing incest (a disgust-related violation) and allowing great harm to befall someone (a sympathy-related violation). Participants’ moral judgments were consistently predicted by which action they believed would have the greatest costs for everyone. By contrast, their judgments were not significantly related to the levels of emotional distress or the disgust or sympathy evoked by contemplating the different options, or individual differences in participants’ dispositional disgust sensitivity or dispositional sympathy.
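
The weighted-additive, threshold-based reading of the conflict model discussed earlier in this section can also be given a minimal sketch. The weights, option effects, threshold, and noise level below are illustrative assumptions rather than anything estimated by Gürcay and Baron (2017); the sketch only shows how support for one option over another could accumulate from weighted considerations until a decision threshold is crossed.

```python
# Minimal sketch of a weighted-additive, threshold-based conflict model.
# Weights, option effects, the threshold, and the noise level are illustrative assumptions only.
import random

random.seed(1)

# Background weights the agent attaches to competing values (hypothetical).
weights = {"avoid_killing": 1.2, "save_lives": 0.25}

# How each option bears on those values in a sacrificial dilemma (hypothetical).
effects = {
    "sacrifice_one": {"avoid_killing": -1, "save_lives": 5},
    "refrain":       {"avoid_killing": 0, "save_lives": 1},
}

def net_value(option):
    """Weighted-additive valuation of one option."""
    return sum(weights[value] * amount for value, amount in effects[option].items())

# Deliberation: noisy evidence for "sacrifice_one" minus "refrain" accumulates
# until it crosses a positive or negative decision threshold.
threshold = 3.0
support = 0.0
steps = 0
drift = net_value("sacrifice_one") - net_value("refrain")

while abs(support) < threshold:
    support += drift + random.gauss(0, 1.0)  # weighted considerations plus moment-to-moment noise
    steps += 1

choice = "sacrifice_one" if support > 0 else "refrain"
print(f"net values: sacrifice_one={net_value('sacrifice_one'):+.2f}, refrain={net_value('refrain'):+.2f}")
print(f"choice: {choice} after {steps} deliberation steps")
```

Under this reading, strongly felt conflict and slow resolution correspond to closely matched weighted valuations (a small drift toward either threshold), whereas a clearly dominant value yields a quick, low-conflict resolution.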

5.3.3 The Verdict

Reasoning about complex moral considerations doesn't only affect which option is ultimately favored. It can also result in a different moral stance – a resolution that is, in a sense, more nuanced and forgiving. Royzman et al. (2015) found that performance on the cognitive reflection test (a marker of deliberative capacity) predicted the judgment that both options in a moral dilemma were morally permissible (thus, that neither was morally required over the other). In this vein, future research may consider moral dilemma resolution not only in terms of which action is preferred overall, but also in terms of different types of moral attitudes toward both actions. Another question concerns what function reasoning plays beyond influencing all-things-considered moral judgments. After all, sometimes the intuitively preferred option is also reflectively endorsed (Bago & De Neys, 2019). One possibility is that people delay their response just to signal to others that they are moral and take the dilemma seriously (cf. Tetlock et al., 2000). But reasoning processes also allow people to explore justifications for their choice,
which can be important for justifying oneself to others later on (Paxton & Greene, 2010). Although this process remains unexplored, it could also have downstream implications for moral emotions like guilt and regret. Finally, the dominant lab-based research paradigm that asks participants to evaluate two mutually exclusive options (e.g., sacrifice vs. don’t sacrifice) artificially limits the resolution process. In real life, option sets are rarely limited to two mutually exclusive options restricted to a single point in time. Instead, agents can often undertake a series of actions that change the nature of the moral situation in more complex, creative ways. For instance, a “deontological” agent’s recognition of the moral cost of refraining from sacrificing the one could manifest itself in other decisions – such as seeking other ways to save the five, trying to find ways to prevent such situations arising again in the future, and making amends for the harm caused to the five. Investigating more complex resolution options in future research could provide a more nuanced picture of dilemma resolution.

5.4 Moral Residue and the Aftermath of Dilemma Resolution

Most psychological research has focused on how moral dilemmas are resolved. But some of the central psychological characteristics of genuine moral dilemmas only manifest themselves after the resolution stage in "moral residue" or "moral remainder" – a sense of moral unease, remorse, or guilt that lingers no matter which option is chosen. Some philosophers further argue that, far from being irrational, these lingering moral feelings are sometimes morally appropriate, and even indicate that you faced an ultimate moral dilemma – that you were forced to do something morally wrong (Barcan Marcus, 1980; Williams, 1965). Indeed, such feelings may be part of what it is to be a good, loving, loyal person who has been forced to do something terrible by tragic circumstances (cf. Hursthouse, 2001, pp. 73–77).

Goldstein-Greenwood and colleagues (2020) apply the distinction between affective regret and cognitive regret to hypothetical sacrificial dilemmas. Consistent with the notion of moral residue, many decisions, especially utilitarian decisions, produced affective regret (e.g., self-blame or feeling like "kicking yourself") but little cognitive regret (believing that a different decision would have been better, wishing you had decided differently). The combination of affective regret without cognitive regret is especially indicative of the kind of moral residue philosophers would expect of ultimate moral dilemmas, since it implies negative moral feelings about what one has done without believing one acted wrongly.

There are obvious limitations to investigating moral residue using merely hypothetical choices. However, the theoretical construct of "moral injury" from research on combat veterans provides a striking illustration of the psychological impact of real-life moral dilemmas. In this literature, moral injury is theorized as a distinct component of more general trauma, one that results from
witnessing or engaging in actions that transgress deeply held moral values – including killing, or being unable to help wounded civilians without risking the lives of fellow soldiers. These experiences are associated with deeply negative feelings such as hopelessness, shame, distress, and anger, as well as issues like depression, anxiety, and social withdrawal (for a review, see Griffin et al., 2019). Molendijk (2018) argues that the experience of seemingly irresolvable value conflicts (a marker of genuine moral dilemmas) is a common theme of moral injury. She also highlights the complexity of the resulting guilt. Consider this striking account from a Dutch veteran describing a situation where more desperate refugees approached his compound than it was possible to accommodate:

People pressed against one another, against walls, all together. Terrified. Terror in their eyes. I'm going to die, these people thought. Help me, help me. Old men, women, passed out. So, I threw them into the wheelbarrow and drove [them to the compound]. You did what you could. . . . At that point, you're doing it all wrong. Everything. . . . You can't choose between one human life and another human life. So yes, you always do the wrong thing. Everybody in the compound, that didn't fit. (pp. 3–4)

An Afghanistan veteran similarly describes the dilemma of trying to give impromptu life-saving medical treatment, but then receiving military orders to leave. He says of the incident:

So then you have to take off the oxygen mask and take out the IV. For a nurse, that doesn't make sense. I had taken an oath as a soldier, but as a nurse I also had an oath. But those two promises are not compatible over there, you have to choose . . . In the end I chose the [oath I took as a] soldier. (p. 4)

He goes on to write about feelings of guilt – feelings not quelled by his belief that he made the best decision under the circumstances. These accounts seem to report the experience of ultimate moral dilemmas, as veterans describe the confusion, distress, and doubt that arise from knowing that they did the best thing they could under the circumstances, while nevertheless feeling that they transgressed a crucial moral requirement (Molendijk, 2018). Recent work has expanded research on moral injury to other real-life moral dilemmas, such as those facing health-care workers during the COVID-19 pandemic (Borges et al., 2020). Future research in moral cognition might consider drawing on this concept to further illuminate the phenomenology of moral dilemmas.

5.5 Conclusion

To understand morality in all of its real-world messiness and uncertainty, we need to understand what it is to face and resolve moral dilemmas. We have argued that the experience of genuine moral dilemmas arises from the recognition that any choice you make requires you to transgress serious moral
requirements or values – thus triggering negative affect, feelings of moral conflict, and specific forms of regret or guilt that persist as "moral residue" even if you did the best you could under the circumstances. This phenomenon, we suggest, is an all-too-common part of everyday moral life. Recognizing this, and recognizing the rich variety of values that feature in experiences and resolutions of such conflicts, calls for a broadening of dilemma research beyond cases of sacrificing strangers, beyond cases of violating requirements for the sake of a utilitarian greater good, and beyond conflicts between "emotion-based" and "reason-based" values. Correspondingly, future research would benefit from more investigation into how people compare and weigh values against one another. At the same time, a narrower target than "moral trade-offs" is needed if we are to fully understand moral dilemmas, with the characteristic experiences of strong conflict, psychological difficulty, and moral residue that they involve. Rather than resulting from any moral trade-off, the conflicting values most capable of generating these experiences may be those that are held to be equally sacred or absolute, or that strike at the core of our identities as virtuous, moral beings.

References

Bago, B., & De Neys, W. (2019). The intuitive greater good: Testing the corrective dual process model of moral cognition. Journal of Experimental Psychology: General, 148(10), 1782–1801.
Barcan Marcus, R. (1980). Moral dilemmas and consistency. Journal of Philosophy, 77(3), 121–136.
Baron, J., & Gürcay, B. (2017). A meta-analysis of response-time tests of the sequential two-systems model of moral judgment. Memory & Cognition, 45, 566–575.
Baron, J., & Spranca, M. (1997). Protected values. Organizational Behavior and Human Decision Processes, 70, 1–16.
Bartels, D. M., Bauman, C. W., Cushman, F. A., Pizarro, D. A., & McGraw, A. P. (2015). Moral judgment and decision making. In G. Keren & G. Wu (Eds.), The Wiley Blackwell handbook of judgment and decision making (Vol. 1; pp. 478–515). Wiley Blackwell.
Berman, J. Z., & Kupor, D. (2020). Moral choice when harming is unavoidable. Psychological Science, 31(10), 1294–1301.
Bialek, M., & De Neys, W. (2016). Conflict detection during moral decision-making: Evidence for deontic reasoners' utilitarian sensitivity. Journal of Cognitive Psychology, 2(5), 631–639.
Borges, L. M., Barnes, S. M., Farnsworth, J. K., Frescher, K. D., & Walser, R. D. (2020). A contextual behavioral approach for responding to moral dilemmas in the age of COVID-19. Journal of Contextual Behavioral Science, 17, 95–101.
Bostyn, D. H., Sevenhant, H., & Roets, A. (2018). Of mice, men, and trolleys: Hypothetical judgment versus real-life behavior in trolley-style moral dilemmas. Psychological Science, 29(7), 1084–1093.
Capraro, V., Everett, J. A. C., & Earp, B. D. (2019). Priming intuition disfavors instrumental harm but not impartial beneficence. Journal of Experimental Social Psychology, 83, 142–149.
Chang, R. (Ed.). (1997). Incommensurability, incomparability, and practical reason. Harvard University Press.
Conee, E. (1982). Against moral dilemmas. Philosophical Review, 91(1), 87–97.
Critcher, C. R., Helzer, E. G., & Tannenbaum, D. (2020). Character evaluation: Testing another's moral-cognitive machinery. Journal of Experimental Social Psychology, 87, Article 103906.
Crockett, M. (2013). Models of morality. Trends in Cognitive Sciences, 17, 363–366.
Cushman, F. (2013). Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review, 17(3), 273–292.
Cushman, F. A., Gray, K., Gaffey, A., & Mendes, W. (2012). Simulating murder: The aversion to harmful action. Emotion, 12, 2–7.
Cushman, F. A., & Greene, J. D. (2012). Finding faults: How moral dilemmas illuminate cognitive structure. Social Neuroscience, 7(3), 269–279.
Cushman, F., Young, L., & Hauser, M. (2006). The role of conscious reasoning and intuition in moral judgment: Testing three principles of harm. Psychological Science, 17(12), 1082–1089.
Everett, J. A., Pizarro, D. A., & Crockett, M. J. (2016). Inference of trustworthiness from intuitive moral judgments. Journal of Experimental Psychology: General, 145, 772–787.
Gamez-Djokic, M., & Molden, D. (2016). Beyond affective influences on deontological moral judgment: The role of motivations for prevention in the moral condemnation of harm. Personality and Social Psychology Bulletin, 42(11), 1522–1537.
Goldstein-Greenwood, J., Conway, P., Summerville, A., & Johnson, B. N. (2020). (How) do you regret killing one to save five? Affective and cognitive regret differ after utilitarian and deontological decisions. Personality and Social Psychology Bulletin, 46(9), 1303–1317.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55–130.
Greene, J. D. (2008). The secret joke of Kant's soul. In W. Sinnott-Armstrong & C. B. Miller (Eds.), Moral psychology: The neuroscience of morality: Emotion, brain disorders, and development (pp. 35–79). MIT Press.
Greene, J. D. (2014). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Ethics, 124(4), 695–726.
Griffin, B. J., Purcell, N., Burkman, K., Litz, B. T., Bryan, C. J., Schmitz, M., Villierme, C., Walsh, J., & Maguen, S. (2019). Moral injury: An integrative review. Journal of Traumatic Stress, 32, 350–362.
Gürcay, B., & Baron, J. (2017). Challenges for the sequential two-system model of moral judgement. Thinking & Reasoning, 23(1), 49–80.
Hanselmann, M., & Tanner, C. (2008). Taboos and conflicts in decision making: Sacred values, decision difficulty, and emotions. Judgment and Decision Making, 3(1), 51–63.
Holyoak, K. J., & Powell, D. (2016). Deontological coherence: A framework for commonsense moral reasoning. Psychological Bulletin, 142(11), 1179–1203.
Horne, Z., & Powell, D. (2016). How large is the role of emotion in judgments of moral dilemmas? PLOS ONE, 11(7), Article e0154780.
Hursthouse, R. (2001). On virtue ethics. Oxford University Press.
Jack, A. I., Robbins, P., Friedman, J., & Meyers, C. (2014). More than a feeling: Counterintuitive effects of compassion on moral judgment. In J. Sytsma (Ed.), Advances in experimental philosophy of mind (pp. 125–179). Bloomsbury.
Kahane, G. (2012). On the wrong track: Process and content in moral psychology. Mind & Language, 27(5), 519–545.
Kahane, G. (2014). Intuitive and counterintuitive morality. In J. D'Arms & D. Jacobson (Eds.), Moral psychology and human agency: Philosophical essays on the science of ethics (pp. 9–39). Oxford University Press.
Kahane, G. (2015). Sidetracked by trolleys: Why sacrificial moral dilemmas tell us little (or nothing) about utilitarian judgment. Social Neuroscience, 10(5), 551–560.
Kahane, G., Wiech, K., Shackel, N., Farias, M., Savulescu, J., & Tracey, I. (2012). The neural basis of intuitive and counterintuitive moral judgment. Social Cognitive and Affective Neuroscience, 7(4), 393–402.
Knoch, D., Pascual-Leone, A., Meyer, K., Treyer, V., & Fehr, E. (2006). Diminishing reciprocal fairness by disrupting the right prefrontal cortex. Science, 314(5800), 829–832.
Körner, A., & Volk, S. (2014). Concrete and abstract ways to deontology: Cognitive capacity moderates construal level effects on moral judgments. Journal of Experimental Social Psychology, 55, 139–145.
Krosch, A., Figner, B., & Weber, E. U. (2012). Choice processes and their postdecisional consequences in morally conflicting decisions. Judgment and Decision Making, 7(3), 224–234.
Kurzban, R., DeScioli, P., & Fein, D. (2012). Hamilton vs. Kant: Pitting adaptations for altruism against adaptations for moral judgment. Evolution and Human Behavior, 33, 323–333.
Landy, J. F., & Royzman, E. B. (2018). The moral myopia model: Why and how reasoning matters in moral judgment. In G. Pennycook (Ed.), The new reflectionism in cognitive psychology (pp. 70–92). Routledge.
Lee, J., & Holyoak, K. J. (2020). "But he's my brother": The impact of family obligation on moral judgments and decisions. Memory & Cognition, 48, 158–170.
Lee, M., Sul, S., & Kim, H. (2018). Social observation increases deontological judgments in moral dilemmas. Evolution and Human Behavior, 39(6), 611–621.
Mandel, D. R., & Vartanian, O. (2008). Taboo or tragic: Effect of tradeoff type on moral choice, conflict, and confidence. Mind & Society, 7, 215–226.
Mata, A. (2019). Social metacognition in moral judgment: Decisional conflict promotes perspective taking. Journal of Personality and Social Psychology: Attitudes and Social Cognition, 117(6), 1061–1082.
McPhetres, J., Conway, P., Hughes, J. S., & Zuckerman, M. (2018). Reflecting on God's will: Reflective processing contributes to religious peoples' deontological dilemma responses. Journal of Experimental Social Psychology, 79, 301–314.
Miller, R., & Cushman, F. (2013). Aversive for me, wrong for you: First-person behavioral aversions underlie the moral condemnation of harm. Social and Personality Psychology Compass, 7(10), 707–718.
Molendijk, T. (2018). Toward an interdisciplinary conceptualization of moral injury: From unequivocal guilt and anger to moral conflict and disorientation. New Ideas in Psychology, 51, 1–8.
Moore, A. B., Clark, B. A., & Kane, M. J. (2008). Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychological Science, 19(6), 549–557.
Nichols, S. (2021). Rational rules: Towards a theory of moral learning. Oxford University Press.
Nichols, S., & Mallon, R. (2006). Moral dilemmas and moral rules. Cognition, 100(3), 530–542.
Nussbaum, M. C. (2000). The costs of tragedy: Some moral limits of cost-benefit analysis. Journal of Legal Studies, 29, 1005–1036.
Parker, S., & Finkbeiner, M. (2020). Examining the unfolding of moral decisions across time using the reach-to-touch paradigm. Thinking & Reasoning, 26(2), 218–253.
Patil, I., Zucchelli, M. M., Kool, W., Campbell, S., Fornasier, F., Calò, M., Silani, G., Cikara, M., & Cushman, F. (2021). Reasoning supports utilitarian resolutions to moral dilemmas across diverse measures. Journal of Personality and Social Psychology, 120(2), 443–460.
Paxton, J. M., Bruni, T., & Greene, J. D. (2014). Are "counter-intuitive" deontological judgments really counter-intuitive? An empirical reply to Kahane et al. (2012). Social Cognitive and Affective Neuroscience, 9, 1368–1371.
Paxton, J. M., & Greene, J. D. (2010). Moral reasoning: Hints and allegations. Topics in Cognitive Science, 2(3), 511–527.
Paxton, J. M., Ungar, L., & Greene, J. D. (2012). Reflection and reasoning in moral judgment. Cognitive Science, 36(1), 163–177.
Piazza, J., & Landy, J. (2013). "Lean not on your own understanding": Belief that morality is founded on divine authority and non-utilitarian moral thinking. Judgment and Decision Making, 8(6), 639–661.
Rand, D. G., Greene, J. D., & Nowak, M. A. (2012). Spontaneous giving and calculated greed. Nature, 489(7416), 427–430.
Reynolds, C. J., & Conway, P. (2018). Not just bad actions: Affective concern for bad outcomes contributes to moral condemnation of harm in moral dilemmas. Emotion, 18(7), 1009–1023.
Reynolds, C. J., Knighten, K. R., & Conway, P. (2019). Mirror, mirror, on the wall, who is deontological? Completing moral dilemmas in front of mirrors increases deontological but not utilitarian response tendencies. Cognition, 192, Article 103993.
Rom, S. C., & Conway, P. (2018). The strategic moral self: Self-presentation shapes moral dilemma judgments. Journal of Experimental Social Psychology, 74, 24–37.
Ross, W. D. (1930). The right and the good. Oxford University Press.
Royzman, E. B., Goodwin, G. P., & Leeman, R. F. (2011). When sentimental rules collide: "Norms with feelings" in the dilemmatic context. Cognition, 121, 101–114.
Royzman, E. B., Landy, J. F., & Leeman, R. F. (2015). Are thoughtful people more utilitarian? CRT as a unique predictor of moral minimalism in the dilemmatic context. Cognitive Science, 39(2), 325–352.
Sauer, H. (2012). Morally irrelevant factors: What's left of the dual process-model of moral cognition? Philosophical Psychology, 25(6), 783–811.
Shortland, N., & Alison, L. (2020). Colliding sacred values: A psychological theory of least-worst option selection. Thinking & Reasoning, 26(1), 118–139.
Strohminger, N., & Nichols, S. (2014). The essential moral self. Cognition, 131(1), 159–171.
Tetlock, P. E., Kristel, O. V., Elson, B., Green, M. C., & Lerner, J. S. (2000). The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and heretical counterfactuals. Journal of Personality and Social Psychology, 78(5), 853–870.
Trémolière, B., & Bonnefon, J.-F. (2014). Efficient kill–save ratios ease up the cognitive demands on counterintuitive moral utilitarianism. Personality and Social Psychology Bulletin, 40(7), 923–930.
Uhlmann, E. L., Zhu, L., & Tannenbaum, D. (2013). When it takes a bad person to do the right thing. Cognition, 126(2), 326–334.
Williams, B. (1965). Ethical consistency. Proceedings of the Aristotelian Society (Supplement), 39, 103–124.

6 The Moral Domain
What Is Wrong, What Is Right, and How Your Mind Knows the Difference
Samantha Abrams and Kurt Gray

Imagine that a spacecraft lands on Earth. World leaders assemble to greet the visitors, but when the aliens step out of their craft, they are not interested in discussing politics. Instead, they want to answer one question: What is the psychological nature of human morality? The aliens journey to many academic psychology conferences – they may be the lamest aliens in the galaxy – and listen to many presentations by moral psychologists who give their opinions on what makes up the "moral domain." The moral domain is a technical name, but its definition is simple, revolving around two questions:

WHAT: What acts do people judge as morally good or bad?
HOW: How do they judge whether these actions are morally good or bad?

The "what" question is about the external, social world – what acts in society are deemed morally wrong. The "how" question is about our internal, mental mechanisms – how the mind decides right from wrong. More succinctly, "what is immoral" is about the world, and "how do we judge" is about the mind. Of course, our minds exist within the world, and the world – especially our social world – consists of minds. Clarifying what behaviors in the world are judged as immoral requires examining how the mind comes to make moral judgments, and vice versa. But this "what in the world" and "how in the mind" distinction is an important structure for making sense of competing visions of the moral domain. Namely, the question that a theory attempts to answer first – either "what?" or "how?" – influences how that theory answers the second question. Another key consideration for how the aliens understand the moral domain of humans is the question of "when." When exactly did the aliens arrive and attend talks: today, in the past, or in the future? How scientists understand many topics evolves through time, and the moral domain is no different. These evolutions in understanding may sometimes look gradual and peaceful, but progress in science is uneven and fueled by fighting. In moral psychology too, movement is fueled by an intense clash of paradigms, competing visions of our minds and our social worlds. In this chapter, we explore the trajectory of moral psychology, revealing how our alien friends – and we humans too – might understand the moral domain.
We chart out the past, present, and emerging future of both the "what in the world" and the "how in the mind" of the moral domain. As we explore the science, we also ask "why?" – as in, "why did these theories develop and rise to prominence?" – and explore the paradigms surrounding each time-bound understanding of the moral domain. These paradigms include the social, cultural, and other scientific forces that shape how we understand our minds and our social worlds.

***

Like any social enterprise, science evolves through paradigms (Kuhn, 1970), and moral psychology is no exception. Theories gain traction in science not just because of their veracity, but because of their popularity. The following chapter traces the evolution of the moral domain through three major paradigms in moral psychology – past, present, and future. We critically evaluate the evidence for and against these conceptions of the moral domain, and explain how broader movements in psychology and society influenced each. We also acknowledge concurrent and divergent views of the moral domain within each paradigm and illustrate how these paradigms respond to and build on one another.

The Past "How." If the morally curious aliens visited psychology conferences in the 1970s and 1980s, they would understand the moral domain as articulated by Elliot Turiel. Coming on the heels of the cognitive revolution in psychology, Turiel and his contemporaries were most concerned with explaining the "how" – as in, "how does the mind make moral judgments?" The "Turielian" view of morality posited that the moral mind was universal, that people decided right from wrong in all situations based on reasoning about harm. On this view, the moral world is defined by whatever actions people think are harmful.

The Present "What." But if the aliens attended talks today, they would leave Earth with a very different picture of the moral domain, one articulated by Jonathan Haidt. Struck by the apparent cultural diversity of the moral world, Haidt shifted gears toward explaining the "what" – as in, "what kinds of actions are considered immoral?" The "Haidtian" view of morality argues that the moral world consists of many different acts, not just those that are harmful. The moral world is governed by an innately prepared yet culturally variable set of moral values, which the moral mind uses to quickly and intuitively decide whether an act falls into the moral domain.

The Future "How and What." Neither Turiel's nor Haidt's conception of the moral domain matches modern evidence, but each account gets part of the story right. The Haidtian approach gets many things right about the diversity of "what" in the moral world, but we review research that suggests it gets much of the science of the moral mind wrong. On the other hand, we suggest that while the many issues with Turiel's take on the moral world were rightly disavowed, its core tenets about the "how" of the moral mind were prematurely dismissed. We suggest that the theory of dyadic morality synthesizes these key insights from paradigms past and present, positing a harm-based moral mind very
similar to Turiel's while also drawing on Haidt's moral relativism to make sense of a diverse and culturally constructed moral world. Here, we offer an iconoclastic interpretation of studies and theories about the moral domain – both mind and world (see Table 6.1). We challenge the dominant paradigm with an exhaustive evaluation of the empirical record and the intellectual forces that pushed the field's orthodoxy from Turielian to Haidtian, and synthesize these perspectives to bring our understanding of the moral domain into the future.

Table 6.1 How each moral psychological paradigm answers the defining "how" and "what" questions of the moral domain

HOW does the mind make moral judgments?
  Turiel's past paradigm: Reasoning about rights, justice, and welfare (harm)
  Haidt's present paradigm: Quick intuitions and affective responses triggered by moral modules
  TDM (future paradigm): Intuitive perceptions of harm – closer matches to a dyadic harm-based template are seen as more immoral

WHAT actions in the world do we judge as immoral?
  Turiel's past paradigm: Acts assumed to be harmful
  Haidt's present paradigm: Acts that violate innately prepared (but culturally activated) moral values of harm/care, fairness, loyalty, authority, or purity
  TDM (future paradigm): Acts perceived as harmful; perceptions of harm vary by culture and can arise from the violation of diverse values

6.1 Turiel and Paradigms Past

Turiel's interest in "how" goes back further than his interest in morality. As a developmental psychologist, he believed that studying children and adolescents could tell us something about the way the adult mind works. He believed that children are unencumbered by the layers of acculturation and self-reflection that plague the adult mind, making it easier to study the core mental processes that govern human minds across the lifespan. But unlike many other child researchers, Turiel also believed that children have more insight into their minds than most adults give them credit for. While some cognitive psychologists were coming up with creative ways to indirectly assess decision making in the mind, Turiel's solution to the "black box" problem was much simpler: just ask. Asking children and adults alike to explain their thought processes for themselves proved fruitful for Turiel, and these interviews became the basis for his theoretical account of the moral mind.

6.1.1 Turiel's Moral Domain: A Universal Distinction

One of the ways that Turiel noticed children differed in their judgments of right and wrong was their dependence on rules. In some cases, rules were very
important for children in these judgments. For example, most 5- to 11-year-olds reasoned that a child should be punished for leaving toys on the floor of their classroom only if there was a school rule against doing so (Weston & Turiel, 1980). But children judged other things as wrong regardless of the rules. Most of the children interviewed agreed that hitting another child is always wrong, regardless of whether there is a school rule against hitting other students. When we make judgments of right and wrong, why do situational factors like rules and social norms matter in some cases but not others? According to Turiel, this is because we judge behaviors differently based on whether they fall under the moral or conventional domains. Turiel reasoned that “wrongs” can be separated into two distinct domains. The moral domain is absolute. Moral violations are always wrong – and moral prescriptions are always mandatory – regardless of where, when, or why they were committed (Turiel et al., 1987). Prototypical moral violations include theft, discrimination, and causing physical harm. In the scenarios we described earlier, children viewed hitting another child as a moral violation because it is wrong even if there are no school rules against hitting. In contrast, the conventional domain is tied to social context. Conventional violations are only wrong if they go against explicit rules, social authorities, or normative behaviors (e.g., arriving too late or too early). Children judged the act of leaving toys on the classroom floor as a conventional violation because it was only wrong if the school had a rule against it. But what makes something a moral violation rather than a conventional one? Turiel argued that the key difference between the moral and conventional domains was based on concerns about justice, rights, and welfare, which we can broadly categorize under the umbrella of harm. Through conducting interviews with both adults and children, he discovered that participants almost always judged a behavior as wrong when they believed it to be harmful, regardless of rules, authorities, or social context. By contrast, participants relied much more on social norms and authorities to determine whether nonharmful behaviors were wrong or not. The distinguishing factor between moral and conventional wrongs was the extent to which people believed that they caused harm.

6.1.2 Context/History/Roots

Turiel and his colleagues' emphasis on the mind represented a drastic change from the way scientists had approached psychological research just a few decades earlier. As psychology transitioned into an empirical field in the 1920s and 1930s, the rise of behaviorism encouraged many researchers to give up on studying the seemingly unobservable properties of the mind to instead study patterns of behavior. But by the mid-1950s it became clear that behavioral data were incomplete without the mental mechanisms to explain them. Early cognitive researchers showed that we can indirectly observe mental processes, and argued that studying the mind is the only way that psychology could progress. Turiel's
mind-centered approach to moral psychology was heavily influenced by the cognitive revolution. At the same time, Turiel’s focus on reasoning was likely inspired by the analytic philosophers that dominated the moral domain. To them, the moral mind was defined by rationality, logic, and reason, and the moral world consisted solely of issues related to individual rights, justice, and welfare. Early analytic philosophers like Immanuel Kant, and eventually more modern philosophers like Alan Gewirth and John Rawls, argued that moral imperatives were universal and discoverable through rationality and reason (Gewirth, 1978; Kant, 1785/1998; Rawls, 1999). This branch of philosophy’s emphasis on reasoning eventually seeped into early theorizing in psychology. When Turiel came onto the scene, he followed the intuitions of the analytic philosophers that came before him and focused his study of the moral domain on conscious reasoning about harm. Concurrent Perspectives. Some of Turiel’s contemporaries who took more global approaches to understanding morality did not view morality and convention as distinct domains of reasoning (Turiel et al., 1991). Developmental psychologists like Jean Piaget, and later Lawrence Kohlberg, also believed that the development of reasoning is a key feature of the moral mind but thought that conventional and moral judgments were progressive steps along a universal trajectory of moral development (Kohlberg & Hersh, 1977; Piaget, 2013). Young children first learn to distinguish right and wrong based on rules and authorities, but as they grow into adulthood they learn to use abstract principles of justice and welfare to make these judgments. By contrast, cultural anthropologists and psychologists like Richard Shweder and Manamohan Mahapatra also regarded reasoning – particularly through discourse – as a key driver of moral judgments, but they believed that moral and conventional judgments were both culturally variable and strongly rooted in social context (Shweder et al., 1997; see also Chapter 20 of this volume).

6.1.3 Deconstructing Turiel: Mind THEN World

These social and scientific influences led Turiel to take a mind-first approach to understanding the moral domain. He focused first and foremost on the mental mechanisms that could explain "how in the mind," and let the answer guide his explanation of "what in the world." Here, we summarize Turiel's account of the moral mind and world and review the seminal literature that catalyzed its success as a paradigm. Mind. Turiel's moral mind decided right from wrong by reasoning about harm and fairness. This primary mechanism for moral judgment is present from an early age; 4- to 8-year-olds justify their moral evaluations primarily by appealing to avoidance of harm and promotion of welfare (Davidson et al., 1983). When harm or injustice is present, people's judgments are characterized by obligation, generalizability, and nonalterability. Children ages 7–11 and adults thought rules that prevented harm (e.g., no kicking other people, no
stealing) were good rules to have in both public and private contexts (generalizable), should not be left to personal discretion (obligatory), and should not be changed (nonalterable). In contrast, participants thought rules that had high social utility but did not prevent harm (e.g., use the green door to exit the theater) could be left to personal discretion in private contexts and altered in either public or private contexts (Miller & Bersoff, 1988). Once we’ve assessed that a given social situation involves harm, the moral mind uses this fixed and universal reasoning to determine right from wrong and justify our evaluations. World. The moral world is made up of whatever the moral mind deems to be harmful. Explicitly harmful social events like inflicting physical pain, stealing, and discrimination reliably elicit obligatory, generalizable, and nonalterable moral judgments (Turiel et al., 1987), but nonprototypically harmful behaviors can also fall in the moral domain if people judge them as harmful. People who judge abortion to be permissible also tend to believe that life begins close to or at birth, whereas people who think abortion is impermissible also tend to believe that life begins at conception (Smetana, 1981). The difference in informational assumptions about when life begins leads to differing evaluations of harmfulness, and consequently different domain categorizations. Moral disagreement therefore emerges when people differ in their assessments of harm. Participants who thought behaviors like homosexuality, incest, and consumption of pornography were morally impermissible were more likely to use a combination of reasoning about harm and other types of justification than those who thought these behaviors were permissible (Turiel et al., 1991). What exists in the moral world is dependent upon how the moral mind evaluates harm in the social world.

6.1.4 Turiel in Light of Modern Evidence

Despite the methodological limitations of the time, Turiel's work unearthed a few key insights that are in line with modern evidence. However, his account is not without its flaws. Here, we review the elements of Turiel's moral domain that modern evidence supports, and those that it rejects.

What Turiel Got Right. First, most moral psychologists today would agree that there is an important distinction between the moral and conventional domains. Moral judgments are held with deeper conviction (Skitka et al., 2015), are seen as more objective and universal (Goodwin & Darley, 2012; Van Bavel et al., 2012), and are more independent of rules and authority (Jambon & Smetana, 2018; Mulvey, 2016) compared to nonmoral social judgments. Empirical work also suggests that these domain distinctions are even observable in our physiology. Children ages 3–4 and undergraduates showed greater pupil dilation when watching a video of a moral transgression (destroying another person's artwork) compared to a conventional transgression (not following the rules of a game), and also attended significantly more to the victim of the moral transgression than the bystander in the conventional transgression (Yucel et al., 2020). Moral judgments can also be differentiated from
conventional judgments through different patterns of neural activation (White et al., 2017). Though there are reasonable objections to Turiel's definitions of the moral and conventional domains, which we explore in more depth below, most researchers agree that moral judgments are distinct from other kinds of social evaluations (see Schein & Gray, 2018, for a review).

Second, recent research supports Turiel's harm-based theory of morality. There is a general consensus among moral psychologists that intentional harm accounts for the most typical and universal moral judgments (Graham et al., 2019; Hofmann et al., 2014). Obviously harmful behaviors like murder, rape, assault, abuse, and theft elicit strong moral outrage (Hutcherson & Gross, 2011) and are widely condemned across cultures (Mikhail, 2009; Shweder, 2012). The centrality of harm is not merely an artifact of biased questions and methods, as participants freely generate scenarios or concerns involving harm when given the opportunity. Research using ecological momentary assessments suggests that harm is the most frequent concern in the moral judgments people make in their daily lives (Hofmann et al., 2014), and when asked to generate examples of immoral acts, over 90 percent of people provide examples of harmful behaviors (Schein & Gray, 2015). While other factors like norm violation (Malle et al., 2014) and negative affect (Inbar et al., 2012) also contribute to perceptions of immorality, they cannot explain the difference between moral wrongs and other wrongs without the presence of intentional harm.

What Turiel Got Wrong. Despite the kernel of truth in these lasting insights, the Turielian account gets some important things about the moral mind and world wrong. Recent work has called into question the central role of conscious reasoning in moral judgment formation. Infants show an intuitive preference for prosocial compared to antisocial agents despite not yet having developed the cognitive capacity for reasoning (Van de Vondervoort & Hamlin, 2018), and even professional philosophers who have expertise in moral reasoning tend to show the same biases in moral judgment as laypeople (Schwitzgebel & Cushman, 2012). Research also suggests that people do not always have conscious access to the reasons underlying their moral judgments (Cushman et al., 2006), making reasoning an unreliable guide to moral judgment.

Another key issue with the research that came out of the Turielian paradigm is its failure to account for social and cultural differences in morality. Modern psychologists acknowledge that while concerns about rights, justice, and welfare may be central to the moral domain for White, liberal, educated men, these concerns may not be as universal as Turiel and his contemporaries would have liked to believe. Developmental work around the same time suggested differences in morality across gender, with women and girls endorsing care and compassion as legitimate moral concerns more than men and boys (Gilligan, 1993; Gilligan & Attanucci, 1988). In modern research, differences between liberals and conservatives in the United States in free will beliefs (Everett et al., 2020) and consequentialist thinking (Hannikainen et al., 2017) motivate diverging moral judgments, and some work suggests that these differences extend to
the types of behaviors people deem morally relevant (Graham et al., 2009; Graham et al., 2011). Turiel’s claims of universality seem to break down even further when considering global cultural differences. Rationalists like Turiel mainly employed cross-cultural research to explore differences in the rate of moral development (e.g., Turiel et al., 1978), but we now know that there are important differences in the content of moral judgments across cultures. Chinese participants used the term “immoral” to describe behaviors that were either uncivil or harmful, whereas Westerners were more likely to tightly link the term to harmful acts (Buchtel et al., 2015). Cross-cultural differences in the tightness of social norms also impact the enforcement of moral rules and punishments (Gelfand et al., 2017). While Turiel and his contemporaries briefly acknowledged how social factors could influence the behaviors that comprise people’s moral worlds (Kohlberg et al., 1983; Turiel et al., 1991), their failure to center moral differences in their research sparked criticism. These limitations, in combination with mounting social forces and concurrent paradigm shifts in other fields of psychology, led to the demise of Turiel’s universal “how” in the moral mind, and to the rise of Haidt’s pluralistic “what” in the moral world.

6.2 Haidt and the Present Paradigm

Haidt's work with cultural psychologists and anthropologists led him to see firsthand how social influences produced astonishing variability in people's everyday thoughts and behaviors. He observed that what many people thought was "wrong" was not the same as what we might consider "harmful." Hot-button issues like gay rights and religious freedom were clearly matters of moral concern for conservatives despite not being explicitly about harm, yet Haidt's left-leaning colleagues misguidedly relegated these issues to the conventional domain on the basis of harm-centered moral reasoning. Principles of individual rights and welfare seemed insufficient for explaining the diversity of the moral world, and so Haidt set out to describe the moral world himself.

6.2.1 Haidt's Moral Domain: Affect and Relativity

One of the ways that Haidt's moral world differs from Turiel's is that Haidt does not see harm as the key differentiator between the moral and conventional domains. Haidt felt that Turiel had not paid enough attention to how people defined the moral domain for themselves. Oriya Hindu Brahmans, for example, would say that it is immoral for one's son to eat chicken the week after his father died, but this seems to be a far cry from the values of justice and welfare that Turiel believed were at the core of the moral domain (Shweder et al., 1997; see also Chapter 23, this volume). Haidt did not think there was an objective dimension to evaluate moral "rightness" and "wrongness." Instead,
communities construct unique systems of moral values for themselves (Haidt & Kesebir, 2010). However, Haidt did not think moral values were constructed randomly. Instead, he reasoned that they must emerge to provide some sort of social utility. While concerns about individual harm and fairness may have been useful for our individual survival, these alone were not enough to secure the survival of our species. Valuing things like loyalty to one's in-group, respect for authority figures, and sexual/bodily purity for reproduction would have also been important for outcompeting rival groups and ensuring the survival of one's own group. Over time, some of these "foundations" became more useful to a given community than others. Haidt's moral foundations theory (MFT) argues that humans possess a set of evolutionarily prepared but culturally activated (Haidt & Joseph, 2004) modules that give rise to quick, intuitive, and affectively charged moral judgments. These foundations are differentially activated across cultures and function independently from explicit moral reasoning.

6.2.2 Context/History/Roots

Mounting evidence from anthropologists and cultural psychologists in the late 1980s and early 1990s forced psychologists to acknowledge the importance of cultural differences. Researchers began to map out psychological variation in the world on dimensions such as values (Inglehart et al., 1998) and self-concept. Even basic mental processes, like visual perception and spatial cognition, that were once thought to be universal were shown to be culturally variable (D'Andrade, 1995).

At the same time, much of the popular discourse was focused on the cultural divides that were becoming more and more entrenched in American politics. The 1990s were rife with ongoing debates between the left and the right, particularly concerning issues of sexual morality. Haidt saw the liberal/conservative divide as analogous to cross-cultural differences, and applied this relativistic approach to his own research on moral judgments across political divides.

Just as the cognitive revolution led Turiel to emphasize conscious reasoning, the rise of intuitionism and automaticity led Haidt to emphasize quick, implicit judgments. Foundational work by researchers like Bargh (1994) and Greenwald and Banaji (1995) inspired a wealth of research demonstrating the unconscious nature of a surprising number of humans' social judgments and behaviors. Other work by Forgas (1995) and Petty and Cacioppo (1986) catalyzed psychologists' exploration of emotion's role in judgment and decision making. Haidt sought to offer an alternative to the rationalist accounts dominating moral psychology by placing social influences and intuition at the center of his conception of the moral domain (Haidt, 2001). Emotion is at the core of Haidt's intuitionism, so Haidt structured the moral mind in the same way that the prevailing theory of emotion at the time structured the emotional mind. Basic emotions theory (Ekman, 1999; Izard, 1992) suggested that emotions are neuropsychologically basic in that they serve specific and essential social and biological functions and manifest as discrete psychological experiences. Haidt's
moral foundations also served specific evolutionary functions, so he reasoned that the moral mind must be distinct and modular, too.

Concurrent Perspectives. As was the case under the Turielian paradigm, not everyone believes that a distinction between the moral and the conventional can be made. Some argue that the cognitive processes we use to make moral judgments are essentially the same as the ones we use to make nonmoral judgments (Nichols, 2021). Others contend that moral judgments are far too disparate for us to construct theories about a unified moral domain – let alone distinguish it from a conventional one (Sinnott-Armstrong, 2018; Stich, 2018). Still, the idea that morality qualitatively differs from convention was popular before Haidt and MFT came onto the scene, and remains the predominant view in moral psychology to this day (see Schein & Gray, 2018, for a review). Similarly, some researchers also question the centrality of affect – as well as the exclusion of reasoning – in the process of moral judgment. These researchers grant that reasoning may not be the primary driver of moral judgment, but argue that it still shapes our judgments in crucial ways, such as when we change our minds or override our automatic biases (Paxton & Greene, 2010). Despite these objections, most researchers agree with Haidt that affect and intuition are more important in moral judgment formation than previously thought.

6.2.3 Deconstructing Haidt: World THEN Mind

These intellectual and cultural forces inspired Haidt to take a world-first approach to understanding the moral domain. He focused first and foremost on the distinct features of moral behaviors that could describe "what in the world," and used this taxonomy to guide his explanation of "how in the mind." Here, we summarize Haidt's account of the moral world and mind and the research that led to its success as a paradigm.

World. Haidt's moral world is comprised of behaviors that uphold or violate one or more of the five moral foundations: harm/care, fairness/reciprocity, ingroup/loyalty, authority/respect, and purity/sanctity (Graham et al., 2009; Graham et al., 2011; Haidt & Graham, 2007). Early cross-cultural anthropological work identified three main types of moral values – the ethics of autonomy, community, and divinity (Haidt et al., 1993; Shweder et al., 1987; Shweder et al., 1997) – and Haidt later expanded this taxonomy to develop his five foundations. Each foundation served a specific purpose in our evolutionary history: harm/care extended humans' sensitivity to the suffering of offspring to other individuals, fairness ensured reciprocity in interactions with non-kin, ingroup/loyalty allowed larger cooperative groups to form, authority kept hierarchical communities intact, and purity kept our bodies safe from disease (Haidt & Graham, 2007). People supposedly differ in the extent to which they endorse each of the moral foundations as morally relevant based on their sociocultural upbringing. MFT is widely applied to explain differences between liberals and conservatives
(Graham et al., 2009; Graham et al., 2011; Haidt & Graham, 2007; Haidt & Hersh, 2001; Koleva et al., 2012) by suggesting that liberals primarily endorse harm and fairness, while conservatives moderately endorse all five foundations. Political differences in moral foundations have been used to explain the left–right divide on many moralized American political issues, such as abortion, gay rights, religious freedom, gun control, and global warming (Graham et al., 2009).

Mind. Haidt's moral mind is made up of five distinct affective modules, which correspond to each of the five moral foundations posited by MFT. The theory suggests that we are born with innately prepared (but flexible) moral knowledge, which is then shaped and activated by our experiences (Haidt & Joseph, 2004, 2007). Nature provides a "first draft" of the moral mind, and nurture later edits it (Marcus, 2004). This first draft is comprised of domain-specific mental structures, often connected to specific emotions, that employ "distinct cognitive computations" (Young & Saxe, 2011) to strongly influence moral judgment (Haidt, 2001; Haidt & Joseph, 2007). These modular structures have unique triggers from the moral world (inputs) and affective experiences in the moral mind (outputs). For example, the harm/care module is triggered by suffering or distress in others, and its output is compassion for the victim or anger at the perpetrator. In contrast, the purity module responds to inputs like waste, disease, and nonnormative sexual behavior, and produces an output of disgust (Graham et al., 2013). Haidt believes that the innateness of this moral knowledge allows us to make moral judgments based on quick intuitions and gut reactions. This near-automatic sense of right and wrong is guided by our emotional responses to events in the outside world (Haidt, 2001; Haidt & Joseph, 2004). Participants induced with unrelated feelings of disgust evaluated moral transgressions more severely than those in a control condition (Schnall et al., 2008; Wheatley & Haidt, 2005), suggesting that moral judgments are driven by affectively laden moral intuitions (but see Landy & Goodwin, 2015). In one famous (but unpublished) study, participants judged a scenario involving explicitly unharmful sibling incest as morally wrong, but were unable to provide adequate justification for their evaluation (Haidt et al., 2000). This "moral dumbfounding" suggested that the explicit reasons people provide for their moral judgments are consequences, not causes, of the judgments themselves.

6.2.4 Haidt in Light of Modern Evidence

The Haidtian paradigm has uncovered a few pearls of wisdom that have served moral psychologists well. But it has also plagued the field with ideas that are unsupported by more recent evidence. Here, we review the aspects of Haidt's moral domain that have held up, and those that crack under the pressure of empirical scrutiny.

What Haidt Got Right. First, using people's explicit reasoning is not a reliable way to understand their moral judgments. A large body of work has
documented that reasoning processes are heavily influenced by motivational factors, and that people are flexible in the principles they apply to justify their decisions (Bartels & Medin, 2007; Ehrich & Irwin, 2005). Features of a situation that are irrelevant to moral reasoning but relevant to one's identity or ideology can therefore motivate people to differentially apply moral principles (Hester & Gray, 2020).

Second, emotions and intuition are a central part of moral judgments. People make moral evaluations of social situations more quickly than they make nonmoral evaluations (Van Bavel et al., 2012), and individual differences in reliance on intuition predict harsher moral judgments (Ward & King, 2018). Emotions are thought to be one of the primary elicitors of moral intuition. Intensifying negative affect also intensifies moral judgment (Horberg et al., 2011; Inbar et al., 2012), and engaging in emotion reappraisal counteracts this influence (Feinberg et al., 2012). (For a different perspective on emotion in moral judgments, see Chapter 15 in this volume, specifically the "How Does Blame Relate to Emotion" subsection in Section 15.3.6.1.)

Finally, people's moral worlds are culturally constructed. The values and behaviors that people moralize meaningfully differ across societies (Shweder, 2012), and most modern theories in moral psychology acknowledge the primary role of culture and socialization in moral judgment (Jensen, 2015; Rai & Fiske, 2011). While Western, individualistic cultures are generally more likely to endorse individual rights and independence in their moral values, non-Western, collectivistic cultures tend to more strongly moralize duty-based communal obligations and spiritual purity (Buchtel et al., 2015; Guerra & Giner-Sorolla, 2010). Haidt's emphasis on cultural learning and social transmission, both within and between societies, remains a crucial component of modern moral psychology.

What Haidt Got Wrong. Still, other aspects of MFT do not hold up as well when examined against modern evidence. First, moral foundations do not adequately explain the cultural diversity of the moral world. The unique relationship between the moral foundations and political ideology is weaker (Klein et al., 2018) and less generalizable (Kivikangas et al., 2021) in more recent work. For example, liberals endorsed group-focused "binding" moral concerns just as much as conservatives when those concerns were framed as issues of social justice (Janoff-Bulman & Carnes, 2016). Additionally, MFT does not accurately reflect how people and cultures define the moral world for themselves. Despite claiming that "what people think are their moral concepts are, in fact, moral concepts" (Haidt & Joseph, 2007, p. 372), moral foundations researchers have failed to trust people's concepts of harm, assuming that behaviors that seem harmless to researchers must also seem harmless to everyone else. Condemning people who have sex with dead chicken carcasses (Haidt et al., 1993) or engage in consensual incest (Haidt et al., 2000) is only evidence of nonharm-based morality if people truly deem these acts harmless, but modern evidence suggests that this is not the case (Royzman et al., 2015).

Second, the linchpin mechanism of MFT – the modular moral mind – is inconsistent with modern evidence. The theory posits that moral intuitions are rooted in distinct and innate psychological modules evolved to solve social dilemmas (Haidt & Joseph, 2007). This means that moral foundations must be 1) domain-specific (Haidt & Joseph, 2007), 2) relatively stable over time (Haidt, 2012), and, to some degree, 3) heritable (Graham et al., 2009; Haidt & Joseph, 2011). But modern research counters all three of these stipulations. For example, judgments of harm and purity – often described as maximally distinct (Haidt, 2012) – are highly correlated with one another (Gray & Keeney, 2015) and therefore unlikely to be domain-specific. Moral foundations also fail to show stability over time and across contexts. Longitudinal research indicates low test–retest correlations for the MFQ (the Moral Foundations Questionnaire, which measures individual differences in the perceived importance of the various moral foundations) and shows that participants' initial moral foundation endorsement accounted for only 8–10 percent of the variance in their moral foundation endorsement 18 months later (Smith et al., 2017). People's proclivities for certain moral foundations are also susceptible to minimal external influences (Ciuk, 2018). And despite its centrality to MFT, just one study has tested whether moral foundations can be explained by genetic factors; this twin study failed to find consistent evidence for genetic influences on moral foundations (Smith et al., 2017). Moral foundations lack solid evidence for domain-specificity, temporal or contextual stability, and heritability, undermining the modularity of the moral mind.

Proponents of MFT argue that the evidence against distinct moral modules only disproves a "straw-man" version of their theory that necessitates strong modularity, but not the more flexible form of modularity for which MFT actually advocates (Graham et al., 2019). But if these findings don't qualify as disconfirming evidence, what findings would? This raises our final important criticism: MFT lacks empirical utility because it is unfalsifiable by design. For example, more recent explanations by MFT researchers argue that moral foundations are "perfectly consistent with domain-general as well as domain-specific processes" (Graham, 2015, p. 871). Apparently, one doesn't need to "embrace modularity, or any particular view of the brain [emphasis added], to embrace MFT" (Graham et al., 2013, p. 63). This conceptual haziness is convenient for MFT because it is seemingly agnostic to all accounts of how the brain works. But evolution plays a prominent part in MFT, meaning that our evolved brain biology must reflect the existence of moral mental modules for MFT to be tenable. Researchers have been hand-waving away the mismatch between MFT and neurobiological evidence with ambiguous and inconsistent descriptions of the moral mind. But this imprecision is a weakness of the theory, not a strength.

Another example of unfalsifiability in MFT is its justification for the five foundations. Haidt and his collaborators devised these foundations after examining a wide range of values and behaviors across cultures and identifying five
moral concerns that could be linked to evolutionary challenges (Haidt & Joseph, 2007). They note that these are the best candidates for moral psychological primitives, but that "there may be 74, or perhaps 122, or 27, or maybe only five, but certainly more than one" (Graham et al., 2013, p. 58). Indeed, later iterations of MFT have added a sixth foundation – liberty/oppression (Haidt, 2012). Not only is this unparsimonious as a theory of the mind – having upwards of 100 distinct moral mechanisms would be a highly inefficient use of neurobiological resources – but it also poses significant challenges to empirical verification.

Neither the current Haidtian paradigm nor the Turielian paradigm is without flaws, but both have provided crucial insights into the study of human morality. By synthesizing the best parts of what paradigms past and present have to offer, we can build a better model of the moral mind and world for the future.

6.3 Dyadic Morality: Bridging Paradigms Past and Present

The past Turielian paradigm and the present Haidtian paradigm are antithetical to each other. One is grounded in harm, reason, and universal domain distinctions, answering the functional question of "how in the mind?" The other is grounded in affect, intuition, and culturally constructed value systems, answering the descriptive question of "what in the world?" Science is supposed to be incremental, each theory seeing further than the last by standing on the shoulders of the giants that came before. But the progression of moral psychology has been decremental, subtracting the insights gained from prior theories to see equally far but in the opposite direction. Instead of creating new theories entirely antithetical to their predecessors, we should be synthesizing past insights to ground our theories equally in past, present, and future.

Moral psychologists have pitted the ideas of a harm-based moral mind and a diverse moral world against each other. But these characteristics of the moral domain are not mutually exclusive, and in fact often go hand in hand. The theory of dyadic morality (TDM) suggests that harm is the universal mechanism for moral judgment, but that what people perceive as harmful is culturally relative. Culture meaningfully impacts our understanding of the way the world works, including who can be harmed and how. For example, the Oriya Hindu Brahmans in India whom Shweder and colleagues (1997) interviewed thought that it was immoral for one's eldest son to eat chicken the week after his father died, but this was because they believed that this act would place the father's spiritual transmigration in deep jeopardy (Shweder, 2012). The singular mechanism of perceived harm bridges the gap between Turielian harm-based accounts and moral pluralism, showing how the moral domain is characterized by "universality without the uniformity" (Shweder, 2012).

6.3.1 The Dyadic Moral Domain: Universality without Uniformity

TDM's main insight is that harm, like morality, is perceived. Many philosophical traditions have viewed harm as absolute – objectively present or absent from a situation. But regardless of the ontological status of harm, perceptions of harm by social perceivers are necessarily subjective. Perceptions of harm are influenced by context, culture, socialization, and informational assumptions (Turiel et al., 1991). But unlike the harm of the Turielian paradigm, TDM's perceived harm explicitly encompasses a wide array of behaviors. People can perceive harm in prototypical moral violations – like theft and murder – and also nonprototypical violations – like abortion, flag burning, and homosexuality. Similarly, the "agents" that cause harm and the "victims" that suffer from it are construed broadly here, too, ranging from individuals, to animals, to cultural groups, to governments or society at large. What we perceive to be harmful – and what entities we perceive as capable of or vulnerable to harm – are influenced by our sociocultural upbringing and vary from person to person. But once we perceive something as harmful, the outcome is the same: We judge it as immoral, too.

A good theory should explain our moral judgments, not just describe them. The moral domain is explained in TDM by asking "what in the moral mind allows us to perceive harm in the moral world?" Harm perception relies on a set of fundamental cognitive mechanisms – perceptions of intention (Decety & Cacioppo, 2012; Hesse et al., 2016), causation (Le Guen et al., 2015; Moore et al., 2013), and suffering (Decety & Cowell, 2018; Han et al., 2020). For people to perceive moral harm, they must first perceive an intentional agent causing the suffering of a vulnerable victim. As we will see, these perceptions can vary widely from person to person and can genuinely help explain the diversity of the moral domain.

6.3.2 Context/History/Roots

Despite the rise of modular theories in psychology like MFT and basic emotions theory, mounting evidence in neurobiology supported a more emergent view of the mind – complex mental properties like emotion, language, and social skills likely emerge from much simpler biological systems (O'Connor, 1994). Psychological constructionism began to rival basic emotions theory in the mid-2000s by suggesting that emotions are not so distinct from one another (Barrett, 2006). Rather, all emotions are just a combination of two dimensions of experience: valence (positive–negative) and arousal (high–low). People then use situational and cultural context to construct and reinterpret these emotional experiences in distinct ways. Psychological constructionism popularized the idea that a wide variety of mental experiences can be constructed from the same core psychological experience. This idea is at the heart of TDM (Cameron et al., 2015), and helps explain how the core perception of harm in the moral mind can give rise to a plethora of behaviors in the moral world.

This constructionist approach to morality helps us make sense of the very moral disagreements that shaped the emergence and development of moral psychology. The civil rights debates that raged early in Turiel's research career can be explained by differing perceptions of harm. Those who saw racial and gender discrimination as morally abhorrent were focused on the suffering that US institutions were inflicting upon women and people of color, whereas those who opposed the civil rights movement were attuned to the potential harm that could befall traditional American society if the civil rights crusaders succeeded in upending "law and order." Harm perceptions can also make sense of the political polarization that motivated Haidt's work on MFT. Liberals are more attuned to harm befalling minorities and the environment, whereas conservatives are more focused on harm to religious institutions, unborn babies, and immortal souls. Like emotions, perceptions of harm are socially and culturally constructed. Understanding this is the first step toward understanding (and bridging) moral disagreements.

6.3.3 Deconstructing Dyadic Morality: Mind AND World

Mind. Perceived harm is the fundamental basis on which the moral mind makes and maintains moral judgments. Harm drives the moral condemnation of obviously harmful acts – like murder, assault, rape, and theft – and its perception can also drive the condemnation of nonprototypically harmful behaviors – like trigger warnings (Lukianoff & Haidt, 2015), vaccinations (Mnookin, 2012), dishonor (Nisbett & Cohen, 1996), and even reading Harry Potter (James, 2010). Harm perceptions are ubiquitous in moral judgment not because they are merely post-hoc justifications for our moral intuitions (Haidt, 2001, 2012), but because they are the foundation of moral intuitions themselves. The key driver of the moral mind, therefore, is the perceived presence or absence of harm.

According to TDM, the moral mind understands harm via a dyadic cognitive template of an intentional agent causing damage to a vulnerable patient/victim. The more obvious these elements are, the more robust the perception of harm, and the more severe the moral judgment. When some of these elements are unclear, moral judgment is weaker or more variable. This is why people condemn the mass shooting of children more uniformly than deaths from natural disasters (which lack an obvious agent) or attempted murder (which lacks an obvious patient). Punching a boxer in a boxing match – someone who has a high tolerance for pain and has consented to this kind of harm – is not immoral, but punching a vulnerable infant is. Perceptions of morally relevant harm also involve a causal link between the intentional agent and the suffering victim (Cushman, 2015). The clearer this causation, the more obvious the harm. Immorality exists on a continuum, and TDM suggests that this continuum is grounded in perceived harm. The more an act fits into the cognitive schema of "agent-causing-harm-to-patient," the more immoral it should seem. However, there are cases in which
one part of this template is so salient that we can fill in the gaps of other parts of the template and perceive moral harm. For example, the people of Fontenay-aux-Roses, France, tried and executed a pig for eating a baby in 1266 (Oldridge, 2017). This is obviously tragic, but the fact that the pig had a full trial demonstrates that the people of this village assigned it moral responsibility. Why? After all, a pig is hardly the malicious, intentional agent that the dyadic template seems to require. But the infant victim was so vulnerable, and the suffering of being eaten alive was so great, that people inferred the rest of the moral dyad by imbuing the pig with agency (and therefore culpability). This process is called dyadic completion (Gray et al., 2014), and it explains why people infer the existence of victims in victimless harms like masturbation, or impute malevolent and intentional agents to situations of neglect.

The moral mind applies this dyadic template of harm primarily through intuition. People do not need to reason deliberatively to determine whether a situation is harmful. Even when evaluating complex moral dilemmas that ostensibly require weighing potential harms and benefits, people still rely on intuitive aversion to harm (Cushman et al., 2012). This reliance on intuition means that affect can influence our moral judgments (Gray et al., 2022).

World. The moral world is made up of anything to which someone applies the moral dyad. Obviously dyadic harms like theft, abuse, and murder make up most of people's conceptions of the moral world (Graham et al., 2019). But disagreement emerges as perceptions of harm, intentionality, and suffering vary with cultural and social forces. Because perceived harm is constructed and subjective, individuals and groups of people vary in their moral judgments. Worldview is a cultural phenomenon, and our beliefs about the world influence what harm looks like to us, who can cause it, and who is vulnerable to it. For this reason, so-called harmless wrongs can still garner moral condemnation. People can perceive harm in ostensibly harmless acts, and the moral mind can apply the dyadic template to situations involving more (or less) than two minds. A pickpocket stealing someone's wallet invokes the perception of "agent-harming-patient," but so can a government disenfranchising its citizens, a pregnant woman aborting and killing her unborn child, or gay people leading others to hell by promoting their "sinful lifestyles." So long as we perceive one entity as causing suffering to another more vulnerable entity, an act is judged as morally wrong.

There are some interpersonal harms that seem not to be judged as immoral, such as inflicting emotional damage on someone by breaking up with them (Royzman & Borislow, 2022). However, this emotional damage is not typically intended: People tend not to break up with someone in order to harm them, but rather to gain their own freedom. If someone were to break up precisely in order to make their partner suffer, many would think it immoral. Moreover, people may see overarching benefits in these harms, such as personal growth. People may also fail to condemn harms because they deny the victimhood of the victimized. When people deny the wrongness of sexual assault, they do so by "blaming the victim," seeing victims as responsible (Niemi & Young, 2014).
Similarly, violent conflicts are often justified by either valorizing the soldiers who cause harm or devictimizing the people who suffer from it (Bruneau & Kteily, 2017). These and other “informational assumptions” (Turiel et al., 1991) about harm are deeply influenced by our cultures and social backgrounds, leading to diverging moral judgments across people and places.

6.3.4 Dyadic Morality in Light of Modern Evidence

The theory of dyadic morality improves our understanding of the moral mind and world by synthesizing the paradigms that came before it, keeping the insights supported by modern evidence and rejecting the elements that fail to pass empirical scrutiny.

What TDM Gets Right. First, modern evidence corroborates the idea that harm is the central concern that guides moral judgment. Like Turiel's harm-based accounts, TDM's emphasis on the mind that perceives harm – rather than on the objective harmfulness of an act – has proven to be a critical starting point for understanding moral judgment. Perceptions of harm distinguish immoral acts from nonmoral wrongs (Schein & Gray, 2015) and acts that are merely disgusting (Schein et al., 2016).

Second, intuition plays a crucial role in both harm perception and moral judgment. Children too young to use language already rely on harm in their social evaluations (Hamlin & Wynn, 2011). Studies using high-density event-related potentials (Decety & Cacioppo, 2012) and implanted brain electrodes (Hesse et al., 2016) show that adults can perceive harm in a matter of milliseconds. People make judgments about the harmfulness of both prototypical (e.g., theft) and nonprototypical (e.g., pornography) moral issues just as quickly as – if not quicker than – they make moral judgments (Schein & Gray, 2015), suggesting that harm perception is not only intuitive but also fundamental to moral judgment formation. Though reasoning can influence moral judgments (Paxton et al., 2012; Paxton & Greene, 2010), most researchers agree that it plays a more limited role than intuition (Greene & Haidt, 2002). Although affect is clearly important in moral judgment, research finds that it is important mostly when the presence of harm is unclear. In one study, participants read scenarios depicting bizarre purity violations (e.g., eating a dead dog) and more everyday moral violations (e.g., theft) and then rated how immoral, harmful, and disgusting each scenario was (Gray et al., 2022). Harm emerged as the primary driver of moral judgments across both the everyday and the purity scenarios. Affect drove judgments only when harmfulness was ambiguous, such as in bizarre purity violations, and even here, harm remained a key predictor.

Third, TDM's constructionist approach to harm perception and moral judgment is consistent with modern accounts of neurobiology. Converging evidence indicates that social decision making relies on the coordination of multiple neurocognitive systems responsible for domain-general processes like stimulus valuation, perspective taking, and mental state understanding, and that these
same regions support moral judgment (see Yoder & Decety, 2018, for a review). Many of the brain regions that are consistently recruited for both implicit and explicit moral evaluations of a variety of different moral transgressions are also responsible for perceptions of harm (Hesse et al., 2016), intentionality (Young et al., 2011), and the integration of the two (Krueger & Hoffman, 2016), suggesting that these perceptions are fundamental to moral judgment.

Finally, the mechanism of harm perception is better positioned to explain cultural differences in moral judgment than descriptive accounts of the moral domain. Cross-cultural research across disciplines highlights the universal importance of perceived harm. In a study of market managers in China, the UK, and Spain, perceived harm consistently predicted moral judgments of a variety of ethically questionable scenarios, including bribery, offensive advertising, and dumping toxic waste (Vitell & Patwardhan, 2008). Similarly, business students across China, Egypt, Korea, Finland, Russia, and the United States were less tolerant of unethical business practices the more they perceived those practices to be harmful (Ahmed et al., 2003). Our understanding of harm is culturally constructed, but the effect of its perceived presence on our moral judgments is universal.

Where TDM Needs to Go. Though TDM as a theory is consistent with modern evidence, more work needs to be done to explicitly connect it to cultural differences. While TDM purports to explain cross-cultural differences in moral judgment with the single universal mechanism of harm, limited research has examined this claim with international data. Most work has sought to explain differences between the "micro-cultures" of US liberals and conservatives, but future work should expand beyond within-nation political differences and explore how perceptions of harm can predict between-nation differences in moral judgment and decision making.

Additional research also needs to test the unique and combined contributions of affect and harm in moral judgment formation. Historically, TDM has placed less emphasis on the role of affect in harm perception and the formation of moral judgments, focusing instead on nonaffective cognitive mechanisms like agency perception. Recent work is exploring how affective and nonaffective cognitions work in tandem to influence moral judgment (Gray et al., 2022), but this is merely a starting point. The whole of TDM must be evaluated empirically before we can truly know its contribution to our understanding of the moral domain.

6.4 Conclusion

Any definition of the moral domain must answer two key questions: What behaviors in the world can be considered morally good or bad? And how does the mind judge whether these acts are morally good or bad?

Elliot Turiel's account of the moral domain became the first major paradigm in moral psychology and argued that the moral mind makes judgments by reasoning
about the presence of harm, and that the moral world is comprised of all the behaviors that we have reasoned to be harmful. Later, Haidt's moral foundations theory overtook Turiel's account and remains a prominent paradigm to this day. Haidt argues that the moral world is comprised of behaviors that violate at least one of five (or more) distinct classes of moral values, and that quick, affective intuitions in the moral mind are the basis of our moral judgments.

But "how in the mind" and "what in the world" are inextricably linked, and focusing on one of these questions impacts how we approach the other. The theory of dyadic morality acknowledges this and instead uses a "both-and" approach, synthesizing the insights of both Turiel and Haidt, making it a strong candidate for the next paradigm in moral psychology. We argue that the moral mind judges right from wrong based on intuitive perceptions of harm, and that the moral world is comprised of a diverse and culturally informed set of behaviors that people perceive to be harmful.

A good scientific theory brings us that much closer to knowing the truth about something in the world. This is achieved not by negating everything from prior theories, but by leveraging old insights and incorporating them with something new. The current Haidtian paradigm has revitalized moral psychology by emphasizing parts of the moral mind and world that were underexplored. But it also abandoned much of the wisdom gained from the prior Turielian paradigm. TDM bridges the gap between the perspectives of paradigms past and present by synthesizing the best of what each has to offer: Turiel's "how in the mind" and Haidt's "what in the world."

When assessing the value of any theory, it is important to acknowledge not only its empirical and theoretical validity, but also the social and historical forces that shaped its development. Understanding Turiel's and Haidt's theories for what they are – paradigms with their own empirical and conceptual limitations – is a necessary step for us to push the field forward. We must acknowledge the insights that these paradigms offer while bringing a critical eye to their flaws. Importantly, TDM is not exempt from these outside influences. Just as Turiel was influenced by the cognitive revolution, and MFT by evolutionary psychology, TDM draws inspiration from theories like psychological constructionism. In the wider culture, the growing political polarization of the United States was, and continues to be, a central problem guiding the development of TDM. As it stands now, we believe that these influences have improved our understanding of the moral domain, but we acknowledge that the influence of contemporary theories and current cultural questions means that TDM will likely someday become outdated. The same critical eye that we have brought to the Turielian and Haidtian paradigms should be brought to all paradigms, including our own. Like extraterrestrials at moral psychology conferences, we are all aliens to prior paradigms. We do not live in the same cultural and historical moments that guided their development, and so we cannot truly understand the perspectives they espouse. But our alienness gives us the unique advantage
of seeing these paradigms as just that – paradigms. They are limited by their social-historical contexts, acting as only crude approximations of the truth. It is far more difficult to see the limitations that plague the paradigms of today, but we should strive to look at our current theories with an alien eye. Only then can we begin to understand the nature of human morality.

References

Ahmed, M. M., Chung, K. Y., & Eichenseher, J. W. (2003). Business students' perception of ethics and moral judgment: A cross-cultural study. Journal of Business Ethics, 43, 89–102.
Bargh, J. A. (1994). The four horsemen of automaticity: Intention, awareness, efficiency, and control as separate issues. In R. Wyer & T. Srull (Eds.), Handbook of social cognition (pp. 1–40). Lawrence Erlbaum.
Barrett, L. F. (2006). Are emotions natural kinds? Perspectives on Psychological Science, 1(1), 28–58.
Bartels, D. M., & Medin, D. L. (2007). Are morally motivated decision makers insensitive to the consequences of their choices? Psychological Science, 18(1), 24–28.
Bruneau, E., & Kteily, N. (2017). The enemy as animal: Symmetric dehumanization during asymmetric warfare. PLoS ONE, 12(7), Article e0181422.
Buchtel, E. E., Guan, Y., Peng, Q., Su, Y., Sang, B., Chen, S. X., & Bond, M. H. (2015). Immorality East and West: Are immoral behaviors especially harmful, or especially uncivilized? Personality and Social Psychology Bulletin, 41(10), 1382–1394.
Cameron, C. D., Lindquist, K. A., & Gray, K. (2015). A constructionist review of morality and emotions: No evidence for specific links between moral content and discrete emotions. Personality and Social Psychology Review, 19(4), 371–394.
Ciuk, D. J. (2018). Assessing the contextual stability of moral foundations: Evidence from a survey experiment. Research & Politics, 5(2). https://doi.org/10.1177/205316801878174
Cushman, F. (2015). Deconstructing intent to reconstruct morality. Current Opinion in Psychology, 6, 97–103.
Cushman, F., Gray, K., Gaffey, A., & Mendes, W. B. (2012). Simulating murder: The aversion to harmful action. Emotion, 12(1), 2–7.
Cushman, F., Young, L., & Hauser, M. (2006). The role of conscious reasoning and intuition in moral judgment: Testing three principles of harm. Psychological Science, 17(12), 1082–1089.
D'Andrade, R. G. (1995). The development of cognitive anthropology. Cambridge University Press.
Davidson, P., Turiel, E., & Black, A. (1983). The effect of stimulus familiarity on the use of criteria and justifications in children's social reasoning. British Journal of Developmental Psychology, 1(1), 49–65.
Decety, J., & Cacioppo, S. (2012). The speed of morality: A high-density electrical neuroimaging study. Journal of Neurophysiology, 108(11), 3068–3072.
Decety, J., & Cowell, J. M. (2018). Interpersonal harm aversion as a necessary foundation for morality: A developmental neuroscience perspective. Development and Psychopathology, 30(1), 153–164.
Ehrich, K. R., & Irwin, J. R. (2005). Willful ignorance in the request for product attribute information. Journal of Marketing Research, 42(3), 266–277.
Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of cognition and emotion (pp. 45–60). John Wiley & Sons Ltd.
Everett, J. A. C., Clark, C. J., Meindl, P., Luguri, J. B., Earp, B. D., Graham, J., Ditto, P. H., & Shariff, A. F. (2020). Political differences in free will belief are associated with differences in moralization. Journal of Personality and Social Psychology, 120(2), 461–483.
Feinberg, M., Willer, R., Antonenko, O., & John, O. P. (2012). Liberating reason from the passions: Overriding intuitionist moral judgments through emotion reappraisal. Psychological Science, 23(7), 788–795.
Forgas, J. (1995). Mood and judgment: The Affect Infusion Model (AIM). Psychological Bulletin, 117, 39–66.
Gelfand, M. J., Harrington, J. R., & Jackson, J. C. (2017). The strength of social norms across human groups. Perspectives on Psychological Science, 12(5), 800–809.
Gewirth, A. (1978). The basis and content of human rights. Georgia Law Review, 13, 1143–1170.
Gilligan, C. (1993). In a different voice: Psychological theory and women's development. Harvard University Press.
Gilligan, C., & Attanucci, J. (1988). Two moral orientations: Gender differences and similarities. Merrill-Palmer Quarterly, 34(3), 223–237.
Goodwin, G. P., & Darley, J. M. (2012). Why are some moral beliefs perceived to be more objective than others? Journal of Experimental Social Psychology, 48(1), 250–256.
Graham, J. (2015). Explaining away differences in moral judgment: Comment on Gray and Keeney (2015). Social Psychological and Personality Science, 6(8), 869–873.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology (Vol. 47, pp. 55–130). Academic Press.
Graham, J., Haidt, J., Motyl, M., Meindl, P., Iskiwitch, C., & Mooijman, M. (2019). Moral foundations theory: On the advantages of moral pluralism over moral monism. In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 211–222). Guilford Press.
Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029–1046.
Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the moral domain. Journal of Personality and Social Psychology, 101(2), 366–385.
Gray, K., & Keeney, J. E. (2015). Impure or just weird? Scenario sampling bias raises questions about the foundation of morality. Social Psychological and Personality Science, 6(8), 859–868.
Gray, K., MacCormack, J. K., Henry, T., Banks, E., Schein, C., Armstrong-Carter, E., Abrams, S., & Muscatell, K. A. (2022). The affective harm account (AHA) of moral judgment: Reconciling cognition and affect, dyadic morality and disgust, harm and purity. Journal of Personality and Social Psychology, 123(6), 1199–1222.
Gray, K., Schein, C., & Ward, A. F. (2014). The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology: General, 143(4), 1600–1615.
Greene, J. D., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6(12), 517–523.
Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102(1), 4–27.
Guerra, V. M., & Giner-Sorolla, R. (2010). The Community, Autonomy, and Divinity Scale (CADS): A new tool for the cross-cultural study of morality. Journal of Cross-Cultural Psychology, 41(1), 35–50.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage.
Haidt, J., Bjorklund, D. F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason [Unpublished].
Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20(1), 98–116.
Haidt, J., & Hersh, M. A. (2001). Sexual morality: The cultures and emotions of Conservatives and Liberals. Journal of Applied Social Psychology, 31(1), 191–221.
Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133(4), 55–66.
Haidt, J., & Joseph, C. (2007). The moral mind: How five sets of innate intuitions guide the development of many culture-specific virtues, and perhaps even modules. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind (Vol. 3, pp. 367–391). Oxford University Press.
Haidt, J., & Joseph, C. (2011). How moral foundations theory succeeded in building on sand: A response to Suhler and Churchland. Journal of Cognitive Neuroscience, 23(9), 2117–2122.
Haidt, J., & Kesebir, S. (2010). Morality. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed., pp. 797–832). John Wiley & Sons.
Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Attitudes and Social Cognition, 65(4), 613–628.
Hamlin, J. K., & Wynn, K. (2011). Young infants prefer prosocial to antisocial others. Cognitive Development, 26(1), 30–39.
Han, X., Zhou, S., Fahoum, N., Wu, T., Gao, T., Shamay-Tsoory, S., Gelfand, M. J., Wu, X., & Han, S. (2021). Cognitive and neural bases of collateral damage during intergroup conflict. Nature Human Behaviour, 5(9), 1214–1225.
Hannikainen, I. R., Miller, R. M., & Cushman, F. A. (2017). Act versus impact: Conservatives and liberals exhibit different structural emphases in moral judgment. Ratio, 30(4), 462–493.
Hesse, E., Mikulan, E., Decety, J., Sigman, M., Garcia, M. del C., Silva, W., Ciraolo, C., Vaucheret, E., Baglivo, F., Huepe, D., Lopez, V., Manes, F., Bekinschtein, T. A., & Ibanez, A. (2016). Early detection of intentional harm in the human amygdala. Brain, 139(1), 54–61.
Hester, N., & Gray, K. (2020). The moral psychology of raceless, genderless strangers. Perspectives on Psychological Science, 15(2), 216–230.
Hofmann, W., Wisneski, D. C., Brandt, M. J., & Skitka, L. J. (2014). Morality in everyday life. Science, 345(6202), 1340–1343.
Horberg, E. J., Oveis, C., & Keltner, D. (2011). Emotions as moral amplifiers: An appraisal tendency approach to the influences of distinct emotions upon moral judgment. Emotion Review, 3(3), 237–244.
Hutcherson, C. A., & Gross, J. J. (2011). The moral emotions: A social-functionalist account of anger, disgust, and contempt. Journal of Personality and Social Psychology, 100(4), 719–737.
Inbar, Y., Pizarro, D. A., & Bloom, P. (2012). Disgusting smells cause decreased liking of gay men. Emotion, 12(1), 23–27.
Inglehart, R. F., Basanez, M., & Moreno, A. (1998). Human values and beliefs: A cross-cultural sourcebook. University of Michigan Press.
Izard, C. E. (1992). Basic emotions, relations among emotions, and emotion-cognition relations. Psychological Review, 99(3), 561–565.
Jambon, M., & Smetana, J. G. (2018). Individual differences in prototypical moral and conventional judgments and children's proactive and reactive aggression. Child Development, 89(4), 1343–1359.
James, K. (2010). Is the "Harry Potter..." series truly harmless? ChristianAnswers.Net. https://christiananswers.net/q-eden/harrypotter.html
Janoff-Bulman, R., & Carnes, N. C. (2016). Social justice and social order: Binding moralities across the political spectrum. PLoS ONE, 11(3), Article e0152479.
Jensen, L. A. (2015). Moral development in a global world: Research from a cultural-developmental perspective. Cambridge University Press.
Kant, I. (1998). Groundwork of the metaphysics of morals (M. Gregor, Ed. and Trans.). Cambridge University Press. (Original work published 1785)
Kivikangas, J. M., Fernández-Castilla, B., Järvelä, S., Ravaja, N., & Lönnqvist, J.-E. (2021). Moral foundations and political orientation: Systematic review and meta-analysis. Psychological Bulletin, 147(1), 55–94.
Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., Aveyard, M., Axt, J. R., Babalola, M. T., Bahník, Š., Batra, R., Berkics, M., Bernstein, M. J., Berry, D. R., Bialobrzeska, O., Binan, E. D., Bocian, K., Brandt, M. J., Busching, R., . . . Nosek, B. A. (2018). Many Labs 2: Investigating variation in replicability across samples and settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490.
Kohlberg, L., & Hersh, R. H. (1977). Moral development: A review of the theory. Theory into Practice, 16(2), 53–59.
Kohlberg, L., Levine, C., & Hewer, A. (1983). Moral stages: A current formulation and a response to critics. Contributions to Human Development, 10, 174.
Koleva, S. P., Graham, J., Iyer, R., Ditto, P. H., & Haidt, J. (2012). Tracing the threads: How five moral concerns (especially Purity) help explain culture war attitudes. Journal of Research in Personality, 46(2), 184–194.
Krueger, F., & Hoffman, M. (2016). The emerging neuroscience of third-party punishment. Trends in Neurosciences, 39(8), 499–501.
Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). University of Chicago Press.


Landy, J. F., & Goodwin, G. P. (2015). Does incidental disgust amplify moral judgment? A meta-analytic review of experimental evidence. Perspectives on Psychological Science, 10(4), 518–536. Le Guen, O., Samland, J., Friedrich, T., Hanus, D., & Brown, P. (2015). Making sense of (exceptional) causal relations. A cross-cultural and cross-linguistic study. Frontiers in Psychology, 6, Article 1645. Lukianoff, G., & Haidt, J. (2015, August 11). How trigger warnings are hurting mental health on campus. The Atlantic. www.theatlantic.com/magazine/archive/2015/ 09/the-coddling-of-the-american-mind/399356/ Malle, B. F., Guglielmo, S., & Monroe, A. E. (2014). A theory of blame. Psychological Inquiry, 25, 1–40. Marcus, G. F. (2004). The birth of the mind: How a tiny number of genes creates the complexities of human thought. Basic Books. Mikhail, J. (2009). Moral grammar and intuitive jurisprudence: A formal model of unconscious moral and legal knowledge. In B. H. Ross (Ed.), Psychology of learning and motivation (Vol. 50, pp. 27–100). Academic Press. Miller, J. G., & Bersoff, D. M. (1988). When do American children and adults reason in social conventional terms? Developmental Psychology, 24(3), 366–375. Mnookin, S. (2012). The panic virus: The true story behind the vaccine-autism controversy. Simon & Schuster. Moore, J. W., Teufel, C., Subramaniam, N., Davis, G., & Fletcher, P. C. (2013). Attribution of intentional causation influences the perception of observed movements: Behavioral evidence and neural correlates. Frontiers in Psychology, 4, Article 23. Mulvey, K. L. (2016). Evaluations of moral and conventional intergroup transgressions. British Journal of Developmental Psychology, 34(4), 489–501. Nichols, S. (2021). Rational rules: Towards a theory of moral learning. Oxford University Press. Niemi, L., & Young, L. (2014). Blaming the victim in the case of rape. Psychological Inquiry, 25(2), 230–233. Nisbett, R. E., & Cohen, D. (1996). Culture of honor: The psychology of violence in the South. Westview. O’Connor, T. (1994). Emergent properties. American Philosophical Quarterly, 31(2), 91–104. Oldridge, D. (2017). Strange histories: The trial of the pig, the walking dead, and other matters of fact from the Medieval and Renaissance worlds (2nd ed.). Routledge. Paxton, J. M., & Greene, J. D. (2010). Moral reasoning: Hints and allegations. Topics in Cognitive Science, 2(3), 511–527. Paxton, J. M., Ungar, L., & Greene, J. D. (2012). Reflection and reasoning in moral judgment. Cognitive Science, 36(1), 163–177. Petty, R., & Cacioppo, J. (1986). The elaboration likelihood model of persuasion. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 19, pp. 1–24). Springer. Piaget, J. (2013). The moral judgment of the child. Routledge. Rai, T., & Fiske, A. (2011). Moral psychology is relationship regulation: Moral motives for unity, hierarchy, equality, and proportionality. Psychological Review, 118, 57–75. Rawls, J. (1999). A theory of justice (Rev. ed). Harvard University Press.


Royzman, E. B., & Borislow, S. H. (2022). The puzzle of wrongless harms: Some potential concerns for dyadic morality and related accounts. Cognition, 220, 1–12. Royzman, E. B., Kim, K., & Leeman, R. F. (2015). The curious tale of Julie and Mark: Unraveling the moral dumbfounding effect. Judgment and Decision Making, 10(4), 296–313. Schein, C., & Gray, K. (2015). The unifying moral dyad: Liberals and conservatives share the same harm-based moral template. Personality and Social Psychology Bulletin, 41(8), 1147–1163. Schein, C., & Gray, K. (2018). The theory of dyadic morality: Reinventing moral judgment by redefining harm. Personality and Social Psychology Review, 22(1), 32–70. Schein, C., Ritter, R. S., & Gray, K. (2016). Harm mediates the disgust-immorality link. Emotion, 16(6), 862–876. Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34(8), 1096–1109. Schwitzgebel, E., & Cushman, F. (2012). Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non-philosophers. Mind & Language, 27(2), 135–153. Shweder, R. A. (2012). Relativism and universalism. In D. Fassin (Ed.), A companion to moral anthropology (pp. 85–102). Wiley-Blackwell. Shweder, R. A., Mahapatra, M., & Miller, J. G. (1987). The emergence of morality in young children. University of Chicago Press. Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The “big three” of morality (autonomy, community, divinity) and the “big three” explanations of suffering. In A. M. Brandt & P. Rozin (Eds.), Morality and health (pp. 119–169). Routledge. Sinnott-Armstrong, W. (2018). Asking the right questions in moral psychology. In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 565–571). Guilford Press. Skitka, L. J., Washburn, A. N., & Carsel, T. S. (2015). The psychological foundations and consequences of moral conviction. Current Opinion in Psychology, 6, 41–44. Smetana, J. G. (1981). Reasoning in the personal and moral domains: Adolescent and young adult women’s decision-making regarding abortion. Journal of Applied Developmental Psychology, 2(3), 211–226. Smith, K. B., Alford, J. R., Hibbing, J. R., Martin, N. G., & Hatemi, P. K. (2017). Intuitive ethics and political orientations: Testing moral foundations as a theory of political ideology. American Journal of Political Science, 61(2), 424–437. Stich, S. (2018). The moral domain. In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 547–555). Guilford Press. Turiel, E., Edwards, C. P., & Kohlberg, L. (1978). Moral development in Turkish children, adolescents, and young adults. Journal of Cross-Cultural Psychology, 9(1), 75–86. Turiel, E., Hildebrandt, C., Wainryb, C., & Saltzstein, H. D. (1991). Judging social issues: Difficulties, inconsistencies, and consistencies. Monographs of the Society for Research in Child Development, 56(2), i–116.


Turiel, E., Killen, M., & Helwig, C. C. (1987). Morality: Its structure, functions, and vagaries. In J. Kagan & S. Lamb (Eds.), The emergence of morality in young children (pp. 155–243). University of Chicago Press. Van Bavel, J. J., Packer, D. J., Haas, I. J., & Cunningham, W. A. (2012). The importance of moral construal: Moral versus non-moral construal elicits faster, more extreme, universal evaluations of the same actions. PLoS ONE, 7(11), Article e48693. Van de Vondervoort, J. W., & Hamlin, J. K. (2018). The early emergence of sociomoral evaluation: Infants prefer prosocial others. Current Opinion in Psychology, 20, 77–81. Vitell, S. J., & Patwardhan, A. (2008). The role of moral intensity and moral philosophy in ethical decision making: A cross-cultural comparison of China and the European Union. Business Ethics: A European Review, 17(2), 196–209. Ward, S. J., & King, L. A. (2018). Individual differences in reliance on intuition predict harsher moral judgments. Journal of Personality and Social Psychology, 114(5), 825–849. Weston, D. R., & Turiel, E. (1980). Act-rule relations: Children’s concepts of social rules. Developmental Psychology, 16(5), 417–424. Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10), 780–784. White, S. F., Zhao, H., Leong, K. K., Smetana, J. G., Nucci, L. P., & Blair, R. J. R. (2017). Neural correlates of conventional and harm/welfare-based moral decision making. Cognitive, Affective, & Behavioral Neuroscience, 17(6), 1114–1128. Yoder, K. J., & Decety, J. (2018). The neuroscience of morality and social decisionmaking. Psychology, Crime & Law, 24(3), 279–295. Young, L., & Saxe, R. (2011). When ignorance is no excuse: Different roles for intent across moral domains. Cognition, 120(2), 202–214. Young, L., Scholz, J., & Saxe, R. (2011). Neural evidence for “intuitive prosecution”: The use of mental state information for negative moral verdicts. Social Neuroscience, 6(3), 302–315. Yucel, M., Hepach, R., & Vaish, A. (2020). Young children and adults show differential arousal to moral and conventional transgressions. Frontiers in Psychology, 11, Article 548.

PART II

Thinking and Feeling

7 Moral Decision Making: The Value of Actions
Laura Niemi and Shaun Nichols

Much work in moral psychology examines moral judgment. For example, we ask people whether they think that a certain action is permissible. Or we ask how likely they think it is that a certain action is the right thing to do. But we can also ask about moral decisions – how do people decide what to do in moral contexts? Decisions characteristically depend on judgments, but decisions go beyond judgment to initiate action. I might judge that the right thing to do is to give to Save the Children, but it's a further question whether I will decide to write a check.

The topic of moral decision making is vast in scope. In this chapter, we limit our theoretical treatment by focusing largely on expected utility theory, the mainstay model of decision theory generally. To capture a broader swathe of moral decision making, we present an augmentation of the standard outcome-based expected utility hypothesis: an action-based account. We describe how estimates of utility based on the value of actions explain moral decision making alongside outcome-based estimates. We also discuss recent advances in the science of moral decision making that support this augmented expected utility model.

7.1 The Foundations for a Theory of Moral Decision Making: Expected Utility Theory

The most prominent theory of decision making is expected utility theory (EUT), and we will investigate moral decision making from this starting point. Notoriously, much of human decision making does not conform to expected utility theory. There is an entire tradition of work on this, but one representative example is that people overweight small risks – people effectively treat a 1 percent chance of some event as having a significantly higher probability (e.g., 4 percent). Insurance companies capitalize on this human vulnerability (Kahneman, 2011 provides an accessible review). Nonetheless, expected utility theory provides a useful starting point from which to articulate key aspects of moral decision making.

The theory of expected utility relies on a specific collection of components: a set of options available to the decision-maker, the decision-maker's expectations regarding the likelihood of a particular choice resulting in a specific outcome, the subjective importance or utilities assigned by the decision-maker to each potential outcome, and simple computations involving these utilities and expectations.


The most straightforward way to explain this framework is to consider different bets that one might take involving money. I assign higher utility (meaning I consider it more valuable) to an outcome in which I receive $100 than to an outcome in which I receive $90. This preference arises because I value money, and for things I value, I generally prefer more to less. Consequently, when faced with a decision between (A) receiving $100 and (B) receiving $90, the appropriate choice is A. However, if the decision is between (C) a 10 percent chance of receiving $100 and (D) a 90 percent chance of receiving $90, then the better option is D. So far, EUT serves as a normative theory – it specifies the decisions one ought to make given one's expectations and utilities. But it also forms the basis for a descriptive theory, and indeed, in these examples, it is highly likely that individuals would opt for A over B and D over C.
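To make the arithmetic concrete, the comparison can be sketched in a few lines of Python (an illustrative sketch only; it assumes utility is simply linear in dollars, and the option labels are ours):

```python
# Illustrative sketch: expected utility of the four bets, assuming
# utility is linear in dollars (u($x) = x).

def expected_utility(prospects):
    """prospects: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in prospects)

options = {
    "A: $100 for sure": [(1.0, 100)],
    "B: $90 for sure": [(1.0, 90)],
    "C: 10% chance of $100": [(0.10, 100), (0.90, 0)],
    "D: 90% chance of $90": [(0.90, 90), (0.10, 0)],
}

for label, prospects in options.items():
    print(label, "->", expected_utility(prospects))
# A (100) beats B (90); D (81.0) beats C (10.0).
```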

7.1.1 Outcome-Based EUT

Certainly, not all choices revolve around finances, and this is part of why decision theory expresses the personal worth attributed to outcomes using the concept of "utility" rather than monetary measures. To take a familiar example from decision theory that incorporates probabilities, consider being faced with a decision about whether to take an umbrella with you to work. You place a very low utility on an outcome in which you don't have your umbrella and it rains; in that case, you get soaked. This brings you no happiness at all, so the utility of this outcome for you is 0. If you do have your umbrella and it does not rain (you lugged your umbrella around for nothing), this is also a low-value outcome. Still, it's better than not having your umbrella when it does rain – let's say your utility for this outcome is 15. The outcome in which you don't take your umbrella and it doesn't rain would certainly be better than this. Let's say that this has the utility 70 for you. Finally, you'll be very happy with your decision to take your umbrella when it in fact rains; let's say your utility for this outcome is 90. Now, to decide whether or not to take the umbrella, you consult the weather forecast, which leads you to think the chance of rain is 75 percent. This scenario can be represented with a decision tree (see Figure 7.1). Given these probabilities and values, EUT dictates that you should take the umbrella.1 However, if the chance of rain is only 5 percent, then EUT says that you should not take the umbrella.2 We can dub this approach to decision theory outcome-based EUT. Again, while we have presented this illustration in the framework of a normative theory of decision making (in the initial scenario, taking the umbrella is the recommended course of action), an outcome-based EUT may also describe how individuals actually arrive at decisions, given their utilities and expectations.

Figure 7.1 A decision tree reflecting outcome-based EUT; u(o) = utility of this outcome; for example, the utility of the outcome in which you have your umbrella and it rains is 90.

1 We calculate this by taking each option and, given that option, multiplying the probability of each outcome by the utility of that outcome. So, for the option in which you take the umbrella, there is a 75 percent chance that you have the umbrella and it rains; the utility for that outcome is 90, and so the product of that utility with the probability is 67.5. There is a 25 percent chance that you take the umbrella and it doesn't rain; the utility for that outcome is 15, and so the product of that with the probability is 3.75. We add these together to yield the expected utility of that option, which is 71.25. By the same process, we can determine that the expected utility of not taking the umbrella is 17.5. So, the expected utility of taking the umbrella is higher than the expected utility of not taking it.

2 In this case, the expected utility of taking the umbrella is 18.75, and the expected utility of not taking it is 66.5.
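The calculations in notes 1 and 2 can be reproduced with a minimal sketch (illustrative code, not part of the original analysis; the utilities 0, 15, 70, and 90 are those given in the text):

```python
# Illustrative sketch of the umbrella decision tree (Figure 7.1).
# Utilities from the text: soaked = 0, lugged umbrella for nothing = 15,
# no umbrella and no rain = 70, umbrella and rain = 90.

UTILITIES = {
    "take":  {"rain": 90, "no_rain": 15},
    "leave": {"rain": 0,  "no_rain": 70},
}

def expected_utility(option, p_rain):
    """Probability-weighted sum over the two possible outcomes."""
    return p_rain * UTILITIES[option]["rain"] + (1 - p_rain) * UTILITIES[option]["no_rain"]

for p_rain in (0.75, 0.05):
    print(p_rain, expected_utility("take", p_rain), expected_utility("leave", p_rain))
# p(rain) = 0.75: take = 71.25 > leave = 17.5  (note 1)
# p(rain) = 0.05: take = 18.75 < leave = 66.5  (note 2)
```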

7.1.2 Moral Decisions and Outcome-Based EUT

Already with this outcome-based EUT model of decision making we can accommodate morally important decisions of a utilitarian bent.3 Indeed, as Baron writes: "Utilitarianism is the natural extension of expected-utility to decisions for many people. The utilitarian normative model here is to base the decision on the expected number of deaths" (Chapter 8, this volume). Consider the following case, which we call "Triage." An ambulance driver knows that he is the only person who can address life-threatening injuries from an avalanche. There are two clusters of people in need, and he only has time to attend to one cluster. One cluster consists of two people on the north side of the mountain; the other cluster consists of five people on the east side of the mountain. Given the information available, he has the expectation that if he goes to the north side he

3 We are focusing on act-utilitarian accounts, according to which (roughly) we ought morally to do that which produces the highest utility for everyone involved. Rule utilitarians (e.g., Hooker, 2000) maintain that we ought morally to follow the rules that will produce the highest utility for everyone involved. The critical difference emerges when a particular act that follows the "right" rule will not produce the highest utility. Broadly speaking, in such a case, rule utilitarianism counsels following the rule, and act utilitarianism counsels flouting the rule. For the purposes of this chapter, act utilitarianism provides a simpler and more familiar model, hence our focus on it. (Thanks to Philip Robbins for prompting us to clarify this distinction.)


can save the two people there and if he goes to the east side he can save the five people there. Since the ambulance driver values human life, and more is better than less, he should (normative EUT) and will (descriptive EUT) decide to go to the east side. The situation changes if he learns that the injuries for the people at the north are such that for each of these two people, he has a 95 percent chance of saving that person, and that the injuries for the people at the east are such that for each one of these five people, he has a 1 percent chance of saving them. In that case, he should and will factor these probabilities into his calculation and decide to go instead to the north side.

Outcome-based EUT accommodates variations in moral decision making. For instance, one person might assign a higher utility to saving cats than to saving dogs, and another person might have the reverse utility assignments; in such a case, if the first person were confronted with a dilemma that required sacrificing one dog or one cat, she should prefer to save the cat (we should expect the second person to prefer saving the dog). Outcome-based EUT can also accommodate subtler phenomena, such as when people represent actions as associated with entirely different outcomes. For example, the decision of whether to tell a joke or not might be associated with expectations about the outcome of hurting the target of the joke (a moral consideration), or expectations about the outcome of making the target of the joke laugh (a nonmoral consideration). As such, two people considering the same action, the joke, might weigh expectations about different outcomes with very different intrinsic moral stakes (Moore et al., 2011). As work on indirect speech indicates, whether people are aware of it or not, such moral and nonmoral characterizations of the same event are common, for example, in the form of euphemistic language (Bandura et al., 1996; Chakroff et al., 2015; Orwell, 1946). An outcome can also be represented as the result of different causes or different motivations for approach or avoidance (Janoff-Bulman et al., 2009), construals that potentially have different implications for moral judgment and decision making. For example, an accident holding up traffic might be construed as the outcome of two cars colliding or one person's careless texting.
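Returning to the Triage case, the expected-value comparison can be sketched as follows (an illustrative calculation that assumes each life is assigned the same utility):

```python
# Illustrative calculation of expected lives saved in the Triage case,
# assuming equal utility for each life saved.

def expected_lives_saved(cluster_size, p_save_each):
    return cluster_size * p_save_each

# If everyone he reaches can be saved, the east cluster wins (5 > 2).
print(expected_lives_saved(2, 1.0), expected_lives_saved(5, 1.0))    # 2.0 5.0

# With the revised injury information, the north cluster wins (1.9 > 0.05).
print(expected_lives_saved(2, 0.95), expected_lives_saved(5, 0.01))  # 1.9 0.05
```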

7.1.3 Moral Decisions and Issues for Outcome-Based EUT

Examples like Triage are actually abundant in our everyday lives when we make decisions about how to produce the most good. However, and famously, there is a wide range of cases in which people's decisions (or, at any rate, their reports of what they would decide) diverge from the dictates of outcome-based EUT. That is, the decision recommended by outcome-based EUT is not the decision that people make. The closest match to a case like Triage is the "Footbridge" trolley case in which the agent has to decide whether to throw a man in front of a train to prevent the deaths of five other people. Given a focus on outcomes and the utility placed on human life, the utilitarian outcome-based EUT model suggests that one will and should throw the man in front of the train. But this is not what most people say they would do (only around


30 percent, e.g., Greene et al., 2009; Petrinovich & O'Neill, 1996). Most people say that they would not throw the man in front of the train. Cases like the Footbridge example are exceptions to the happy harmony we see between normative and descriptive versions of outcome-based EUT for cases like Triage. Other examples in which people's (reported) decisions diverge from outcome-based EUT include protected values (e.g., Baron & Spranca, 1997), the act/omission distinction (Cushman & Young, 2011; Henne et al., 2019; Spranca et al., 1991; Willemsen & Reuter, 2016), and status quo bias (Ritov & Baron, 1992). In these and many other cases, people's decisions diverge from what outcome-based EUT normatively prescribes. One option is to maintain that outcome-based EUT is the correct normative theory and that, where people diverge from outcome-based EUT, they are simply being irrational (cf. Baron, 1994; Greene, 2008; Singer, 2005). On this approach, one might give up on EUT as descriptively useful for the exceptional cases. An alternative option, which we will explore in this section, is to elaborate and augment EUT in ways that will capture some of the apparently exceptional cases. (In Section 7.2, we will explore challenges to rational moral decision making.)

7.1.4 Moral Decisions and Action-Based EUT

Although there is an elegant simplicity to outcome-based EUT, it seems ill-equipped to explain many everyday moral decisions. In particular, many nonutilitarian decisions seem difficult to accommodate within outcome-based EUT. Perhaps EUT need not be so elegantly simple. As we've hinted in Section 7.1.3, in addition to assessing actions according to the utility of their outcomes, we might also assign utility to actions themselves. In other words, agents sometimes assign utility to outcomes (i.e., states of affairs), but agents also sometimes assign utility4 to an action-type. Such an augmentation of EUT is at least somewhat plausible. Indeed, Cushman (2013) proposed a dual-system account of morality linking valuation of actions and outcomes, model-free and model-based learning, and automatic and controlled processing. Here, we detail an action-based augmented EUT.

Consider the following stylized experiment (cf. Batson et al., 1997; Fischbacher & Föllmi-Heusi, 2013). Participants are told to flip a coin in private and then report the result to the experimenter. If they report that the coin came up heads, they get $2; if they report that it came up tails, they get $1. In these kinds of experiments, reports that the coin came up heads are more frequent than would be expected by chance. However, many participants do report that the coin came up tails, receiving $1 rather than $2. What is going on with them? Does this mean that they don't care about money? Or that they have a bizarre preference for less over more? Drawing such conclusions would

4 This use of "utility" refers to the subjective value assigned directly to an outcome or action (sometimes called the "instantaneous value function," represented by the lower-case "u" in Figure 7.1), as opposed to the "utility function" which calculates expected utility from expectations and utilities across the possible effects of a choice.


be extreme, and it would fail to make broader sense of the agent's overall decisions. Instead of attributing incoherence to the participants, we might conclude that these subjects assign low (or negative) utility to a type of action – lying (see, e.g., Gaus & Thrasher, 2022, pp. 43, 78; Nichols, 2021, pp. 224–225). And the reason they assign low utility to those types of actions is plausibly because they accept a moral rule against lying. We will refer to this augmented EUT framework as action-based EUT. We can frame this as a normative theory of decision making according to which, when deciding which action to take, agents should calculate the utilities assigned to both the outcomes of the actions and the types of actions they are.

Further reason for thinking that people assign low utility to actions that constitute rule violations comes from recent research in economics on "naïve rule following." In one experiment, participants undertook a computer-based task involving moving balls into either of two buckets. They were informed that placing a ball in the yellow bucket would earn them $0.10, while placing it in the blue bucket would yield $0.05. The earnings were displayed on the screen following each ball placement. The study's rule specification was straightforward: After learning about the earnings, participants were instructed, without further elaboration: "The rule is to place the balls into the blue bucket" (Kimbrough & Vostroknutov, 2018, p. 148). Thus, the rule runs against participants' financial interests, and no rationale for the rule is provided. Despite this, in five distinct countries (the USA, Canada, the Netherlands, Italy, and Turkey), over 70 percent of participants demonstrated instances of naïve rule adherence. That is, they put the balls in the blue bucket, despite the fact that this entailed monetary losses. This again suggests that assignments of utility are not limited to outcomes, since many people apparently assign low utility to actions that constitute rule breaking (for a different kind of evidence that seems to support action-based EUT, see Białek et al., 2014; Gawronski et al., 2017).

Although we think the foregoing analysis provides reason to adopt action-based EUT, one might challenge this interpretation.5 Perhaps people only conform to rules because they recognize the potential costs of breaking rules. For instance, in the bucket study we have described, maybe participants follow the arbitrary rule because they think there is really a hidden cost to breaking the rule; for example, perhaps participants fear that the experimenter will punish rule breakers in some way. Although this is a possible interpretation, and maybe some participants really do have those thoughts, we think that, in many cases, the influence of internalized rules is more direct. Consider rules of etiquette. I put the fork to the left of the plate because I learned a rule to that effect. I don't know how or why the rule came into place. And when I set the table, I think "the fork goes here" but never "following the fork-on-the-left rule has lower potential costs (or higher potential benefits)." The simpler, direct account of the value placed on rules also seems to have an advantage of efficiency. Less information needs to be

5 We are grateful to an anonymous referee for raising the objection.


stored, less information needs to be retrieved, and less time is required to follow the rule. So if I simply internalize the rule "put balls in blue bucket," with a low utility assigned to violations, I don't have to think further about the motivations of the experimenter, and that in itself is an advantage. We have already acknowledged that some participants might think about the potential costs of breaking the rules but also maintained that many participants likely do not engage in such extra thinking. This generates some testable hypotheses. If some people follow the rule without considering costs, and some follow the rule through considering costs, we should expect processing differences. One way to test this hypothesis would be through retrospective protocol analysis (Ericsson & Simon, 1984). For instance, after the instructions, for the first ball, participants who put the ball in the blue bucket might be asked to report their thoughts prior to putting the ball in the bucket. We might find that some participants explicitly report having thoughts about potential costs of breaking the rule, and others might simply say something about the rule. A key question is whether those who mention the costs would have taken longer to put the ball in the bucket than those who merely mentioned the rule. If such a difference were found, this would support the idea that there are different processes in the two cases. Furthermore, such a finding would be consistent with humans' aversion to cognitive effort at the cost of accuracy (Johnson & Payne, 1985; Zenon et al., 2019), their ubiquitous use of biases and heuristics (Kahneman et al., 1982), and their struggle when requested to respond without the interference of automatic action (e.g., word-reading in the Stroop test; MacLeod, 1991).

If we grant that people assign utility to actions themselves (in addition to assigning utility to the outcomes of actions), we gain a powerful way of accommodating the exceptional instances mentioned earlier. People's commitment to moral rules might shape the utilities they have for different kinds of actions. Part of the reason that people would not push the man in front of the train is that the moral rules that they endorse lead them to assign a low utility to actions like pushing people to their deaths. Something similar holds for less dramatic moral issues, like stealing, cheating, and lying. Some participants in the coin-flip study described earlier plausibly endorse the rule that one should not lie, and this leads them to assign a low utility to actions in which they lie. We can build this into our decision tree. Suppose that a subject in one of these studies is aware that if he lies (option A), there is a 75 percent chance of receiving $5, and if he tells the truth (option B), there is only a 25 percent chance of getting $5. We can suppose that getting $5 would yield 3 units of utility. Now, imagine that for actions that fall under the type, lying, he assigns a low utility, say –2, and for actions that fall under the type, truth telling, he assigns a higher utility, say, 2. Under those circumstances, we can complete our decision tree (see Figure 7.2) and compute that telling the truth yields a greater expected utility compared to lying, despite the fact that the anticipated monetary gain is smaller.6

6 In this case, the expected utility of lying is 0.25, and the expected utility of telling the truth is 2.75.


Figure 7.2 A decision tree reflecting action-based EUT; u(o) = utility of this outcome; for example, the utility of the outcome in which you lie and get $5 is 1 (i.e., –2 + 3).

With action-based EUT, we can partially reharmonize the normative and descriptive decision theories. Consider the participant faced with options about whether to lie in the experiment. Given the actions and outcomes of his options from above (and in Figure 7.2), he assigns a low utility to actions in which he lies. Because he does, he will and should decide in favor of option B (telling the truth) rather than option A (lying). Importantly, though, we need not think that the utility assigned to lying would always overwhelm other factors in moral decision making. Suppose the options are: (C) lie and receive $500,000, (D) tell the truth and receive $0. In that case, the utility he assigns to the money will exceed the utility he assigns to telling the truth; as a result, he will and should choose (C). These examples are simple but may be scaled up. We are proposing that the action types on the table as choices, such as lying and truth-telling, have their own utility values to a person, shaped by moral norms. And those values can then resonate through moral decision making.
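The action-based calculation behind Figure 7.2 can be sketched in the same style (illustrative code; the action utilities –2 and 2, the outcome utility 3, and the probabilities are those given in the text):

```python
# Illustrative sketch of the action-based calculation in Figure 7.2.
# The action itself carries utility (lying: -2, truth telling: +2),
# and receiving the $5 carries utility +3.

ACTION_UTILITY = {"lie": -2, "tell_truth": 2}
MONEY_UTILITY = 3

def expected_utility(action, p_money):
    """Utility of the action type plus the expected utility of the outcome."""
    return ACTION_UTILITY[action] + p_money * MONEY_UTILITY

print("lie:", expected_utility("lie", 0.75))               # 0.25 (note 6)
print("tell truth:", expected_utility("tell_truth", 0.25)) # 2.75 (note 6)
```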

7.1.5 Moral Decision Making and Actions

Thus, action-based EUT has resources to accommodate many cases that cannot be captured by outcome-based EUT. The EUT framework also allows us to appreciate another way in which rules might impact moral decision making. The initial branch of a decision tree outlines the options under consideration by the decision-maker (see Figure 7.1). If a potential option is not represented by the agent, it becomes unavailable for selection. If I'm trying to decide where to go to dinner, and I don't know about some new excellent restaurant, then there is no chance that I will select that option. It isn't available on my decision tree. It's plausible that for many rules, once they are internalized, they effectively prune the decision tree. After internalizing a moral rule prohibiting stealing, that option might not even show up on the decision tree in many contexts where


stealing is in fact a possible option (see, e.g., Phillips & Cushman, 2017). When I go to the hardware store to buy nails, it would be easy to put the nails into my pocket (now that I think of it), but at the time that I was in the hardware store, it never occurred to me that I might steal the nails. Or, to hark back to the trolley cases, remember when you were innocent of philosophical examples. Imagine walking across a bridge when you notice five people on the tracks below who will be hit by a train. You also see a man with a large backpack looking over the bridge. Here’s something that would never occur to you – Should I push this man off the bridge to stop the train? That is not a live option for you. It’s excluded from your option set.
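One way to picture this pruning is as a filter that runs before any expected-utility comparison takes place. The following toy sketch is purely illustrative; the option names and functions are hypothetical rather than drawn from any existing model:

```python
# Toy sketch of rule-based pruning: internalized prohibitions remove
# options before any expected-utility comparison is made.
# The option names and functions here are hypothetical.

def available_actions(context):
    # All actions that are physically possible in this context.
    return ["pay for the nails", "steal the nails", "leave without the nails"]

def prune(actions, prohibitions):
    # Prohibited action types never appear on the decision tree at all.
    return [a for a in actions if a not in prohibitions]

internalized_prohibitions = {"steal the nails"}
live_options = prune(available_actions("hardware store"), internalized_prohibitions)
print(live_options)  # ['pay for the nails', 'leave without the nails']
# An expected-utility calculation would then range over live_options only.
```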

7.2 The Science of Moral Decision Making

7.2.1 Implications for Moral Values and Behaviors: Fairness

We now turn from an abstract characterization of moral decision making to more concrete scientific work on the topic. What are the implications of action-based EUT for moral values and behaviors? Here, we discuss a selection of findings that illustrate some of these implications – with recognition that much more empirical work needs to be done. One prominent area of study concerns how people decide to fairly allocate resources, or how they define fairness. Without question, some fairness norms are universal, including preferences for impartiality, equitable allocations, and reciprocity (Blake et al., 2015; Deutsch, 1975; McAuliffe et al., 2017). In one sense, fairness seems action-agnostic; for example, things are made equal in countless situations. However, allocations that are considered fair have also been found to vary among people and across situations – fairness seems to emerge from a variety of actions (Deutsch, 1975; Niemi et al., 2017; Niemi & Young, 2017; Rawls, 1971, 2001; Trivers, 1971). For example, in some cases, the fair action is giving more to some than others, in order to reduce disparities in need or compensate work; in other cases, the fair action is impartial. There are even individual differences in the emotional underpinnings of fairness values (e.g., need-based fairness is more morally praised by people higher in empathy; Niemi & Young, 2017), as well as divergent neural underpinnings (social and nonsocial cognition; Niemi et al., 2017). The actions that comprise what people consider fair are diverse. Indeed, the variety of actions that can be included in a person's decisions about fairness helps explain why what counts as fairness is subject to ongoing social disagreement. An action-based model of decision making fits with this moral diversity. It also fits with the universality and order we observe: Humans prefer a fairly limited set of higher-order action types to be considered potentially fair. At a coarser grain, internalized fairness norms guide which actions are considered relevant for a decision-maker aiming for fairness (e.g., impartiality or equality?). While people endorse such broad, abstract terms as fairness-relevant and


very morally important, at a finer grain, people making decisions that will be evaluated for fairness must ultimately assess the utility of particular actions, including ways of distributing resources or designing procedures.

7.2.2 Interpersonal Moral Decision Making

In some theories of moral psychology, severity or seriousness is fundamental to morally relevant events. Moral evaluations involve norm-violating events, actions that make an impact – unlike the choice to carry an umbrella. What especially matters in a morally relevant decision is how our decision affects others, as described in the various ways of allocating fairly. Even decisions that might seem personal (e.g., should I get a divorce; should I move away; should I answer that text?) involve calculation of the expected utility of the decision not only for me but for the person who will be affected. In turn, the decisions of others affect what I choose to do. Thus, moral decision making entails negotiating the value of one's own actions, the other's actions, and outcomes – shared and nonshared. How are these different inputs to moral decisions weighed? Possibly, because people presume others are (also) self-interested and playing by the same "rules," moral decision making involves representation of twin decision trees, with outcomes for the other person weighed against outcomes for the self. This possibility is supported by the literature on perspective taking and theory of mind in moral decision making, which suggests that people value more than self-interest when making moral judgments and decisions. People's valuations of others are also reflected in these decisions. In particular, perspective taking can be viewed as a process that enables socially attuned moral decision making, and behavioral and neural evidence supports the possibility that perspective taking is a crucial process during moral cognition, including allocation of blame and praise (Buckholtz & Marois, 2012; Greene & Haidt, 2002; Moll et al., 2002; Young et al., 2010; Young et al., 2007; Yoder & Decety, 2014). Furthermore, other fMRI and behavioral work (Niemi et al., 2017; Niemi & Young, 2017) shows that perspective taking may be behind variation we see in people's moral evaluations of different types of fairness, including the extent to which they consider reciprocity and impartiality praiseworthy. Taken together, this research suggests that moral decision making incorporates the perspectives of others. Different values recruit perspective taking to varying degrees during moral decision making, as do different individuals. Individual differences in empathy and concern for others are associated with numerous kinds of moral decision making, from resource allocation problems to harm dilemmas. Research with the interpersonal orientation task (Van Lange, 1999; Van Lange et al., 1997) over the last few decades indicates that when people are given the choice among three options: an equal allocation of valuable points between the self and an anonymous other (e.g., 450/450), an individualistic allocation that maximizes points to self (e.g., 550/450), or a competitive allocation that


minimizes the other’s points (e.g., 420/320), people typically choose the equal allocation, rather than the self-interested options. Context matters, though. Among business school student participants, choice of the individualistic or competitive options increases, relative to noneconomics students (Van Lange et al., 2011). The fact that we see divergent decision making based on decisionmakers’ capacity or desire to adopt others’ perspectives suggests that, on average, moderation of self-interest by concern for others may be a vulnerable aspect of moral decision making. Other research with individuals with psychopathy indicates that their moral decision making may lack the emotionally aversive responses to harm that people lower in psychopathy demonstrate, leading them to choose the equivalent of “pushing the man off the footbridge” (in this case, smothering a crying baby; Glenn et al., 2010). Accordingly, people high, compared to low, in psychopathy demonstrate a dampened neural response in brain areas for representation of affect (Decety et al., 2013) when viewing another person’s pain (i.e., pictures of injured others); however, they show no reduction, relative to nonpsychopathic individuals, when viewing pain described as having happened to themselves. Certainly, humans are on a spectrum of sensitivity to harm (e.g., a caring continuum; Marsh, 2019) and are sometimes concerned with different outcomes altogether in morally relevant decision making. However, typically, the agent does not make moral decisions in a purely self-interested way. The other person’s outcome (harm or benefits) is referenced and, if present and severe enough, activates emotional responses that give value to the action for the decision-maker.

7.2.3 The In-Group and Moral Decision Making

People's moral decisions have weighty consequences for social life, as they effectively divide people into moral communities (Graham et al., 2009; Haidt, 2012). In turn, moral communities bind together through shared moral conceptualizations of actions. People's group-level moral commitments contain rules that factor into calculations of the expected utility of their individual decisions – the moral codes of in-groups both prune the options for actions and shape the interpretation of actions. For example, membership in a moral community that collectively values empathy and equality might contribute to an interpretation of charitable giving as a way to achieve fairness. Likewise, membership in a group of revolutionaries might turn an act of vandalism into bravery. The influence of the group structure on human psychology has been described for decades. Clearly, decision making molds to the group through a variety of cognitive mechanisms, as shown in research on group conformity, group polarization, and groupthink. Even given minimal, completely arbitrary cues to group membership, people easily identify with a group (Tajfel, 1982); the phenomenon of minimal group formation has been observed in childhood through adulthood (Dunham et al., 2011). Mature moral cognition involves


countless examples of in-group-based decision making, often referred to as in-group bias. For example, participants have been found to favor others who share their political orientation in moral decision making about whether people accused of sexual misconduct should be reprimanded (Klar & McCoy, 2021); and they favor close others over distant others in their causal explanations for moral violations (Niemi et al., 2023). Indeed, a cluster of moral values proposed in moral foundations theory (Graham et al., 2011), referred to as binding values, are concerned with maintaining the bonds of relationships and groups, rather than an individual's obligations to other individuals. Binding values, such as loyalty and respect for authority, by their nature, mold decision making to benefit fellow group members and relationship partners, even at the cost of harming an individual.

The influence of the group on moral decision making represents a factor limiting the influence of empathy and perspective taking (Bloom, 2017). Research on dehumanization, prejudice, and stereotyping shows that affect may be blunted in response to morally relevant needs of out-group members (Zaki & Cikara, 2015; see also Chapter 14 in this volume). If the people affected by one's moral decisions are not viewed as people, then representations of the value of the action and outcome of the decision are unlikely to incorporate rich representations of the outcome's value for the affected person. In that case, instead of weighing and negotiating twin decision trees, the decision-maker's self-interested expectations about utility might overpower the effects of empathy and perspective taking on moral decision making. Neglect of the outcome during moral decision making has also been observed to vary based on the ideological groups with which people identify (Hannikainen et al., 2017). Moral prohibition of actions appears to be more likely for people with more conservative values, as these values tend to involve rules about concrete actions, for example, sexual activity, food taboos, unpatriotic gestures, disgusting behavior. While actions and intentions are typically both factored into moral judgments, it is possible that, sometimes, individuals do not need more than the act itself to be able to comment on its wrongness. There is much room for research that maps the influence of group norms on moral decision making, including how evaluations of the utility of both actions and outcomes are influenced by moral communities and factored into moral decision making.

7.2.4 The Implementation of Decisions and Representations of Action

The structure of moral decision making can be further illuminated by considering the possibility that moral decision making is sometimes outcome-based and sometimes action-based: 1) people ignore outcomes and make morally relevant decisions based on actions alone, as described earlier, and 2) people overlook the value of actions and decide to pursue a morally relevant outcome. Decisions considered "moral" or "ethical" might require neglect of either actions or outcomes. For example, a parent faced with the decision to keep their


unvaccinated child out of school or vaccinate their child and send them to school might ignore the direct outcome of the choice on the child's education and base their decision on an action: injecting the child with the vaccine. Neglecting the discrete actions and focusing on the big picture, the outcome, during deliberate moral decision making, also presents issues. As the research on implementation intentions (Gollwitzer, 1999) indicates, a person who decides on an intended outcome, such as "get more involved with charity this year," or "stop being mean to my brother," is more likely to reach that outcome if it is broken down into actionable steps. Leaving morally relevant goals as abstract outcomes might inspire action, but the outcome is more likely if the concrete actions associated with the goal are realistically evaluated.

When people negotiate a morally relevant decision, their decision may focus on evaluation of the outcome or the prerequisite actions. Research on event segmentation (Kurby & Zacks, 2008; Zacks & Swallow, 2007) finds that people are capable of splitting up events in time in finer and coarser segments. For example, a bride might assign value to each of the following, as high-stakes decision outcomes: the engagement party, the bachelorette party, the catering tasting, the venue selection, the wedding ceremony, the honeymoon, and, finally, being married. A relative of the bride might see things differently, assigning moral weight and high utility to just one outcome: the bride being married. Action-based EUT would suggest that the bride, compared to her relative, perceives more decision trees before being married and, therefore, sees exponentially more options, each associated with valuable actions and outcomes. According to event segmentation theory, both parsing events into smaller action units and larger units reflecting goals are crucial to everyday perception, and it is not unusual for people to segment events in roughly similar ways. People may differ, however, on the "grain" in which they break down one event. Like the bride focused on the many actions before each of the outcomes involved in becoming married, people focused on subgoals of an event describe actions and use more precise verbs (Kurby & Zacks, 2008). By contrast, like the relative of the bride, when people focus on coarser-grained events, different features are perceived: objects and more precise nouns. It is theorized that fine- and coarse-grained event segmentation reflects the capacity and function of working memory, chunking information into cognitively manageable actions and outcomes. Neural research on narrative interpretation and recall demonstrates that short event boundaries reflect activity in sensory regions, whereas longer events reflect activity in "high-level" cognitive areas responsible for abstract models of situations (Baldassano et al., 2017). Actions and outcomes are represented differently in the brain, but they are tied together when we make sense of the world. Actions are nested inside events, but this hierarchical organization doesn't necessarily translate to order between people. When required to negotiate decisions, the bride and her relative may find it difficult to see eye-to-eye about what is the current goal. The utility of the one shared, highly valued outcome, marriage, might be complicated by the proliferation of action and outcome


utility estimates experienced by the bride. At any given point in time before the wedding, it is more likely that decision making will be focused on the outcome for the relative and on some action for the bride. According to Vallacher and Wegner (1989), the target of focus matters, in terms of competence and morality. The authors proposed that action focus versus outcome focus reflects a social-personality dimension of “personal agency”: When we are low-level agents, we are detail-oriented and concerned about mechanism; when we are high-level agents, we see meaning, implications, consequences. Low-level action identification is proposed to be more likely when a person is in unfamiliar territory, feeling their way through one step at a time. High-level action identification, by contrast, is proposed to emerge when a person has some expertise. The authors suggest that low-level and high-level action identification directly relate to moral decision making, with high-level action identification necessary for the kind of causal reasoning and understanding of abstract moral implications that prevent impulsive offenses.

7.2.5 Pruning Options through Valuation of Actions

Unlike the Footbridge problem (push or don't push the man to save five lives), moral dilemmas in real life often have more than yes/no choices such as harm vs. don't harm, or be fair vs. don't be fair. People, guided by instincts, norms, and habits, instead face dilemmas over options for how to act that have various and sometimes unclear moral implications (which is why they are dilemmas). Recalling the example given earlier in this chapter, a parent trying to decide whether to keep their unvaccinated child out of school or vaccinate their child and send them to school might consider several important factors (e.g., disease risk, effects on education, social development, religious concerns), each of which has morally relevant value to the parent. In order to make a decision like this among confusing morally relevant options, a person may transform the options so that they don't conflict with their own values. The possibility that people alter their options and associated actions, implicitly and deliberately, in order to facilitate decision making is supported by research on moral decision making. Research indicates that people do tend to export their value systems when judging or making decisions about others. That is, they believe that what is good and right for them is good and right for the other person; no alternative is possible (Newman et al., 2014). This suggests that if a person is attempting to take the perspective of someone in order to estimate an outcome (i.e., in empathy-guided moral decision making), the perspective they take will ultimately bear a resemblance to their own. In this vein, research on moral hypocrisy shows that people are inconsistent moral judges. When they violate a moral commitment, they may judge themselves more favorably and their behaviors as more morally permissible than someone else who carries out that violation (e.g., Batson et al., 2002; Conway & Peetz, 2012; Graham et al., 2015; Valdesolo & DeSteno, 2007, 2008).


The importance of people’s transformation of actions into morally acceptable options is synchronous with people’s thinking about omissions during decision making. Moral norms transform omissions, the absence of an action, into legitimate moral and immoral options. For example, when a person does nothing in the face of suffering, this may be perceived as a decision associated with bad character, such as callousness or cowardice (Duff, 1993). Moral norms (i.e., to reduce suffering in others) transform nonactions to be just as influential as actions in decision making.

7.3 Conclusion

In this chapter, we've described how action-based EUT accommodates moral decision making, in terms of actions, options, and learning. We focused on EUT to show that, by incorporating action, EUT can explain a great deal of moral decision making. We acknowledge, however, that there is a wide world of moral decisions to explain. Some of them, for example, may be better represented by other accounts, including game theory (Binmore, 2011). Furthermore, the possibility that there are important individual differences and situational influences on moral decision making is suggested by the reported scientific findings. At this point, there are still unanswered questions regarding the integration of EUT and reinforcement learning models. Nevertheless, it is clear that enormous headway has been made over (at least) the last half century in the study of moral judgment and decision making, and the prospects of an increasingly evidence-based understanding of the topic are strong.

References

Baldassano, C., Chen, J., Zadbood, A., Pillow, J. W., Hasson, U., & Norman, K. A. (2017). Discovering event structure in continuous narrative perception and memory. Neuron, 95(3), 709–721. Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996). Mechanisms of moral disengagement in the exercise of moral agency. Journal of Personality and Social Psychology, 71(2), 364–374. Baron, J. (1994). Nonconsequentialist decisions. Behavioral and Brain Sciences, 17(1), 1–10. Baron, J., & Spranca, M. (1997). Protected values. Organizational Behavior and Human Decision Processes, 70(1), 1–16. Batson, C. D., Kobrynowicz, D., Dinnerstein, J. L., Kampf, H. C., & Wilson, A. D. (1997). In a very different voice: Unmasking moral hypocrisy. Journal of Personality and Social Psychology, 72(6), 1335–1348. Batson, C. D., Thompson, E. R., & Chen, H. (2002). Moral hypocrisy: Addressing some alternatives. Journal of Personality and Social Psychology, 83(2), 330–339.


Białek, M., Terbeck, S., & Handley, S. (2014). Cognitive psychological support for the ADC model of moral judgment. AJOB Neuroscience, 5(4), 21–23. Binmore, K. (2011). Natural justice. Oxford University Press. Blake, P. R., McAuliffe, K., Corbit, J., Callaghan, T. C., Barry, O., Bowie, A., Kleutsch, L., Kramer, K. L., Ross, E., Vongsachang, H., Wrangham, R., & Warneken, F. (2015). The ontogeny of fairness in seven societies. Nature, 528(7581), 258–261. Bloom, P. (2017). Empathy and its discontents. Trends in Cognitive Sciences, 21(1), 24–31. Buckholtz, J. W., & Marois, R. (2012). The roots of modern justice: Cognitive and neural foundations of social norms and their enforcement. Nature Neuroscience, 15(5), 655–661. Chakroff, A., Thomas, K. A., Haque, O. S., & Young, L. (2015). An indecent proposal: The dual functions of indirect speech. Cognitive Science, 39(1), 199–211. Conway, P., & Peetz, J. (2012). When does feeling moral actually make you a better person? Conceptual abstraction moderates whether past moral deeds motivate consistency or compensatory behavior. Personality and Social Psychology Bulletin, 38(7), 907–919. Cushman, F. (2013). Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review, 17(3), 273–292. Cushman, F., & Young, L. (2011). Patterns of moral judgment derive from nonmoral psychological representations. Cognitive Science, 35(6), 1052–1075. Decety, J., Chen, C., Harenski, C., & Kiehl, K. A. (2013). An fMRI study of affective perspective taking in individuals with psychopathy: Imagining another in pain does not evoke empathy. Frontiers in Human Neuroscience, 7, Article 489. Deutsch, M. (1975). Equity, equality, and need: What determines which value will be used as the basis of distributive justice? Journal of Social Issues, 31(3), 137–149. Duff, R. A. (1993). Choice, character, and criminal liability. Law and Philosophy, 12(4), 345–383. Dunham, Y., Baron, A. S., & Carey, S. (2011). Consequences of “minimal” group affiliations in children. Child Development, 82(3), 793–811. Ericsson, K. A., & Simon, H. A. (1984). Protocol analysis: Verbal reports as data. MIT Press. Fischbacher, U., & Föllmi-Heusi, F. (2013). Lies in disguise—an experimental study on cheating. Journal of the European Economic Association, 11(3), 525–547. Gaus, J. & Thrasher, J. (2022). Philosophy, politics, and economics. Princeton University Press. Gawronski, B., Armstrong, J., Conway, P., Friesdorf, R., & Hütter, M. (2017). Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making. Journal of Personality and Social Psychology, 113(3), 343–376. Glenn, A. L., Koleva, S., Iyer, R., Graham, J., & Ditto, P. H. (2010). Moral identity in psychopathy. Judgment and Decision Making, 5(7), 497–505. Gollwitzer, P. M. (1999). Implementation intentions: Strong effects of simple plans. American Psychologist, 54(7), 493–503. Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029–1046.


Graham, J., Meindl, P., Koleva, S., Iyer, R., & Johnson, K. M. (2015). When values and behavior conflict: Moral pluralism and intrapersonal moral hypocrisy. Social and Personality Psychology Compass, 9(3), 158–170. Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the moral domain. Journal of Personality and Social Psychology, 101(2), 366–385. Greene, J. D. (2008). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral psychology (Vol. 3, pp. 35–80). MIT Press. Greene, J. D., Cushman, F. A., Stewart, L. E., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2009). Pushing moral buttons: The interaction between personal force and intention in moral judgment. Cognition, 111(3), 364–371. Greene, J. D., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6(12), 517–523. Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage. Hannikainen, I. R., Miller, R. M., & Cushman, F. A. (2017). Act versus impact: Conservatives and liberals exhibit different structural emphases in moral judgment. Ratio, 30(4), 462–493. Henne, P., Niemi, L., Pinillos, Á., De Brigard, F., & Knobe, J. (2019). A counterfactual explanation for the action effect in causal judgment. Cognition, 190, 157–164. Hooker, B. (2000). Ideal code, real world: A rule-consequentialist theory of morality. Oxford University Press. Janoff-Bulman, R., Sheikh, S., & Hepp, S. (2009). Proscriptive versus prescriptive morality: Two faces of moral regulation. Journal of Personality and Social Psychology, 96(3), 521–537. Johnson, E. J., & Payne, J. W. (1985). Effort and accuracy in choice. Management Science, 31(4), 395–414. Kahneman, D. (2011). Thinking, fast and slow. Macmillan. Kahneman, D., Slovic, S. P., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press. Kimbrough, E. O., & Vostroknutov, A. (2018). A portable method of eliciting respect for social norms. Economics Letters, 168, 147–150. Klar, S., & McCoy, A. (2021). Partisan-motivated evaluations of sexual misconduct and the mitigating role of the #MeToo movement. American Journal of Political Science, 65(4), 777–789. Kurby, C. A., & Zacks, J. M. (2008). Segmentation in the perception and memory of events. Trends in Cognitive Sciences, 12(2), 72–79. MacLeod, C. M. (1991). Half a century of research on the Stroop effect: An integrative review. Psychological Bulletin, 109(2), 163–203. Marsh, A. A. (2019). The caring continuum: Evolved hormonal and proximal mechanisms explain prosocial and antisocial extremes. Annual Review of Psychology, 70, 347–371. McAuliffe, K., Blake, P. R., Steinbeis, N., & Warneken, F. (2017). The developmental foundations of human fairness. Nature Human Behaviour, 1(2), 1–9. Moll, J., de Oliveira-Souza, R., Bramati, I. E., & Grafman, J. (2002). Functional networks in emotional moral and nonmoral social judgments. NeuroImage, 16(3, Part A), 696–703.


Moore, A. B., Stevens, J., & Conway, A. R. (2011). Individual differences in sensitivity to reward and punishment predict moral judgment. Personality and Individual Differences, 50(5), 621–625. Newman, G. E., Bloom, P., & Knobe, J. (2014). Value judgments and the true self. Personality and Social Psychology Bulletin, 40(2), 203–216. Nichols, S. (2021). Rational rules: Towards a theory of moral learning. Oxford University Press. Niemi, L., Doris, J. M., & Graham, J. (2023). Who attributes what to whom? Moral values and relational context shape causal attributions to the person or the situation. Cognition, 232, Article 105332. Niemi, L., Wasserman, E., & Young, L. (2017). The behavioral and neural signatures of distinct conceptions of fairness. Social Neuroscience, 13(4), 399–415. Niemi, L., & Young, L. (2017). Who sees what as fair? Mapping individual differences in valuation of reciprocity, charity, and impartiality. Social Justice Research, 30(4), 438–449. Orwell, G. (1946). Politics and the English language. Penguin Classics. Petrinovich, L., & O’Neill, P. (1996). Influence of wording and framing effects on moral intuitions. Ethology and Sociobiology, 17(3), 145–171. Phillips, J., & Cushman, F. (2017). Morality constrains the default representation of what is possible. Proceedings of the National Academy of Sciences, 114(18), 4649–4654. Rawls, J. (1971). A theory of justice. Harvard University Press. Rawls, J. (2001). Justice as fairness: A restatement. Harvard University Press. Ritov, I., & Baron, J. (1992). Status-quo and omission biases, Journal of Risk and Uncertainty, 5, 49–61. Singer, P. (2005). Ethics and intuitions. The Journal of Ethics, 9(3), 331–352. Spranca, M., Minsk, E., & Baron, J. (1991). Omission and commission in judgment and choice. Journal of Experimental Social Psychology, 27(1), 76–105. Tajfel, H. (1982). Social psychology of intergroup relations. Annual Review of Psychology, 33, 1–39. Trivers, R. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35–57. Valdesolo, P., & DeSteno, D. (2007). Moral hypocrisy: Social groups and the flexibility of virtue. Psychological Science, 18(8), 689–690. Valdesolo, P., & DeSteno, D. (2008). The duality of virtue: Deconstructing the moral hypocrite. Journal of Experimental Social Psychology, 44(5), 1334–1338. Vallacher, R. R., & Wegner, D. M. (1989). Levels of personal agency: Individual variation in action identification. Journal of Personality and Social Psychology, 57(4), 660–671. Van Lange, P. A. M. (1999). The pursuit of joint outcomes and equality in outcomes: An integrative model of social value orientation. Journal of Personality and Social Psychology, 77(2), 337–349. Van Lange, P. A. M., Otten, W., De Bruin, E. M. N., & Joireman, J. A. (1997). Development of prosocial, individualistic, and competitive orientations: Theory and preliminary evidence. Journal of Personality and Social Psychology, 73(4), 733–746. Van Lange, P. A. M., Schippers, M., & Balliet, D. (2011). Who volunteers in psychology experiments? An empirical review of prosocial motivation in volunteering. Personality and Individual Differences, 51(3), 279–284.


Willemsen, P., & Reuter, K. (2016). Is there really an omission effect? Philosophical Psychology, 29(8), 1142–1159. Yoder, K. J., & Decety, J. (2014). The good, the bad, and the just: Justice sensitivity predicts neural response during moral evaluation of actions performed by others. Journal of Neuroscience, 34(12), 4161–4166. Young, L., Camprodon, J. A., Hauser, M., Pascual-Leone, A., & Saxe, R. (2010). Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments. Proceedings of the National Academy of Sciences, 107(15), 6753–6758. Young, L., Cushman, F., Hauser, M., & Saxe, R. (2007). The neural basis of the interaction between theory of mind and moral judgment. Proceedings of the National Academy of Sciences, 104(20), 8235–8240. Zacks, J. M., & Swallow, K. M. (2007). Event segmentation. Current Directions in Psychological Science, 16(2), 80–84. Zaki, J., & Cikara, M. (2015). Addressing empathic failures. Current Directions in Psychological Science, 24(6), 471–476. Zenon, A., Solopchuk, O., & Pezzulo, G. (2019). An information-theoretic perspective on the costs of cognition. Neuropsychologia, 123, 5–18.


8 Are Moral Judgments Rational?

Jonathan Baron

In the fall of 2021, many hospitals in the United States were overwhelmed with COVID-19 patients, most of whom had refused to be vaccinated against the disease. Many of these nonvaccinators appealed to moral principles concerning freedom and rights, which they took to outweigh the consequences of their decision. They claimed the right to make decisions about their own bodies and the right to freedom from government control over personal behavior. Some politicians supported these views even to the point of trying to prohibit schools and private businesses from imposing mandates for mask wearing or vaccination. Note that the expected consequences of nonvaccination are bad for everyone. Vaccination reduces the probability of serious illness for the individual, and it reduces the probability of an infected person, even one without symptoms, transmitting the disease to others. If effects on others are morally relevant, then nonvaccination is not only individually irrational but also immoral, unless some other moral principle outweighs these effects. This case is an example of a frequent conflict between moral principles that people advocate and try to follow, on the one hand, and the expected consequences of following those principles, on the other. The moral principles at issue are inconsistent with moral principles based on utilitarianism, which holds that choice options should be evaluated in terms of their expected consequences for all those affected, but this is not all that makes these principles irrational. The choice of nonvaccination for oneself conflicts with expected utility theory (discussed later in the chapter) as applied to individual choices; it is a losing gamble. And opposing vaccinations for others is simply harmful to them, which by itself is inconsistent with any concept of morality. Apparent examples of this sort of inconsistency in the real world have been extensively documented. In many cases, the analysis of expected consequences is based on economics rather than utilitarian analysis, but the conclusions of economic analysis are generally consistent with those that utilitarian analysis would imply.1 Apparent inconsistencies have been found in allocation of

resources to large humanitarian tragedies (Bhatia et al., 2021); in insurance decisions by firms and individuals (Johnson et al., 1993); in excessive attention to some risks coupled with neglect of others (Breyer, 1993; Kunreuther & Slovic, 1978; Sunstein, 2002); in tax policy (McCaffery, 1997); in economic policies concerning trade, price controls, and wages (Caplan, 2007); and elsewhere. All these realistic cases (and many more) support the argument that people’s moral judgments, when put into practice, can lead to consequences that people themselves would consider worse on the whole than what might have been achieved. But the real world is complicated. It is possible that the principles can have a utilitarian defense after all. For example, many of these apparently self-defeating policies arise through the functioning of institutions, such as legislatures and courts, that are imperfect yet better than any feasible alternatives, so that any attempt to overturn their results would, in the long run, make matters worse as a result of weakening these institutions. It thus becomes reasonable to ask whether people really apply nonutilitarian principles when they make moral judgments. One way to answer this question is to do psychology experiments, and those are the main topic of this chapter. At issue is the question of whether we can demonstrate truly irrational and nonutilitarian reasoning in hypothetical or real judgments under controlled conditions and, if so, whether we can learn something about the determinants of these judgments.

1 Traditional economics is concerned with wealth maximization rather than utility maximization. If the winners from some policy change could compensate the losers with enough money so that everyone would rationally agree on the change, the change is recommended, even if the compensation is not paid. Modern “welfare economics” is more consistent with utilitarianism. Like utilitarianism, welfare economics usually assumes that the utility of money is marginally declining, so that simple redistribution from rich to poor, even at some cost, can increase total utility (but not wealth).

8.1 Normative, Descriptive, and Prescriptive “Models” in Experimental Psychology Since the nineteenth century, psychologists have studied reasoning in contexts in which right answers are defined by some formal theory such as the logic of syllogisms. A common finding is that reasoning did not conform well to the model; thus, Henle (1962) begins by pointing out that “[t]he question of whether logic is descriptive of the thinking process, or whether its relation to the thinking process is normative only, seems to be easily answered. Our reasoning does not, for example, ordinarily follow the syllogistic form, and we do fall into contradictions” (p. 366). Around the same time (the 1950s and 1960s), others were comparing human judgments to other normative models, including probability and statistics (Bruner et al., 1956; Chapman & Chapman, 1969; Meehl, 1954). In retrospect, we can think of such research as comparing “descriptive models” – psychological accounts of what people are doing – to normative models. The term “model” is inappropriate because it implies some sort of formal system. A few such systems exist, but the term is used even when they do not. Kahneman and Tversky (1979; Tversky, 1967; Tversky & Kahneman, 1981) began to apply this approach to decisions as well as judgments (and their
1979 paper proposed a true descriptive model that accounted fairly well for choices among simple gambles). Their normative model was expected utility theory in the form proposed by Savage (1954), in which both probability and utility were subjective (even if numerical probabilities were included in problem statements). (See also Chapter 7 in this volume.) Given this normative model, researchers could not always determine whether a given decision conformed to the model or not. For example, one person might prefer $10 for sure over a gamble with a 0.6 probability of $25 and a 0.4 probability of $0. Another person might prefer the gamble. The former person’s utility for $25 might be less than twice as high as her utility for $10, and she might think of 0.6 as “essentially an even chance,” so that her subjective probability of winning would be closer to 0.5. Thus, for her the expected utility (subjective probability times subjective utility) of the gamble would be less than that of $10 for sure. To overcome this problem and show that choices were inconsistent with the normative model, Tversky and Kahneman (1981) emphasized the use of framing effects, in which the same choice, in terms of consequences and their probabilities, was offered in different words. If subjects made different choices in the two versions, then they could not be following the normative model, which concerns consequences and probabilities. A classic example was the Asian disease problem, in which some subjects were told that an Asian disease was approaching and 600 deaths would be expected if nothing was done. In one version, the subjects chose between “200 saved” and a 0.33 chance to save 600. In another version the choice was between “400 die” and a 0.67 probability that 600 would die. Most subjects in the “saved” condition chose “200 saved,” and most subjects in the “die” condition chose the gamble. This experiment had two properties that have received little attention in the extensive literature about it. One is that it is essentially a moral problem, not an individual choice like the money gambles used in other studies. It is moral because it is a decision about the well-being of other people. Research on decisions had slipped from a focus on expected utility to a focus on utilitarianism. Utilitarianism is the natural extension of expected utility to decisions for many people.2 The utilitarian normative model here is to base the decision on the expected number of deaths (usually assuming that the subjective probabilities match the given probabilities). The second property of the Asian disease problem concerns strong preferences for the two options. The expected utilities of the two options are close. Thus, strong preferences for different options violate a feature of utilitarianism (and other moral theories), which is to treat all lives equally. In gains, for

2 Utilitarianism requires that we add up expected utilities across people. In situations such as those at issue, where the people are anonymous and drawn randomly from the same population, they can all be treated as if they have the same expectation. Other situations require interpersonal comparisons, trading off the gains for some people against the losses for others, using what we know about the different individuals.

example, a strong preference for “save 200” implies that the extra 400, beyond the 200 saved, are given less weight than twice that of the first 200 lives. Slovic (e.g., 2007) has explored this finding of unequal treatment extensively. One way to think of this phenomenon is in terms of the curve relating total disutility to number of deaths. People tend to make decisions as if the slope of this curve decreases: the millionth death matters less than the tenth, or the first. Here is another example of the move from individual to moral decisions. The pertussis vaccine used to prevent whooping cough in the 1980s would often cause a disease very much like the one it prevented but at a much lower probability. Despite the clear benefits, many people resisted (and still resist) vaccination (Asch et al., 1994; Sherman et al., 2021). Ritov and Baron (1990) found, in a laboratory study, that many people would not want such a vaccine, because (presumably) they would not want to cause the disease through their action. Ritov and Baron also found that people would also oppose requiring the vaccine as a public health measure. The individual decision was purely a matter of self-interest, but the public health decision was moral, because it concerned the well-being of other people. Note that, in this case, the self-interested decision is irrational (from the perspective of expected utility) because omission of the vaccine increases personal risk. Could we say that the moral omission is also irrational because it means that more people will be sick? Some moral systems have a rule against using people as means to help others, and it could be argued that those who suffer from the side effects will serve as means to prevent disease in a greater number. Yet it seems inconsistent to say that the decision that is rational for each individual is immoral when applied to the population. In these examples, the general approach of comparing laboratory decisions to normative models can be, and has been, extended from individual decisions to moral decisions, often with the implicit use of utilitarianism as a normative model. Further research, some of which I review here, finds that the departures from utilitarianism are systematic. As noted, some of these departures result from distortions in the way people think about quantities. Many others result from the application of nonutilitarian principles to the problems of interest. These principles may be absolute or “prima facie,” that is, considerations that can be overridden by other considerations (Ross, 1930/2002). Examples are: “We have a right to control our bodies,” “Do no harm” (meaning do no harm through action, as opposed to omission), “Do not use people as means to achieve better outcomes for others,” or “Do not kill innocent people.” They are often called “moral intuitions” (Hare, 1981) or “moral heuristics” (Sunstein, 2005). “Heuristics” originally referred to weak methods that might be helpful in solving problems, such as: “Do you know a related problem?” (Polya, 1945), but the term was used by Tversky and Kahneman to refer to judgment tasks. An example is judging the probability that someone is a member of a group by the similarity of that person to prototypical members of the group, thus ignoring other relevant attributes such as the size of the group (Tversky & Kahneman, 1974).
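
To make the preceding point concrete, the following small Python sketch (an illustration only; the exponent 0.7 and the functional form are assumptions, not estimates from the studies cited) shows how diminishing sensitivity to numbers of lives reproduces the modal choices in the Asian disease problem, even though the two options in each frame have the same expected number of deaths.

```python
# Illustrative sketch (assumed value function, not from the chapter): diminishing
# sensitivity to lives, applied to gains ("saved") and losses ("die").

def value(lives, frame):
    v = lives ** 0.7          # assumed concave magnitude function
    return v if frame == "saved" else -v

# Gain frame: "200 saved" for sure vs. a 1/3 chance that all 600 are saved.
sure_gain = value(200, "saved")
gamble_gain = (1 / 3) * value(600, "saved")

# Loss frame: "400 die" for sure vs. a 2/3 chance that all 600 die.
sure_loss = value(400, "die")
gamble_loss = (2 / 3) * value(600, "die")

print(sure_gain > gamble_gain)   # True: prefer the sure thing when framed as saving
print(sure_loss > gamble_loss)   # False: prefer the gamble when framed as dying
```

On this assumed function, the 400 additional lives in the gamble add less than twice the value of the first 200, which is exactly the unequal weighting of lives described above.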


When this view is extended to moral judgments, other problems arise. In principle, a heuristic is a “fast and frugal” method that often works but sometimes does not.3 In morality, though, some of these heuristics seem to become hardened into rules that people knowingly apply, believing that they constitute the best possible moral judgments. Theologians and philosophers defend these rules as normative in this way (e.g., the rule about not using people as means). Such rules are often part of deontology, a class of moral systems based on rules, rights, and duties, which go beyond simply bringing about the best expected consequences for all.4 Thus, much of the research on moral judgment focuses not so much on heuristics but on the contrast between deontological rules and utilitarianism. In this research literature, the terms “deontological” and “utilitarian” are not meant to imply that choices are based on representations of either system, just that they are consistent with what those choices would be.5 Here I use the term “(moral) intuition,” for what others call moral heuristics. I think it captures the idea that the relevant moral principles tend to be evoked immediately upon presentation of a moral problem, without any extra effort to look for other relevant considerations. All normative models are controversial to some degree, including Bayesian probability theory and expected utility theory (e.g., Ellsberg, 1961), but utilitarianism seems more controversial than most of the others that are studied psychologically, in part because it yields conclusions that seem to conflict with strong moral intuitions held by philosophers and psychology researchers as well as by experimental subjects. Hare (1981) has dealt with this conflict explicitly and in depth. His approach turns out to be surprisingly relevant to experimental psychology (as I discuss later). But there are other reasons for looking for biases relative to a utilitarian normative model, even for those who do not accept utilitarianism as truly normative. Specifically, if people consistently violate the utilitarian standard in the same biased way (as in favoring harmful omissions over less harmful acts), we should not be surprised if the real consequences turn out to be clearly worse than if the utilitarian standard were followed.6 As I suggested, many examples in the real world can be explained in terms of such biases. Thus, the study of violations of utilitarianism can at least help us understand why things

3 The term “heuristic” is also used to refer to simple algorithms that are more accurate than the more complex algorithms they replace (Gigerenzer et al., 1999).

4 Deontological systems usually include a role for consequences, but as one criterion among many.

5 The term “deontology” is variously defined, and many of the intuitions are deontological only in the broadest sense that they refer to properties of an action (or inaction) other than its consequences. By “consequences,” I refer to states of the world to which individuals assign some value. The values must depend on the states alone and not on whatever actions or natural causes brought them about (although actions in themselves can be evaluated as states); values are thus not opinions about what should be done.

6 That is, the resulting state of affairs would be judged as worse, if the means of reaching it (by following some nonutilitarian moral rule) were ignored.

in the real world are not as good as they could be. If the violation of utilitarian standards is the result of following nonutilitarian moral rules, then we at least learn the potential cost of following those rules. Of course, much more could be said in defense of utilitarianism (e.g., Hare, 1981, whose other work is nicely summarized by Singer, 2002), but this is not the place for it.7 In the 1980s, a third type of “model” became apparent, which was called “prescriptive” (Bell et al., 1988).8 The idea is that normative models specify a standard and prescriptive models tell us what to do in order to do better by that standard. The distinction arose because the idea of “decision analysis” was exactly to bring decisions into conformity with various forms of expected utility theory, but decision analysis had many techniques that were not part of that theory, and most of them lead to approximations at best, for example, ways of estimating probability and utility of outcomes. Expected utility theory tells us about the mathematical relations among inputs and outputs. Decision analysis is not the only prescription for good decision making (where good is defined in terms of expected utility). Others are “decision architecture” (Thaler & Sunstein, 2008), and various educational programs. As an example of the distinction, consider the concept of division in arithmetic. The formal definition (normative model) is (roughly): if A/B = C, then A = BC; we define division (A/B) in terms of multiplication. But this definition does not tell us how to do it. Many people now learn “long division” as a prescriptive procedure to divide large decimal numbers using successive approximations, starting with the leftmost digits in the dividend. To “understand long division” is to see why this procedure leads to a normatively good conclusion (exact for integers but sometimes just a good approximation). For utilitarianism as a normative model, various prescriptive models have been proposed. One is simply to start with knowledge of the normative idea and then ask if it is obvious which option is better, for example, when the matter involves substantial benefits to some people at the cost of small inconvenience to others (e.g., vision tests for driver’s licenses). Another is decision analysis itself when it includes estimates of utilities for different groups of people who are affected. (This is close to “welfare economics.”) Another is to follow a rule, such as: “Do not commit adultery.” Hare (1981) suggests that, in real cases where adultery is an issue, any attempt to evaluate probabilities and utilities will be so biased by emotional factors that it is likely to lead to erroneous and very harmful consequences. (J. S. Mill made similar arguments.) Thus, the best way

7 This would include discussion of the complex idea of rules that result from “rule utilitarianism,” which are sometimes consistent with act utilitarianism (as discussed by Hare, 1981 and elsewhere), and sometimes prescriptive rules, as discussed later here.

8 Others used the term to refer to a semantic property of statements, which is that they say what to do. In this meaning, all moral statements are prescriptive, for example, the Ten Commandments, which are “commands.”


to conform to the utilitarian standard is sometimes to try to do something else. Moral intuitions may also be prescriptive in this sense. Hare (following Sidgwick) thinks they usually are, but the research described here suggests that some are more harmful than helpful. We might say that a major prescriptive question for utilitarians is how to bring up (and educate) children so that they do not fall prey to the harmful intuitions. The approach to the study of moral judgment discussed here is an extension of the general approach to the study of judgments and decisions just summarized. We look for biases, that is, systematic departures from normative models. We try to explain these in terms of descriptive models. Based on what we have found, we then (ideally) try to design prescriptions for fixing these biases. This approach makes the study of moral judgment part of applied psychology, part of an attempt to make things better, like clinical psychology or educational psychology, except that we try to make the normative model, the standard for success, explicit. Alternative approaches, not discussed here, involve description “for its own sake,” without any attempt at evaluation and improvement.
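
As a purely illustrative rendering of the long-division example above (the code and numbers are mine, not the chapter’s), the following sketch carries out the prescriptive procedure digit by digit and then checks the result against the normative definition of division in terms of multiplication.

```python
# Long division as a prescriptive procedure: process the dividend's digits from
# the left, keeping a running remainder, then verify against the definition.

def long_division(dividend, divisor):
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)
        quotient_digits.append(str(remainder // divisor))
        remainder = remainder % divisor
    return int("".join(quotient_digits)), remainder

quotient, remainder = long_division(987654, 32)
assert quotient * 32 + remainder == 987654   # the normative check: A = B*C + r
print(quotient, remainder)                   # 30864 6
```

The procedure is justified by the definition it serves, but it is not the same thing as the definition; that is the relation in which prescriptive models stand to normative ones.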

8.2 Methods and Biases In this section I will discuss several experimental methods and possible biases, organized by method rather than substantive topic, although I comment on the normative approach to some of the topics. All of these methods are potentially capable of showing that judgments or hypothetical decisions are nonutilitarian. It is worth noting that essentially all of the nonutilitarian biases I describe here are the result of processes also found in nonmoral situations. Cushman and Young (2011) and Greene (2008) have argued explicitly for the parallelism between “cognitive biases” and patterns found in moral judgment.

8.2.1 Framing Effects A framing effect, as noted already, is found when two equivalent cases yield different responses. An example from moral psychology is the effect on fairness judgments of describing tax rate differences as surcharges or bonuses, holding constant the same pre-tax and post-tax income levels. McCaffery and Baron (2004), inspired by classroom demonstrations reported by Thomas Schelling, presented subjects with examples like the following (edited for simplicity): Childless Surcharge Low income: A married couple with $25,000 total income and two children pays $3,000 in taxes, as a couple. The same couple, if it had no children, would pay a surcharge of $1,000. High income: A married couple with $100,000 total income and two children pays $30,000 in taxes, as a couple. The same couple, if it had no children, would pay a surcharge of $3,000.


Child Bonus Low income: A married couple with $25,000 total income and no children pays $4,000 in taxes, as a couple. The same couple, if it had two children, would get a bonus of $1,000. High income: A married couple with $100,000 total income and no children pays $33,000 in taxes, as a couple. The same couple, if it had two children, would get a bonus of $3,000. For the Child Surcharge, most subjects judged the surcharge as too high for the low-income family and too low for the high-income family. For the Child Bonus, they judged the bonus as too low for the low-income family and too high for the high-income family. Yet the high-income bonus and surcharge are the same, as are the low-income bonus and surcharge. Although the question is about fairness, a moral property, nothing here depends on utilitarianism as such. The intuitions about fairness that drive this result cannot be consistent with utilitarianism because they lead to different consequences depending on the description. In another example of a framing effect, Harris and Joyce (1980) told subjects that a group of partners had opened a business (e.g., selling plants at a flea market). The partners took turns operating the business, so different amounts of income were generated while each partner was in control, and different costs were incurred. Subjects favored equal division of profits when they were asked about division of profits, and they favored equal division of expenses when asked about expenses. Because expenses and profits were unequal in different ways, their two judgments conflicted. This result depends on an intuitive principle of equality, and, again, the inconsistency does not depend on utilitarianism. A more complex framing effect concerns the effect of marriage (McCaffery & Baron, 2004). When asked directly, many subjects favor “marriage neutrality,” which means that marriage does not affect the total taxes paid. People also favor progressive taxation, which means that those with higher incomes pay a higher percentage in taxes. Finally, people tend to favor “couples neutrality,” which means that couples with the same income pay the same tax, regardless of which earner makes more. Careful reflection (left as an exercise for the reader) implies that these three principles are incompatible. One of them must give.9 This is, like the child bonus/surcharge, a logical inconsistency, hence a form of framing effect, which involves focusing on the question that is asked, an “isolation effect” (discussed later).
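
For readers who want to work the exercise, here is a minimal numeric sketch (the 10 percent and 40 percent rates are arbitrary assumptions, chosen only to make the schedule progressive); it follows the hint given in the footnote below.

```python
# Sketch: with any strictly progressive tax applied to individuals, marriage
# neutrality, progressivity, and couples neutrality cannot all hold.

def tax(income):
    # hypothetical progressive schedule: 10% up to $100,000, 40% above that
    return 0.10 * min(income, 100_000) + 0.40 * max(income - 100_000, 0)

one_earner = (200_000, 0)          # couple A
two_earners = (100_000, 100_000)   # couple B, same total income

unmarried_a = sum(tax(x) for x in one_earner)    # 50,000
unmarried_b = sum(tax(x) for x in two_earners)   # 20,000
print(unmarried_a, unmarried_b)

# Progressivity makes the unmarried totals differ. Marriage neutrality would
# carry those unequal totals over to the married couples, but couples
# neutrality requires the two married couples to pay the same. One must give.
```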

9 Hint: Compare the case where one partner earns $200,000 and the other earns $0 to the case where each earns $100,000. Before marriage, the total tax is higher for the first couple.

8.2.2 Contrast of Utilitarian and Nonutilitarian Options Other methods involve asking subjects to decide between two options, one of which is consistent with utilitarianism and the other of which is not. The nonutilitarian option deviates by exemplifying a particular bias.


8.2.2.1 Omissions A great deal of research has concerned action/omission dilemmas such as the vaccine case already described, in which people are more willing to accept the harms caused by omission than the harms caused by action. Although Ritov and Baron (1990) coined the term “omission bias” as a name for this bias, that term was misleading. A simple bias toward omission would be a bias toward the default, whatever it is. Although a default bias does exist, it plays a minor role in the bias at issue (Baron & Ritov, 1994). Another determinant is the amplification effect, in which the consequences of action are simply given more weight than those of omission. If both options involve gains rather than losses, the amplification effect induces a bias toward action, which can be large enough to overcome the default bias. Recent studies have tended to concern a set of dilemmas originally designed by philosophers as extreme cases on which to test, and try to explain, their moral intuitions (e.g., Foot, 1967/1978). In the simple trolley case, a runaway trolley is headed toward five people and will kill them if nothing is done. You can divert the trolley onto another track where it will kill only one person. Most people think diversion is the best response. In the “footbridge” version, the only way to stop the trolley is to push a large man off a footbridge, so that he falls on the track and blocks the trolley, being killed in the process. Most people resist this solution, and many experiments have tried to examine and explain this sort of difference. A potential issue for experiments like these is what question to ask. In many experiments, the researcher asks: “Is it acceptable to push the man . . .?” The problem with this is that “acceptable” applies only to a single option, and utilitarians (and others concerned with decisions) must ask the question “compared to what?” The relevant question for us is which option is better, morally. Deontology often makes distinctions between what is permitted, forbidden, or morally required. Because these categories apply to options, not choices, it is possible for both options in a choice to be acceptable, or both forbidden. Other alternatives that work for everyone are: “Which option should [the agent] choose?” and “Which option should you choose [if you were the agent]?” Some studies have replaced “should” with “would.” This may be interesting, but some people say, explaining themselves, “I know that I should do it, but I could not bring myself to actually do it” (Baron, 1992).
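
Returning to the vaccination case that opens this section, a small sketch with assumed risk numbers (mine, not those of the original studies) shows why refusing the vaccine is the more harmful option in expectation, even though the harm it risks arrives by omission.

```python
# Assumed illustrative risks: the vaccine can cause a disease like the one it
# prevents, but with much lower probability than the natural disease.

p_harm_if_vaccinated = 0.001      # assumed side-effect risk (act)
p_harm_if_unvaccinated = 0.010    # assumed disease risk (omission)

expected_harm = {
    "vaccinate (act)": p_harm_if_vaccinated,
    "decline (omission)": p_harm_if_unvaccinated,
}
print(min(expected_harm, key=expected_harm.get))  # "vaccinate (act)"
# Yet many subjects prefer the omission, because its harm is not "caused" by them.
```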

8.2.2.2 The Nature of “Omission Bias” The literature has identified two major determinants of “omission bias”: deontological rules and the use of a limited concept of causality.10

10 Spranca et al. (2001), Experiments 5 and 6, examined other possible determinants and found some evidence for some of them, and other literature has found still others, especially a preference for harm caused by “nature” over harm caused by people (e.g., Kahneman & Ritov, 1994). The possible causes of the basic result are not mutually exclusive and are often confounded.


Rules favoring omission are more common than those favoring action (Baron & Ritov, 2009). Rules that prescribe acts are usually conditional on some role. A physician, once accepting a patient, is morally and legally obliged to try to save the patient’s life (unless instructed otherwise) but a rule requiring anyone to try to save every life at risk is impossible to take seriously. Likewise, a rule against performing abortions is easier to follow than a rule requiring prevention of miscarriages in similar situations. “Utilitarian moral dilemmas” often involve rules of this sort, such as prohibitions against killing, or tampering with human genes that affect future generations. When these rules are understood to be absolute (as discussed in Section 8.2.2.3), we would expect that subjects would object to action regardless of how beneficial its consequences are. These results are found (Baron & Ritov, 2009). Thus, one determinant of the usual bias favoring omissions over less harmful acts, is the result of specific rules that are understood to be absolute (or nearly so). Another determinant concerns causality (Cushman, 2008). We can (loosely) classify judgments of causality into two categories. One category, which includes “but for” causality, may be called “make a difference.” You are (perhaps partially) causally responsible for some outcome if something under your control could have made a difference in whether the outcome occurred or not. This view does not distinguish acts and omissions as such. It is often applied to tort law, especially lawsuits against someone who is supposed to take care to avoid harming others. Utilitarianism implies make-a-difference causality, at least when the options are clearly laid out and both possible.11 The other category might be called direct causality. By this view, you are causally responsible for some outcome if there is a chain of events between your action and the outcome, with each link in the chain following some known principle of causality, such as the laws of physics (but any science will do). By this view, people may sometimes be held morally responsible for outcomes that they could not have avoided. (Spranca et al., 1991, report a few instances of this.) Young children tend to consider outcomes only, thus judging that an act is wrong if it causes harm by accident (Piaget, 1932). The apparent bias toward harmful omissions over less harmful acts seems to be closely related to direct causality. Supporting a role for perceived direct causality, Baron and Ritov (1994, Experiment 4) compared the original vaccination case (in which vaccination deaths were side effects) with a “vaccine failure” case, in which the deaths that result if the vaccination is chosen are caused by its failure to prevent the natural disease. The bias against vaccination (action) was much stronger in the original condition than in the vaccine failure condition. 11

11 Utilitarianism is often criticized for implying infinite obligations that cannot possibly be met. One answer (among many) is that utilitarianism need not imply that we are bad people if we fail to maximize utility, as this is inevitable; it is a doctrine of better vs. worse, not best vs. failure. Second, as a part of decision theory, utilitarianism applies to options that are “on the table” in any particular real decision. Some options are not considered, perhaps because of prior commitments to others, or because self-interest is just too strong, or simply because they are not offered to the decision maker.


Royzman and Baron (2002) compared cases in which an action caused direct harm with those in which an action caused harm as a side effect (i.e., “caused” only in the make-a-difference sense). For example, in one case, a runaway missile is heading for a large commercial airliner. A military commander can prevent collision with the airliner either by interposing a small plane between the missile and the large plane or by asking the large plane to turn, in which case the missile would hit a small plane now behind the large one. The indirect case (the latter) was preferred. In Study 3, subjects compared indirect action, direct action, and omission (i.e., doing nothing to prevent the missile from striking the airliner). Subjects strongly preferred omission to direct action but only weakly preferred omission to indirect action. Baron and Ritov (2009, Study 3) found similar results; they also found that perceived action was the main determinant of bias against action. Greene et al. (2009) found that direct causality is a matter of degree. The most resistance to action occurred when a physical effect of action (hands-on pushing a man) caused a death, compared to cases in which the causal link between action and outcome involved more steps. In sum, it seems that the bias against beneficial action is the result of at least two factors other than default bias: the perception of direct causality, as opposed to make-a-difference causality; and the commitment to particular rules that prohibit certain actions. All of these studies, it should be noted, are consistent with sometimes extreme individual differences, with some subjects making the utilitarian response almost all the time. These subjects apparently do follow make-a-difference causality. In some experiments, we have found subjects who equate inaction with standing by in the face of evil, as with those German citizens who tolerated Hitler (e.g., Spranca et al., 1991). Note that some of these studies also ask about “blame” or “responsibility.” The latter term is ambiguous between causal, moral, and legal meanings (Malle, 2021). The former is sometimes subsumed under the term “punishment,” which is examined more directly (and less ambiguously) in other experiments (later in this chapter).

8.2.2.3 Protected Values Some deontological rules are taken to be absolute (Baron & Ritov, 2009). Tetlock (e.g., 2003), has used the term “sacred values” for essentially the same phenomenon, and Roth (2007) has used the term “repugnant transactions” for moral prohibitions on transactions such as a live donor selling a kidney. These protected values (PVs) are thus “protected” from trade-offs with other considerations. Some PVs are based on religion, but many are held by atheists, such as rules against cloning or genetic engineering of humans. In such cases, people say they should not violate the rule (usually a prohibition) no matter how great the benefits are. However, when asked to try hard to think of cases in which the benefits would be great enough, or when given some possible counterexamples,


most people admit that the rules are not in fact absolute, so they seem to be absolute only as a result of insufficient reflection (Baron & Leshner, 2000; Tetlock et al., 2017). Protected values may function as heuristics that serve the purpose of avoiding further thought about whether some trade-off is warranted (Hanselmann & Tanner, 2008). Thus, they appear to be nonutilitarian. However, J. S. Mill (1859) argued that we should follow certain moral rules even if it seems clear that the consequences of breaking them in some situation would be better than those of following the rule. Suppression of free speech was an example. The idea here is that our own judgments about expected consequences in such cases are not trustworthy; we are subject to self-serving biases and ordinary error. We do not need to deceive ourselves in order to follow such rules. When asked to join a terrorist cell, a person today might think to himself: It seems that the cause is just, and that the total harm of the deaths that we would cause would be much smaller than the harm we would prevent by carrying through the plan. But I know that almost all the terrorists throughout history have drawn just this conclusion, and the vast majority of them have been incorrect. Thus, it is probably best if I don’t join.

Note that everything is conscious here. No self-deception is needed. Thus, in experiments on PVs, it is worth giving subjects ways to express apparent PVs that are actually consistent with utilitarianism. Baron and Leshner (2000, Experiment 2) included the following, among other nonexclusive options for responses to possible PV items such as “cutting all the trees in an old-growth forest”:

(1) I cannot imagine any situations in which this is acceptable. (38)
(2) I can imagine situations in which the benefits are great enough to justify this, but these situations do not happen in the real world. (7)
(3) There are situations in the real world in which the benefits are great enough, but people cannot recognize these situations, so it is best never to do this. (9)
(4) This is unacceptable as a general rule, but we should make exceptions to it if we are sure enough. (28)

The percentages of choices are shown in parentheses, so it seems that apparent PVs are not usually the result of a Mill-type explanation and are truly nonutilitarian principles. In this experiment, only the first response (with 38 percent) represented a true PV. Note that our claim here is that PVs exist with sufficient prevalence to matter; both subjects and items differed substantially in the prevalence of true PVs.

8.2.2.4 Parochialism and Self-interest From a utilitarian perspective – as well as many other perspectives – a major bias in people’s reasoning is parochialism (Baron, 2012a, 2012b; also called “ingroup bias”). The technical use of the term refers to a class of experimental social-dilemma games (Bornstein & Ben-Yossef, 1994). In a social dilemma,


each player can help other players in the group at some cost to himself, and the total benefit to the group is greater than the cost. This is called “cooperation.” Examples in the real world are widespread, from doing one’s job without shirking, to following rules (e.g., rules for income taxes) even when there is no chance of getting caught breaking them, to contributing to charities. Parochialism arises when each player’s behavior can affect an in-group and an out-group, and some players are willing to help the in-group at some personal cost while hurting the out-group even more (perhaps as a result of ignoring the out-group). Consider voting as an example. “Cooperation” means voting for the candidate or proposition that is best for those who are relevant to your vote, which could be you and your family, your compatriots, or everyone in the world. Defection in this example is not voting. Voting has a cost. It is well known, but not well understood, that the probability of being the pivotal (decisive) voter is so low that, even if you gain a large amount of money from your side winning, the expected return of voting is, like that of a lottery ticket, not worth the cost. However, if you care enough about other people, taking their utilities as part of your own, with some weight for each other person, then voting can be worth the cost (Edlin et al., 2007). Given this mathematical fact, a situation could arise in which it is not worth voting if all you care about is yourself, not quite worth voting if you care about your nation, but well worth voting if you care about humanity. If you are rational, you would then vote for candidates or proposals that are best for humanity. Otherwise, voting is not worth the cost. The same applies to many other forms of political action. Possible current examples of policies that affect the world are climate change, refugees, migration, population pressure on natural resources, fisheries, biodiversity, world peace, world trade, and the strength of international institutions that attempt to regulate these matters. Nationalist policies often work to the detriment of adequate attention to these issues. In its general form, parochialism is a candidate for the most harmful departure from utilitarian decision making. “Cosmopolitanism” is sometimes used as a technical term for the attitude of caring about the world. Although this attitude sounds as idealistic and fanciful as the John Lennon song “Imagine,” in fact it is fairly common in the modern world (Buchan et al., 2011; Buchan et al., 2009). Arguably, it could arise as a result of reflection (Singer, 1982). What principle can justify caring about some people but ignoring others? Answers could arise, but when we reflect on them (without bias toward inherited opinions) they may seem weak. Surely this sort of reasoning was part of what has led people to oppose slavery and to promote women’s political and legal rights. The absence of it allows parochialism to exist. Other sorts of reasoning lead to parochialism (Baron, 2012a, 2012b). People think they have a duty to support their nation because their nation has given them the vote, or in return for what their nation has done for them. (Of course, most nations do not tell their citizens, even naturalized citizens, that this is expected, and it is well known that some voters, especially in a nation of


immigrants, are concerned with particular foreign countries to which they are tied in some way.)
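
The voting argument can be made concrete with a rough back-of-the-envelope sketch; every number below is an assumption chosen only to show the structure of the calculation (see Edlin et al., 2007, for the careful version).

```python
# Expected value of voting: negligible from pure self-interest, but potentially
# large once other people's benefits are given even a small weight.

p_pivotal = 1e-7           # assumed chance that one vote decides the outcome
benefit_self = 5_000       # assumed personal benefit if the better side wins ($)
benefit_per_person = 100   # assumed average benefit per affected person ($)
n_affected = 100_000_000   # assumed number of people affected
altruism_weight = 0.05     # assumed weight placed on each other person's benefit
cost_of_voting = 20        # assumed cost of voting ($ equivalent)

ev_selfish = p_pivotal * benefit_self
ev_caring = p_pivotal * (benefit_self + altruism_weight * benefit_per_person * n_affected)

print(ev_selfish < cost_of_voting)   # True: not worth it for yourself alone
print(ev_caring > cost_of_voting)    # True: worth it if everyone affected gets some weight
```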

8.2.3 Attending to Irrelevant Attributes or Ignoring Relevant Ones Kahneman and Frederick (2002) proposed that many biases can be explained in terms of “attribute substitution.” Two options differ in terms of two or more attributes. Some attributes are normatively relevant and some are not, but the latter are easier to use and typically correlated (imperfectly) with the relevant ones. So people use the irrelevant ones and sometimes ignore the relevant ones completely.
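
In the representativeness example mentioned earlier (judging group membership by similarity to the group prototype), the easy attribute is similarity and the neglected one is the group’s size. A minimal Bayes sketch with assumed numbers shows how much the neglected attribute matters.

```python
def posterior(prior, p_description_if_member, p_description_if_not):
    """Bayes' rule for P(member | description)."""
    numerator = prior * p_description_if_member
    return numerator / (numerator + (1 - prior) * p_description_if_not)

# Assumed numbers: the description fits the stereotyped group well (0.9 vs. 0.1),
# but members of that group are rare (base rate 0.02).
print(posterior(prior=0.02, p_description_if_member=0.9, p_description_if_not=0.1))
# ~0.155: membership is still unlikely, despite the resemblance driving the intuition
```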

8.2.3.1 Allocation A great deal of research has examined how people think they should allocate benefits and burdens. Allocations can be local, such as the distribution of grades in a class or housework among those living together. But I focus here on policy. These issues include income, wealth, taxes, criminal penalties, tort fines, insurance, and compensation. Much of this research has examined the principles that people use for allocation decisions (e.g., Deutsch, 1975). These include equality (everyone gets the same); contribution (to each according to their contribution, also called “equity”); need (to each according to need); and maximization (e.g., maximization of total wealth – economic efficiency or total utility). But punishment also raises questions about distribution. What principle should determine criminal or tort penalties? Likewise compensation for misfortune, whether at the hands of nature or a harmful act of someone else; compensation is provided by insurance, social insurance, or tort penalties. Utilitarian theory implies that distributions of goods (e.g., of income or wealth) should be based on maximization of utility, but this principle implies two other criteria: declining marginal utility of most goods, and incentive. A given amount of money has more utility to the poor than to the rich (i.e., the utility of money is marginally declining, that is, the slope of the curve relating utility to money decreases as the amount of money increases). Hence, other things being equal, utility would be maximized if we took from the rich and gave to the poor until everyone is equal. However, this would prevent the use of income as an incentive for work (and has “transaction costs” of its own). Hence, maximization requires a compromise between equality and contribution. Such a principle is useless for psychology experiments. Even if it were possible to calculate the optimum trade-off, ordinary people would have no way of knowing the result. However, experiments can show deviations from any such model, even nonutilitarian models that incorporate similar assumptions. Such deviations can be explained in terms of simple heuristics such as equality, or demonstrated by framing effects, such as those described earlier.
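
Although no experiment requires computing the optimal trade-off, a toy sketch shows why maximization implies a compromise between equality and contribution; the log utility function and the 30 percent incentive loss are illustrative assumptions only.

```python
import math

def total_utility(transfer, rich=100_000, poor=20_000, loss_rate=0.3):
    """Sum of log utilities when `transfer` is moved from rich to poor,
    with an assumed fraction of each dollar lost to blunted incentives."""
    return math.log(rich - transfer) + math.log(poor + (1 - loss_rate) * transfer)

best = max(range(0, 80_001, 1_000), key=total_utility)
print(best)
# About 36,000 with these numbers: substantial redistribution, but well short of
# the roughly 47,000 transfer that would equalize the two incomes here.
```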


People sometimes prefer equality over maximization that involves lives rather than money.12 Several studies (e.g., Ubel & Loewenstein, 1996) have presented subjects with allocation dilemmas of the following sort: Two groups of 100 people each are waiting for liver transplants. Members of group A have a 70 percent chance of survival after transplantation, and members of group B have a 30 percent chance. How should 100 livers – the total supply – be allocated between the two groups? The simple utilitarian answer is “all to group A,” but only a minority of the subjects chose this allocation. People want to give some livers to group B, even if less than half. Many want to give half. Many people are willing to trade lives for fairness to the two named groups. Surely there is some third group that is not in the scheme at all, so inequality is inevitable. (Such results are found even when group membership is unknown to anyone: Baron, 1995.)
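
The utilitarian arithmetic in this dilemma is simple enough to spell out (the survival rates below are those given in the vignette; the code is only an illustration).

```python
def expected_survivors(livers_to_a, p_a=0.7, p_b=0.3, total_livers=100):
    return livers_to_a * p_a + (total_livers - livers_to_a) * p_b

for split in (100, 70, 50, 0):
    print(split, expected_survivors(split))
# 100 -> 70.0, 70 -> 58.0, 50 -> 50.0, 0 -> 30.0
# Every liver moved from group A to group B costs 0.4 expected survivors, so
# "all to group A" maximizes lives saved; the popular 50/50 split gives up 20.
```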

8.2.3.2 Compensation and Deterrence in Tort Law and Criminal Law Compensation is justified by declining marginal utility. If you have a house fire that requires construction work, or an illness that requires expensive treatment, your utility for money increases. You have an immediate need for more of it. Insurance, including medical insurance and social insurance (such as unemployment compensation) is a scheme for transferring money from those who have a lower utility, those who pay insurance premiums or taxes, to those who have a higher utility. Like progressive taxes, compensation should be limited when its availability can provide incentive for reducing risks. For example, fire insurance could require installation of fire extinguishers. Health insurance may cost more for smokers, but this is consistent with utilitarianism only if this incentive effect actually causes people not to smoke. Tort penalties and criminal penalties are justified by incentive effects, that is, by the principle of deterrence. If you know that you are likely to be punished or fined for some behavior (including omissions, in some situations), then you are less likely to engage in that behavior. Penalties “send a message” to the person penalized and to others: “Don’t do this.”13 Experiments (e.g., Baron & Ritov, 1993) are hardly needed to demonstrate that these principles are not followed in the real world. Many people still advocate health insurance in which people pay premiums according to their individual “risk” even when that risk is beyond the individual’s control, hence not subject to incentive effects. (This practice is partially banned in the United States.) Compensation is often provided to relatives for “wrongful death” (but not for other deaths), even when the death in question reduces the utility of 12

13

As noted earlier in connection with the Asian disease problem, declining marginal utility of lives is difficult to justify by any account. In most but not all jurisdictions, tort penalties are used to fund compensation for victims. Theoretically, this political convenience is not necessary. Fines for deterrence could be paid into a general fund, and the fund could pay compensation when it is warranted, regardless of whether there is someone to sue.

Are Moral Judgments Rational?

money for them. And tort penalties are often levied even when the incentive effect leads to more harmful behavior (e.g., a lawsuit for side effects of a beneficial vaccine with rare side effects causes the company to withdraw the vaccine entirely; see Baron & Ritov, 1993). Likewise, criminal punishments are often inconsistent with the principle of deterrence (Carlsmith et al., 2002). Preferences for penalties are based more on the heinousness of the offense than on factors that should affect deterrence. For example, by utilitarian theory, the severity of punishment should be higher when the probability of detection is lower; this way, potential offenders are risking a larger loss in the unlikely event that they get caught. But probability of detection plays little role. Littering is lightly penalized but rarely detected.

8.2.4 Comparison of Moral Judgments to Consequence Judgments In some cases, such as the vaccination case described, the utilitarian answer is fairly clear. When the answer cannot be specified, a simple alternative approach for experimenters is to ask the subject which option, on the whole, has the best overall consequences for everyone affected. When the subject gives one answer to that question and a different answer to the question of what should be done, then we have pretty good evidence that the subject is giving a nonutilitarian answer, and we can go on to explore the reasons for this discrepancy. Baron et al. (2013) asked subjects what was best for their nation (or national group, in the case of Arab and Jewish Israelis), what was best on the whole, what was best for the other group (in Israel), and what their moral duty was. Many subjects thought it was their duty to go against their own judgment of what was best on the whole, in the direction of parochialism (in-group bias), and they indicated that they would do their duty in a real vote. Baron and Jurney (1993) asked subjects if they would vote for various reforms. In one experiment, 39 percent of the subjects said they would vote for a large tax on gasoline (to reduce global warming). Of those who would vote against the tax, 48 percent thought that on the whole it would do more good than harm; this group was thus admitting that they were not following their own perception of overall consequences. Of those subjects who would vote against the tax despite judging that it would do more good than harm, 85 percent cited the unfairness of the tax as a reason for voting against it (for instance, the burden would fall more heavily on people who drive a lot), and 75 percent cited the fact that the tax would harm some people (e.g., drivers). The latter subjects were apparently unwilling to harm some people in order to help others, even when they see the benefit as greater than the harm. This effect may be related to “omission bias.” Unlike other results summarized here, the principle in question is nonutilitarian but is endorsed by other moral theories. Yet, its application in the real world can make things worse.


8.2.5 Isolation Effects In “isolation” effects, people attend only (or primarily) to data or issues immediately before them (Camerer, 2000; Kahneman & Lovallo, 1993; Read et al., 1999). These effects are related to, or identical to, what others have called a focusing effect (Idson et al., 2004; Jones et al., 1998; Legrenzi et al., 1993). People know about indirect effects but do not consider them, or do not consider them enough. The idea came from the theory of mental models in reasoning (Legrenzi et al., 1993): People reason from mental models, and when possible they use a single, simple model that represents just the information they are given. Other factors are ignored or underused. McCaffery and Baron (2006) found apparent isolation effects in the evaluation of taxes and other policies. For example, people prefer “hidden” taxes, such as a tax on corporations, without thinking about where the money comes from (employees, consumers, stockholders). If people are asked who actually pays, they realize that such taxes are not “free.” Caplan (2007) reports similar effects for policies such as rent control, which have an immediate desirable effect on prices but an undesirable secondary effect on the supply of housing. Often people seem to evaluate policies (such as long prison sentences) in terms of their intended effects, even if those are not their main effects. These evaluations, working through the political system, affect real policies.

8.3 Moral Rules and Intuitions Many demonstrations of nonutilitarian biases, or their cognitive bias cognates, seem to result from intuitive responses rather than any sort of reflection. At issue is whether these biases would be reduced if people engaged in more thinking, or more thinking of a certain sort. Hare (1981; see www.utilitarianism.net/ for additional citations) proposed a related account. In defending utilitarianism, he proposed a two-level theory of moral thinking, with an intuitive and “critical” level. The critical level, which is a normative model in the sense discussed earlier, is utilitarian and is rarely approximated in human thinking but also rarely needed. Optimal decisions at this level are what would result if the decision maker could sympathetically represent to herself the preferences of all those affected and reach a decision as if the conflicts among their preferences were conflicts among her own preferences. At this level the distinction between case-by-case decisions and moral principles almost disappears, since each case specifies the decision for other cases that are similar in morally relevant ways and becomes a principle for just those cases (however few or many there may be). The principles and decisions accepted through such idealized reflection are those that would be rationally accepted by anyone, even if that person in real life would lose from application of the principle. The principles are thus universal, but each principle (unlike heuristics or intuitive rules) need not be simple. It includes all morally relevant features of a given case (those that could in principle affect the choice).


Hare argues that the term “moral” implies such universal agreement (an idea he attributes to Kant). Roughly, the idea is that we would balk at calling a principle “moral” if I applied it when you were in one position (e.g., the loser) and I was in another (the winner) but would not apply it if we switched positions (including with each position all its relevant features). This idea is embodied in the “veil of ignorance” (Rawls, 1971), which is a possible prescriptive intervention (Huang et al., 2021). Hare’s intuitive level, as I noted earlier, consists of intuitions that could be prescriptive principles worth following, or at least worth considering. But it also includes intuitions that may be the cause of harmful biases, such as “do no harm,” if it is understood as referring to actions but not omissions.

8.3.1 Intuitions and Dual Systems Many approaches to reasoning have relied on the idea of dual systems, intuitive and reflective, with at least the intuitive level being similar to Hare’s. That system, by various accounts, is automatic (uncontrollable but also free of demands on cognitive resources), driven by emotion (or affect), and based on associations rather than rules. The reflective system is controllable and requires some effort. Because it is controllable, it may or may not become active after the intuitive system has begun its work. In principle, if the subject knows that reasoning is required, the reflective system could begin right away. Kahneman (2011) argues that a corrective version of this theory, in which reflection begins after results of intuition are available and can function to correct the intuition, is relevant to a variety of tasks studied in the heuristics-and-biases tradition. The corrective theory has also been proposed as an account of moral judgment by Joshua Greene (e.g., 2008, p. 44, although elsewhere Greene is less specific about the ordering of events in time). Several lines of evidence seem to support the dual system theory for moral judgment. First, response times (RTs) for “personal” dilemmas, those that involve direct killing, such as the footbridge dilemma, are longer, especially when subjects choose the utilitarian option. A common finding in choice tasks is that RT is longer when the response is rarely made or when the options are similar in attractiveness (hence conflicting). These factors alone can explain the RT differences found. Note that the corrective model implies that RT is longer for utilitarian than for deontological responses when their probability is equal (which is also where the two responses are maximally conflicting). Baron and Gürçay (2017), in a meta-analysis of 26 experiments, estimated this RT for each response by assuming that each subject had an “ability” to make utilitarian responses, and each dilemma had a “difficulty” for making that response. (Thus, the footbridge problem is more “difficult” than the simple trolley case.) The two choices would be equally likely when ability was equal to difficulty, according to our measures. A plot of RT for each choice as a function of ability minus difficulty indeed showed the longest RT when this difference was zero, but, at this point, the utilitarian responses took no longer than the deontological responses.


Rosas et al. (2019) also found that RTs were determined mainly by conflict. These results are inconsistent with any form of the corrective model. Baron and Gürçay (2017) also noted that subjects who made more utilitarian responses had longer RTs on everything, a result consistent with the claim that reflection-impulsivity is correlated with utilitarian responding. Why this happens may depend on developmental processes that have already occurred before the experiment. For example, people who are generally reflective may come to favor utilitarian solutions over time.14 Other results concern the effects of time pressure or cognitive load, which, in some studies, seem to affect utilitarian responses but not deontological responses. A general problem with these studies is that the effects vary for different dilemmas, not only in magnitude but also in direction (as also found by Gürçay & Baron, 2017, despite finding no overall effect of time pressure vs. instructions to reflect). For example, to deal with load or time pressure, subjects may skip or skim the less salient parts of the printed description, and those may vary with how the dilemma is described. Researchers should at least test effects in ways that take into account the variance across dilemmas as well as across subjects, which most researchers have not done (for an exception, see Patil et al., 2021), and they should also try different ways of ordering the information in the dilemma. These sorts of results concerning time pressure and cognitive load have been difficult to replicate (e.g., Bago & De Neys, 2019). Rosas and Aguilar-Pardo (2020) found that utilitarian responses can occur under extreme time pressure. Moreover, studies that track the position of the pointer (mouse) during experiments with moral dilemmas do not show any tendency to switch from utilitarian to deontological responses during the time (usually 10–20 seconds) that the subject deliberates (Gürçay & Baron, 2017; Koop, 2013). In sum, the most plausible account is that, when presented with dilemmas that pit utilitarian and deontological responses against each other, people are aware of the conflict as soon as they understand the dilemma. Yet more reflective people, for as yet unknown reasons, are more likely to favor the utilitarian resolution to the conflict. This kind of account in terms of individual differences in reflection is not far from Greene’s two-system account, but it does not assume any sequential effects involving suppression of an early response by a later one, and it is thus consistent with the known results, and with versions of dual system theory that assume that the systems work in parallel rather than sequentially (e.g., Sloman, 1996). It is clear by any account that people differ in some sort of reflectiveness, and these differences are related to differences in responses to at least some moral dilemmas (Patil et al., 2021).
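The ability–difficulty analysis just described can be written out explicitly. What follows is a minimal sketch that assumes a logistic link between ability minus difficulty and choice probability; Baron and Gürçay do not commit to this particular functional form, and it is used here only to fix ideas. With subject ability $a_i$ and dilemma difficulty $d_j$,

$$
\Pr(\text{utilitarian response}_{ij}) = \frac{1}{1 + e^{-(a_i - d_j)}},
$$

so the two responses are equally likely, and conflict is maximal, when $a_i = d_j$. The sequential corrective model predicts that, at this point, $\mathrm{RT}_{\text{utilitarian}} > \mathrm{RT}_{\text{deontological}}$, because the utilitarian response would require overriding an initial intuition; the meta-analytic data instead showed RTs peaking at $a_i - d_j = 0$ with no difference between the two responses.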

14 A personal example: As a child I was puzzled by the proverb “two wrongs don’t make a right,” which suggests that punishment is wrong. Ultimately I figured out that punishment can be justified if it prevents future wrongs. Later I learned that this was the utilitarian solution to the puzzle.


8.4 Future Directions Much remains to be learned about moral reasoning. The reader who has gotten this far will not be surprised that I think this topic should be part of cognitive psychology, which has been studying reasoning more generally for well over a century. Many of the methods of psychology remain to be fully applied to moral reasoning. But moral reasoning is important for practical purposes too. It is tied up with politics and public policy. Political judgments of citizens are often moral judgments (see Chapter 22, this volume). These merit special attention because the actions or omissions of citizens affect other citizens and noncitizens at home and abroad. Many of the world’s problems, within and among nations, can be traced to policies approved by citizens. The utilitarian argument I made earlier applies here. If citizens collectively follow nonutilitarian moral intuitions, then we should not be surprised if the final results they influence are deficient, for all those affected, everywhere. Differences in thinking about politics arise in individual development (as studied by Adelson, Kohlberg, and many others; see, for example, Adelson, 1971, and Kohlberg, 1963) and in cultural evolution. Hallpike’s (2004) analysis, which is analogous to that of Kohlberg, suggests that something like developmental stages occurred over the course of cultural evolution, with the earlier stages still present. Early peoples, those who still live as they did, and young children do not distinguish among morality, laws and social conventions, and etiquette. They are just “the way we do things.” With the growth of cities and writing, codified laws came to exist and were soon “written in stone” or in parts of what is now the Old Testament. Similar developments may occur in early adolescence (depending on culture; see Haidt et al., 1993). The development of a concept of morality, independent of and outside of laws and conventions, came relatively recently in human history, possibly only a few thousand years ago, after writing became generally used. The concept of morality, like other concepts such as “science,” is not fully developed in human cultures. And the developments so far are not well understood by many people. For most who make the distinction between morality and convention now, it comes in adolescence. The existence of a concept of morality raises the possibility of rational thought about what it should be. It is apparent that culture has a large effect not only on moral beliefs but also on how (or whether) people reason about them, or about other issues such as politics (Baron et al., 2023). A question of interest is how cultural traditions persist over generations and over historical time (even within generations) and how they change. Attitudes toward homosexuality, for example, have changed enormously in the last 50 years, in some countries. And it is clear that there are cultural influences on beliefs about what good thinking is. One way to study cultural change over time is to examine written documents, both for their content and for the type of reasoning they exhibit. Some of this sort of work has been done (Suedfeld, 1985; Suedfeld et al., 2003), but it has been confined to documented records of groups, such as legislators who make speeches, that are not representative of any larger cultural tradition.


It is clear that education can be designed to encourage rational thinking (e.g., Baron, 1993). Liberal education at the university level is often explicit in its attempts to encourage questioning, consideration of diverse views, and understanding of the nature of expert knowledge. Many secondary schools do this too (e.g., Metz et al., 2020). Several efforts have been made to teach moral thinking in a way that views it as a type of thinking rather than a set of rules. Kohlberg, in particular, encouraged widespread experimentation with moral discussion in high schools around the world (Snarey & Samuelson, 2008). Much of this work disappeared with Kohlberg’s death and with claims that his ideas were biased against women (claims that were consistently shown to be unfounded, as Snarey and Samuelson point out). Education is one important domain where people’s thinking can be influenced. Others, probably to a lesser extent, are journalism and politics itself. Ultimately, individuals and cultures change from a variety of influences, and we cannot expect applied research on one domain or another to provide the key. Change is slow, but the world would benefit if people’s moral thinking became more rational.

Acknowledgments I thank Bertram Malle for extensive and very helpful comments on an earlier draft.

References Adelson, J. (1971). The political imagination of the young adolescent. Daedalus, 100, 1013–1050. Asch, D., Baron, J., Hershey, J. C., Kunreuther, H. C., Meszaros, J., Ritov, I., & Spranca, M. (1994). Omission bias and pertussis vaccination. Medical Decision Making, 14(2), 118–123. Bago, B., & De Neys, W. (2019). The intuitive greater good: Testing the corrective dual process model of moral cognition. Journal of Experimental Psychology: General, 148(10), 1782–1801. Baron, J. (1992). The effect of normative beliefs on anticipated emotions. Journal of Personality and Social Psychology, 63(2), 320–330. Baron, J. (1993). Why teach thinking? – An essay. Applied Psychology: An International Review, 42(3), 191–237. Baron, J. (1995). Blind justice: Fairness to groups and the do-no-harm principle. Journal of Behavioral Decision Making, 8(2), 71–83. Baron, J. (2012a). Parochialism as a result of cognitive biases. In R. Goodman, D. Jinks, & A. K. Woods (Eds.), Understanding social action, promoting human rights (pp. 203–243). Oxford University Press.


Baron, J. (2012b). The “culture of honor” in citizens’ concepts of their duty as voters. Rationality and Society, 24(1), 37–72. Baron, J., & Gürçay, B. (2017). A meta-analysis of response-time tests of the sequential two-systems model of moral judgment. Memory and Cognition, 45(4), 566–575. Baron, J., Isler, O., & Yilmaz, O. (2023). Actively open-minded thinking and the political effects of its absence. In V. Ottati & C. Stern (Eds.), Divided: Open-mindedness and dogmatism in a polarized world (pp. 162–182). Oxford University Press. Baron, J., & Jurney, J. (1993). Norms against voting for coerced reform. Journal of Personality and Social Psychology, 64(3), 347–355. Baron, J., & Leshner, S. (2000). How serious are expressions of protected values. Journal of Experimental Psychology: Applied, 6(3), 183–194. Baron, J., & Ritov, I. (1993). Intuitions about penalties and compensation in the context of tort law. Journal of Risk and Uncertainty, 7, 17–33. Baron, J., & Ritov, I. (1994). Reference points and omission bias. Organizational Behavior and Human Decision Processes, 59(3), 475–498. Baron, J., & Ritov, I. (2009). Protected values and omission bias as deontological judgments. In D. M. Bartels, C. W. Bauman, L. J. Skitka, & D. L. Medin (Eds.), Moral judgment and decision making, Vol. 50 in B. H. Ross (Series Ed.), The psychology of learning and motivation (pp. 133–167). Academic Press. Baron, J., Ritov, I., & Greene, J. D. (2013). The duty to support nationalistic policies. Journal of Behavioral Decision Making, 26(2), 128–138. Bell, D. E., Raiffa, H., & Tversky, A. (Eds.). (1988). Decision making: Descriptive, normative, and prescriptive interactions. Cambridge University Press. Bhatia, S., Walasek, L., Slovic, P., & Kunreuther, H. (2021). The more who die, the less we care: Evidence from natural language analysis of online news articles and social media posts. Risk Analysis, 41(1), 179–203. Bornstein, G., & Ben-Yossef, M. (1994). Cooperation in intergroup and single-group social dilemmas. Journal of Experimental Social Psychology, 30(1), 52–57. Breyer, S. (1993). Breaking the vicious circle: Toward effective risk regulation. Harvard University Press. Bruner, J. S., Goodnow, J. J., & Austin, G. A. (1956). A study of thinking. Wiley. Buchan, N. R., Brewer, M., Grimalda, G., Wilson, R., Fatas, E., & Foddy, M. (2011). Global social identity and global cooperation. Psychological Science, 22(6), 821–828. Buchan, N. R., Grimalda, G., Wilson, R., Brewer, M., Fatas, E., & Foddy, M. (2009). Globalization and human cooperation. Proceedings of the National Academy of Sciences, 106(11), 4138–4142. Camerer, C. F. (2000). Prospect theory in the wild: Evidence from the field. In D. Kahneman & A. Tversky (Eds.), Choices, values, and frames (pp. 288–300). Cambridge University Press. Caplan, B. (2007). The myth of the rational voter: Why democracies choose bad policies. Princeton University Press. Carlsmith, K. M., Darley, J. M., & Robinson, P. H. (2002). Why do we punish? Deterrence and just deserts as motives for punishment. Journal of Personality and Social Psychology, 83(2), 284–299. Chapman, L. J., & Chapman, J. P. (1969). Illusory correlation as an obstacle to the use of valid psychodiagnostic signs. Journal of Abnormal Psychology, 74(3), 271–280.


Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, 108(2), 353–380. Cushman, F., & Young, L. (2011). Patterns of moral judgment derive from nonmoral psychological representations. Cognitive Science, 35(6), 1052–1075. Deutsch, M. (1975). Equity, equality, and need: What determines which value will be used as the basis of distributive justice? Journal of Social Issues, 31(3), 137–149. Edlin, A., Gelman, A., & Kaplan, N. (2007). Voting as a rational choice: Why and how people vote to improve the well-being of others. Rationality and Society, 19(3), 293–314. Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics, 75(4), 643–699. Foot, P. (1978). The problem of abortion and the doctrine of the double effect. In P. Foot, Virtues and vices and other essays in moral philosophy (pp. 19–32). University of California Press. (Originally published 1967 in Oxford Review, no. 5) Gigerenzer, G., Todd, P. M., & the ABC Research Group (1999). Simple heuristics that make us smart. Oxford University Press. Greene, J. D. (2008). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 3. The neuroscience of morality: Emotion, brain disorders, and development (pp. 36–79). MIT Press. Greene, J. D., Cushman, F. A., Stewart. L. E., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2009). Pushing moral buttons: The interaction between personal force and intention in moral judgment. Cognition, 111(3), 364–371. Gürçay, B., & Baron, J. (2017). Challenges for the sequential two-systems model of moral judgment. Thinking and Reasoning, 23(1), 49–80. Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or, is it wrong to eat your dog? Journal of Personality and Social Psychology, 65(4), 613–628. Hallpike, C. R. (2004). The evolution of moral understanding. Prometheus Research Group. Hanselmann, M., & Tanner, C. (2008). Taboos and conflicts in decision making: Sacred values, decision difficulty, and emotions. Judgment and Decision Making, 3(1), 51–63. Hare, R. M. (1981). Moral thinking: Its levels, method, and point. Oxford University Press (Clarendon Press). Henle, M. (1962). On the relation between logic and thinking. Psychological Review, 69(4), 366–378. Harris, R. J., & Joyce, M. A. (1980). What’s fair? It depends on how you phrase the question. Journal of Personality and Social Psychology, 38(1), 165–179. Huang, K., Bernhard, R. M., Barak-Corren, N., Bazerman, M. H., & Greene, J. D. (2021). Veil-of-ignorance reasoning mitigates self-serving bias in resource allocation during the COVID-19 crisis. Judgment and Decision Making, 16(1), 1–19. Idson, L. C., Chugh, D., Bereby-Meyer, Y., Moran, S., Grosskopf, B., & Bazerman, M. (2004). Overcoming focusing failures in competitive environments. Journal of Behavioral Decision Making, 17(3), 159–172. Johnson, E. J., Hershey, J. C., Meszaros, J., & Kunreuther, H. (1993). Framing, probability distortions, and insurance decisions. Journal of Risk and Uncertainty, 7, 35–51.


Jones, S. K., Frisch, D., Yurok, T. J., & Kim, E. (1998). Choices and opportunities: Another effect of framing on decisions. Journal of Behavioral Decision Making, 11(3), 211–226. Kahneman, D. (2011). Thinking, fast and slow. Farrar, Strauss and Giroux. Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49–81). Cambridge University Press. Kahneman, D., & Lovallo, D. (1993). Timid choices and bold forecasts: A cognitive perspective on risk taking. Management Science, 39(1), 17–31. Kahneman, D., & Ritov, I. (1994). Determinants of stated willingness to pay for public goods: A study of the headline method. Journal of Risk and Uncertainty, 9, 5–38. Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291. Kohlberg, L. (1963). The development of children’s orientations toward a moral order. I. Sequence in the development of human thought. Vita Humana, 6(1–2), 11–33. Koop, G. J. (2013). An assessment of the temporal dynamics of moral decisions. Judgment and Decision Making, 8(5), 527–539. Kunreuther, H., & Slovic, P. (1978). Economics, psychology, and protective behavior. American Economic Review, 68(2), 64–69. Legrenzi, P., Girotto, V., & Johnson-Laird, P. N. (1993). Focussing in reasoning and decision making. Cognition, 49(1–2), 37–66. Malle, B. F. (2021). Moral judgments. Annual Review of Psychology, 72, 293–318. McCaffery, E. J. (1997). Taxing women. University of Chicago Press. McCaffery, E. J., & Baron, J. (2004). Framing and taxation: Evaluation of tax policies involving household composition. Journal of Economic Psychology, 25(6), 679–705. McCaffery, E. J., & Baron, J. (2006). Isolation effects and the neglect of indirect effects of fiscal policies. Journal of Behavioral Decision Making, 19(4), 1–14. Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical analysis and a look at the evidence. University of Minnesota Press. Metz, S. E., Baelen, R. N., & Yu, A. (2020). Actively open-minded thinking in American adolescents. Review of Education, 8(3), 768–799. Mill, J. S. (1859). On liberty. J. W. Parker & Son. Patil, I., Zucchelli, M. M., Kool, W., Campbell, S., Fornasier, F., Calò, M., . . . Cushman, F. (2021). Reasoning supports utilitarian resolutions to moral dilemmas across diverse measures. Journal of Personality and Social Psychology, 120(2), 443–460. Piaget, J. (1932). The moral judgment of the child. The Free Press. Polya, G. (1945). How to solve it: A new aspect of mathematical method. Princeton University Press. Rawls, J. (1971). A theory of justice. Harvard University Press. Read, D., Loewenstein, G., & Rabin, M. (1999). Choice bracketing. Journal of Risk and Uncertainty, 19, 171–197. Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: Omission bias and ambiguity. Journal of Behavioral Decision Making, 3(4), 263–277.


Rosas, A., & Aguilar-Pardo, D. (2020). Extreme time-pressure reveals utilitarian intuitions in sacrificial dilemmas. Thinking and Reasoning, 26(4), 534–551. Rosas, A., Bermúdez, J. P., & Aguilar-Pardo, D. (2019). Decision conflict drives reaction times and utilitarian responses in sacrificial dilemmas. Judgment and Decision Making, 14(5), 555–564. Ross, W. D. (1930). The right and the good. (Reprinted 2002 by Oxford University Press) Roth, A. E. (2007). Repugnance as a constraint on markets. Journal of Economic Perspectives, 21(3), 37–58. Royzman, E. B., & Baron, J. (2002). The preference for indirect harm. Social Justice Research, 15(2), 165–184. Savage, L. J. (1954). The foundations of statistics. Wiley. Sherman, G. D., Vallen, B., Finkelstein, S. R., Connell, P. M., Boland, W. A., & Feemster, K. (2021). When taking action means accepting responsibility: Omission bias predicts parents’ reluctance to vaccinate due to greater anticipated culpability for negative side effects. Journal of Consumer Affairs, 55(4), 1660–1681. Singer, P. (1982). The expanding circle: Ethics and sociobiology. Farrar, Strauss & Giroux. Singer, P. (2002). R. M. Hare’s achievements in moral philosophy. Utilitas, 14(3), 309–317. Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1), 3–22. Slovic, P. (2007). “If I look at the mass I will never act”: Psychic numbing and genocide. Judgment and Decision Making, 2(2), 79–95. Snarey, J., & Samuelson, P. (2008). Moral education in the cognitive developmental tradition: Lawrence Kohlberg’s revolutionary ideas. In L. P. Nucci & D. Narvaez (Eds.), Handbook of moral and character education (pp. 53–79). Routledge. Spranca, M., Minsk, E., & Baron, J. (1991). Omission and commission in judgment and choice. Journal of Experimental Social Psychology, 27(1), 76–105. Suedfeld, P. (1985). APA presidential addresses: The relation of integrative complexity to historical, professional, and personal factors. Journal of Personality and Social Psychology, 49(6), 1643–1651. Suedfeld, P., Guttieri, K., & Tetlock, P. E. (2003). Assessing integrative complexity at a distance: Archival analyses of thinking and decision making. In J. M. Post (Ed.), The psychological assessment of political leaders: With profiles of Saddam Hussein and Bill Clinton (pp. 246–270). University of Michigan Press. Sunstein, C. R. (2002). Risk and reason: Safety, law, and the environment. Cambridge University Press. Sunstein, C. R. (2005). Moral heuristics (with commentary). Behavioral and Brain Sciences, 28(4), 531–573. Tetlock, P. E. (2003). Thinking the unthinkable: Sacred values and taboo cognitions Trends in Cognitive Sciences, 7(7), 320–324. Tetlock, P. E., Mellers, B. A., & Scoblic, J. P. (2017). Sacred versus pseudo-sacred values: How people cope with taboo trade-offs. American Economic Review, 107(5), 96–99. Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.


Tversky, A. (1967). Additivity, utility, and subjective probability. Journal of Mathematical Psychology, 4(2), 175–202. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458. Ubel, P. A., & Loewenstein, G. (1996). Distributing scarce livers: The moral reasoning of the general public. Social Science and Medicine, 42(7), 1049–1055.


9 Moral Categorization and Mind Perception Philip Robbins

A central task for moral psychology is articulating the structure of the moral categories to which individuals, human or otherwise, are assigned in the context of everyday moral reasoning. Of these categories, two stand out as especially foundational: moral agents and moral patients (Gray & Wegner, 2009; Schein & Gray, 2018). Moral agents can commit moral wrongs and be held morally responsible for their actions; moral patients can be morally wronged and their interests given moral consideration.1 What research on moral categorization aims to understand, broadly speaking, is the basis on which individuals are assigned to these categories. How do we go about determining whether an individual has moral agency or moral patiency? Addressing this question leads directly to the study of mind perception, that is, the attribution of mental capacities and traits (Epley & Waytz, 2010; Gray et al., 2012). The link between moral categorization and mind perception is not surprising, given the extent to which social cognition in general, and moral cognition in particular, involves the representation of other minds. As we will see in this chapter, however, the interplay between attributions of moral status and attributions of mindedness is multifaceted and complex. Attaining a proper understanding of the connection between these types of attribution may require moving beyond standard models of mind perception, which tend to focus either on the representation of mental capacities or the representation of mental traits (i.e., mental capacities that are regularly exercised), to a hybrid model that encompasses both capacities and traits. (For clarification of the distinction between capacities and traits, see Section 9.1.1.) The structure of the chapter is as follows. Section 9.1 is an overview of research on mind perception, focusing on alternative accounts of the attribution of mental capacities and traits. Sections 9.2 and 9.3 address the role of mind perception in the categorization of individuals as moral patients and moral agents, focusing on how the attribution of mental states and capacities influences the attribution of moral status. Section 9.4 concludes the chapter with a brief discussion of how empirical study of the causal nexus between mind perception and moral categorization bears on philosophy, law, and research on artificial intelligence (AI).

1 A third moral category, less studied than the first two, is that of standing to blame, the status assigned to individuals who are eligible to evaluate the moral wrongness of actions (Friedman, 2013; Todd, 2019). Though this topic has attracted considerable attention in recent years, especially in philosophy, it lies beyond the scope of this chapter.

9.1 Mind Perception For social beings like us, few distinctions are more basic than the distinction between things that have minds (e.g., people, pets) and things that do not (e.g., pebbles, biscuits). This makes sense, given that the ability to think about minds plays an essential role in social cognition, enabling us to predict, understand, and influence the behavior of others (Waytz et al., 2010). No less important than determining whether something has a mind, however, is determining what kind of mind it has. The idea here is that people tend to think of psychological capacities and traits as clustering together in a particular way and that the pattern of clustering reveals the intuitive ontology of the mental, or what Gray et al. (2007) call the “dimensions of mind perception.” Theories of mind perception vary with respect to the number of dimensions they posit and the mental features that lie on those dimensions.

9.1.1 Two-Dimensional Models The idea that we perceive minds in multiple dimensions has deep philosophical roots. In discussions of the metaphysics of consciousness, for example, it has long been assumed that some mental states (the phenomenal type) are intrinsically linked to conscious experience, whereas other mental states (the intentional type) are not (Block, 1995; Chalmers, 1995; Nagel, 1974). The typological distinction between phenomenal and intentional states figured prominently in later work on consciousness that turned away from metaphysics proper toward folk metaphysics. It was argued, for example, that the psychological origin of the “hard problem” of explaining how brain activity gives rise to conscious experience could be traced to features of our cognitive architecture that make it difficult for us to think about the phenomenal mind in mechanistic terms, at least in an intuitively satisfying way (Arico et al., 2011; Robbins & Jack, 2006). Early work in the experimental philosophy of consciousness also validated a basic distinction in folk metaphysics between phenomenal and intentional aspects of mind – a precursor to the idea that mind perception operates in two dimensions, only one of which is tied to conscious experience (Knobe & Prinz, 2008). In psychology, the basic premise of two-dimensional accounts of mind perception is that people intuitively think of mental capacities and traits as belonging to one of two categories or clusters. Some of these accounts focus primarily on the representation of mental traits, whereas other accounts focus primarily on the representation of mental capacities. On this point it is important to note that, though the terms capacity and trait are sometimes used interchangeably, they are not synonymous. To illustrate the distinction with an example: Having the capacity for cooperation entails having the potential to interact with others in a cooperative way (Vetter, 2013), whereas having the trait of cooperativeness entails possessing the capacity for cooperation and exercising that capacity on a regular basis (Vollmer, 1993).


In general, traits are grounded in, but not identical to, behavioral capacities. Cooperativeness, for example, is grounded in the capacity to engage in a certain kind of prosocial behavior (i.e., cooperation). Since it is possible to have a capacity without regularly exercising it, however, mere possession of a capacity does not entail possession of traits constitutively linked to that capacity. Thus, the ascription conditions for mental traits (like cooperativeness) are more restrictive than the ascription conditions for the mental capacities in which those traits are grounded (like cooperation). A variety of trait-based models of mind perception have been advanced by social psychologists. These models originate in research on person perception rather than research on mind perception, which tends to focus on mental capacities and exclude mental traits – even though mental traits are at least as constitutive of the mind as mental capacities are, and arguably more so. The warmth–competence model distinguishes between traits like friendliness, helpfulness, and trustworthiness and traits like intelligence, creativity, and efficacy (Fiske et al., 2006; Fiske et al., 2002). The agency–communion model exhibits a similar dichotomy, with traits like ambition, dominance, and independence on the agency side of the ledger and traits like cooperativeness, trustworthiness, and nurturance on the communion side (Abele & Wojciszke, 2007). Interestingly, research on these two models suggests that the warmth/communion dimension has separable moral and nonmoral components and that traits associated with the moral component, like trustworthiness, play a more important role in person perception than traits associated with the nonmoral component, like friendliness (Brambilla & Leach, 2014; Goodwin et al., 2014). In the dehumanization literature, the human nature–human uniqueness model distinguishes between universal (human nature) traits like emotionality, interpersonal warmth, and curiosity and culturally variable (human uniqueness) traits like civility, refinement, and moral sensibility (Haslam, 2006; Haslam & Loughnan, 2014; see also Chapter 14 in this volume). By contrast with accounts that emphasize the representation of mental traits, the dominant account of mind perception in moral psychology, the experience–agency model, focuses on the representation of mental capacities (Gray et al., 2007). In a landmark study, participants rated a cast of 13 characters (e.g., a normal adult, a child, a dog, a robot) pairwise on one of 18 mental capacities. On each trial, participants were shown pictures of two of the characters, each picture accompanied by a brief verbal description, and asked to indicate on a five-point scale which of the two characters was more likely to have that capacity. These comparative ratings were then aggregated across trials to determine a mean relative rating for each character, and the process was repeated for each capacity. A principal components analysis of correlations between ratings of different capacities across characters suggested a divide between two dimensions of mental life: experience (e.g., pain, desire, and joy) and agency (e.g., self-control, memory, and planning).


Of the two components, experience was primary, including most of the capacities (> 60 percent) and accounting for most of the variance in the data (88 percent). Though the two dimensions were strongly correlated (r = 0.90) (Piazza et al., 2014), most characters scored higher on one dimension than the other, and in some cases the difference was dramatic – for example, the robot and God characters were attributed a lot of agency but relatively little experience, whereas the infant and the dog characters were attributed a lot of experience but relatively little agency. The correlation between agency and experience was almost perfect (r = 0.97), however, when nonnatural and atypical characters (e.g., God, a robot, a dead person, a fetus) were excluded (Piazza et al., 2014). Consistent with this finding, factor analysis of data from a later study of mind perception with a large set of animal targets revealed a single factor accounting for 29–48 percent of the variance (Bastian, Loughnan, et al., 2012), suggesting that a one-dimensional model of mind perception may be sufficient for some natural, ordinary entities (e.g., wild and domestic animals). The experience–agency framework, at least as characterized by Gray et al. (2007), has conceptual shortcomings. First, the range of capacities specified for each dimension is relatively narrow. It leaves out a host of capacities that might also feature in the intuitive ontology of the mental, such as perception, reasoning, self-awareness, and empathy (Malle, 2019; Weisman et al., 2017, 2021). Such omissions are especially troublesome insofar as some of these capacities (e.g., perception, self-awareness, and empathy) are not obviously more experiential than agentic, or vice versa. Second, some capacities in Gray et al.’s two-dimensional space (thought, morality) are specified at such a high level of generality that it is difficult to determine which psychological functions they encompass (Malle, 2019). Indeed, one of them (personality) is so abstract and multifaceted that it does not seem like a capacity at all. The experience–agency framework is a powerful tool for mapping the intuitive ontology of the mind, yet Gray et al.’s (2007) empirical findings support it only up to a point. First, the observation that participants rate characters similarly on pain and fear, say, does not show that they think of pain and fear as related at the level of basic ontology; it might mean simply that they think of characters as similar in terms of the possession of these capacities (Weisman et al., 2017). Indeed, though Weisman et al. (2017) replicated Gray et al.’s (2007) findings using the same design with a larger set of mental capacities, they found that when participants rated the capacities of characters individually rather than pairwise, the ratings did not pattern as predicted by the experience–agency model. For this reason, drawing conclusions about the intuitive ontology of mind from perceived patterns of resemblance among different entities is a risky business. A second issue concerns the method used by Gray and colleagues to partition the set of capacities, associating each capacity with the factor it loaded more strongly on. For the most part, this procedure was unproblematic, since 5 of the 18 capacities (hunger, fear, pain, pleasure, and rage) loaded much more strongly on experience than agency, and 5 capacities (self-control, morality, memory, emotion recognition, and planning) loaded much more strongly on agency than experience.


The remaining 8 capacities (desire, personality, consciousness, pride, embarrassment, joy, communication, and thought), however, loaded about equally on both factors, suggesting that many of the capacities surveyed do not fit neatly into either category (Malle, 2019). That said, empirical support for the experience–agency model is not limited to Gray et al.’s (2007) study, which was replicated by Weisman and colleagues (2017) using an expanded set of capacities. Another source of evidence for the model comes from studies of how people rate the naturalness of different kinds of mental state ascriptions to group entities, such as corporations (Knobe & Prinz, 2008). In one such study, participants were asked to rate the naturalness of sentences ascribing mental states to a fictional entity called the Acme Corporation, some of which ascribed agentic states (e.g., “Acme Corp. has just decided to adopt a new marketing plan”) while others ascribed experiential states (e.g., “Acme Corp. is feeling excruciating pain”). Analysis of the responses showed that ascriptions of agentic states to the corporate group entity were rated more natural than ascriptions of experiential states. In another study, participants were randomly assigned to either the “feeling” condition, in which explicitly experiential states were ascribed (e.g., “Acme Corp. is feeling upset”), or the “no-feeling” condition, in which the ascriptions were not explicitly experiential (e.g., “Acme Corp. is upset about the court’s recent ruling”). Here, naturalness ratings of the “no feeling” ascriptions were higher than ratings of the “feeling” ascriptions. A similar asymmetry has been observed in the case of ascription of mental states to robots (Huebner, 2010; Sytsma & Machery, 2010). In both cases the asymmetry may have arisen because the target of ascription was physically constituted differently than conscious beings typically are, that is, either not in a single biological body (corporations) or not biologically at all (robots). This suggestion – what Phelan et al. (2013) call the discontinuity hypothesis – dovetails with the fact that participants in Gray and colleagues’ (2007) study seemed to think of the mind of God, an immaterial entity lacking any sort of physical instantiation, as much richer in agentic capacities than experiential ones. The latter finding, however, is somewhat at odds with the literature on anthropomorphism, which suggests that people tend to think of God as rich in both agency and experience (Barrett & Keil, 1996). In one study, for example, a majority of participants attributed to God a wide range of both agentic and experiential capacities, including emotions (e.g., happiness, sadness, and worry), with only a few capacities (e.g., smell, taste, pain) attributed by a minority of participants (< 30 percent) (Shtulman & Lindeman, 2016, Study 1). What’s more, support for the discontinuity hypothesis is limited, especially as it pertains to the attribution of mental capacities to groups. In Knobe and Prinz’s second study, for example, the two types of mental state ascriptions rated by participants differed in grammatical complexity: ascriptions in the “no-feeling” condition included a prepositional phrase, but ascriptions in the “feeling” condition did not.


As a result, the effect of the manipulation may have been due to an experimental artifact (Arico, 2010; Sytsma & Machery, 2009). Further, studies by Phelan and colleagues (2013) suggest that ascriptions of mental states to groups are typically understood distributively rather than collectively, that is, as ascriptions of mental states to the group’s members, not to the group per se. This opens up the possibility that differences in naturalness ratings between experiential and agentic ascriptions to groups stem from the fact that, depending on context (e.g., the type of group at issue and the role played by the individuals comprising it), attributing agentic states to the members of a group may seem more appropriate than attributing experiential states to them (Phelan et al., 2013). Differences in naturalness ratings between experiential and agentic ascriptions to robots may be explained in a similar fashion, in terms of observers’ tacit assumptions about the robot’s function (Buckwalter & Phelan, 2013). Given these considerations, the strength of evidence for the experience–agency model derived from studies of how people perceive the mindedness of corporations and artifacts is open to question. To sum up: Two-dimensional models of mind perception posit either a dichotomous representation of mental traits (warmth vs. competence, communion vs. agency, human nature vs. human uniqueness) or a dichotomous representation of mental capacities (experience vs. agency). Of these models, the experience–agency model has been far and away the most influential in moral psychology. Whether the distinction between experience and agency marks a fundamental divide in the intuitive ontology of the mental, however, is less clear.

9.1.2 Three-Dimensional Models Recent research on mind perception has pointed in the direction of a three-dimensional, rather than two-dimensional, account of how we attribute mental capacities. Results from studies in which participants rated one or more characters individually on a range of mental capacities – rather than a set of characters pairwise on a single capacity, as in the paradigm used by Gray et al. (2007) – yielded an alternative picture that Weisman et al. (2017, 2021) call the body–heart–mind model. The three dimensions in this model are composed primarily of somatosensory capacities, like hunger, pain, and pleasure (body); social-emotional capacities, like empathy, pride, and guilt (heart); and perceptual and inferential capacities, such as seeing, reasoning, and decision making (mind). An analogous three-dimensional structure was identified in studies by Malle (2019), in which principal components analysis of participants’ ratings of individual characters revealed a tripartite distinction between affect (e.g., hunger, pleasure, and anger), moral and mental regulation (e.g., empathy, deliberation, and planning), and reality interaction (e.g., perception, communication, and logical reasoning). These three-dimensional models differ from the experience–agency model in two important ways. First, they span, and are empirically based on, a much wider set of mental capacities. Second, they retain the experience dimension of the experience–agency model but split the agency dimension into two subdimensions, each of which includes experiential elements.


In Weisman et al.’s (2017) model, the capacities grouped under body are all experiential, whereas both heart and mind are characterized by a mix of agentic and experiential capacities. Likewise, the first dimension of Malle’s (2019) model (affect) is composed of capacities linked to experience, and the second and third dimensions (moral and mental regulation and reality interaction) have both agentic and experiential features. Due to the dominance of the experience–agency model, however, the explanatory power of three-dimensional models of mind perception for research on moral categorization remains largely unexplored. To sum up: Existing models of mind perception vary in dimensionality, content, and scope. Two-dimensional (dichotomous) models can be either trait-based or capacity-based, and they tend to include a relatively small number of mental features on each dimension. Three-dimensional (trichotomous) models tend to be capacity-based, and they encompass a larger set of features on each dimension. In Section 9.2 we begin to explore how some of these models have been applied to the study of moral categorization, focusing on the experience–agency model.

9.2 Moral Patiency According to a standard definition of moral patiency, something is a moral patient just in case it can be morally wronged (Goodwin, 2015; Piazza et al., 2014; Sytsma & Machery, 2012). Three features of this definition are worth noting. First, to be a moral patient is to have a kind of moral standing, namely, the moral standing associated with the possession of rights. Hence, to be a moral patient is to be an individual whose interests or well-being deserve the sort of consideration that grounds a duty on the part of others (Raz, 1984).2 Second, being a moral patient is distinct from being a potential target of morally wrong action. This distinction is required by the fact that a morally wrong action directed toward one individual could be morally wrong because it resulted in a wrong being done to a different individual. For example, it might be wrong to cut down a tree in your neighbor’s yard without their permission because doing so would result in a wrong being done to your neighbor, not because of a wrong being done to the tree.3 Third, the concept of moral patiency is normative, not descriptive. To be a moral patient is to possess a type of intrinsic value that governs how one ought (and ought not) to be treated by others; whether one is treated in a way that meets that standard is irrelevant. For this reason, the concept of moral patiency should not be conflated with psychological patiency (Goodwin, 2015).4 To be a psychological patient is to possess the mental capacities that distinguish sentient from nonsentient beings, namely, the capacities associated with conscious experience (e.g., pain, pleasure, joy). Unlike the concept of moral patiency, the concept of psychological patiency is descriptive, not normative; there is nothing intrinsically evaluative about it. The importance of maintaining the distinction between moral patiency and psychological patiency will become clear later on, when we review research on the various factors affecting the categorization of individuals as moral patients, which clearly transcend psychological patiency. Indeed, as we will see, some of the factors that influence judgments of moral patiency transcend the psychological realm altogether (hence, transcending the domain of mind perception). Note that, in speaking of the categorization of individuals as moral patients, we are tacitly assuming that moral patiency can be understood as a property that an individual either has or lacks, as opposed to a property that admits of degrees. Whether this assumption is correct from the standpoint of normative theory – that is, from the perspective of theorizing about what qualifies something as a moral patient – is a matter of controversy (DeGrazia, 2008). Fortunately, this is not a debate we need to enter, since our focus is on ordinary people’s attribution of moral patiency. The question for us is not: What characteristics determine whether an individual can be morally wronged and whether its interests deserve moral consideration? Our question is this: What characteristics contribute to ordinary people’s perception of an individual as something that can be morally wronged and ordinary people’s perception of an individual as something whose interests morally matter? In addressing the latter question, we can remain neutral in the philosophical debate about whether moral patiency admits of degrees. Neutrality is an option here because our concern is with everyday attributions of moral patiency, not moral patiency itself.5

2 On this conception of moral patiency, only minded entities can be moral patients, since having interests requires having desires and goals, and having desires and goals requires having a mind. More expansive conceptions of moral patiency (e.g., moral considerability), according to which moral patiency does not require mindedness, have been explored in the philosophical and psychological literature (Bastian et al., 2023; Callicott, 1980; Goodpaster, 1978). Since our concern in this chapter is with the role of mind perception in moral categorization, those alternative conceptions of moral patiency will not be discussed here.

3 The concept of an indirect duty is relevant here (Kant, 1797/1996). According to Kant, it would be wrong to cut down the tree because doing so would violate a duty owed to your neighbor, not because it would violate a duty owed to the tree. Any duty to the tree is indirect, derived from a duty to the tree’s owner. On this view, while both the tree and its owner have a kind of moral standing, the moral standing of the tree is merely extrinsic, existing only in relation to the intrinsic moral standing of the tree’s owner.

4 The literature on moral typecasting, for example, includes several studies in which moral patiency is operationalized in terms of experience; the rationale for this being that moral patiency and experience are strongly correlated (Gray & Wegner, 2009, 2011). This operationalization is problematic, insofar as it collapses the distinction between the normative (moral) and descriptive (psychological) forms of patiency. Failing to observe this distinction tips the scales in favor of a particular view of moral patiency, the correctness of which is not a settled matter.

5 Of course, one might argue that perceived moral patiency cannot be conceptualized in binary terms, that is, it must be understood as a matter of degree. Note, however, that while one individual might be perceived as more patient-like than another in a given context, this does not necessarily mean that the two individuals are perceived as having different degrees of patiency in that context. It might mean only that they are perceived to differ in degree of similarity to the prototype of a moral patient, or that they differ in the degree to which their moral patiency is salient to the perceiver. Thus, while it might seem that perceived moral patiency must be a matter of degree, it can also be understood in binary terms, as a category or type of individual.


That said, philosophical theorizing about moral patiency (what qualifies something as a moral patient) provides a natural starting point for psychological theorizing about the attribution of moral patiency (what causes something to be seen as a moral patient). What stands out in the philosophical literature on the topic is a pair of diametrically opposed views. The first view, associated with a utilitarian perspective in ethics, is that psychological patiency (i.e., sentience) is necessary and sufficient for moral patiency (Bentham, 1789/1970; Bernstein, 1998; Singer, 1989). The second view, associated with a deontological perspective in ethics and endorsed by certain versions of social contract theory, is that psychological agency, or the suite of high-level cognitive capacities required for rational thought and behavior (e.g., deliberative reasoning, self-consciousness, autonomy), is necessary and sufficient for moral patiency (Carruthers, 1992; Kant, 1785/1998, 1797/1996). The contrast between these views could hardly be starker. Regarding the moral value of animals, Bentham wrote: “The question is not, Can they reason? Nor, Can they talk? But, Can they suffer?” (Bentham, 1789/1970, p. 283n). For Kant, by contrast, animals “have only a relative worth, as means, and are therefore called things, whereas rational beings are called persons because their nature . . . marks them out as an end in itself” (Kant, 1797/1996, p. 79). Both the utilitarian and deontological perspectives have informed psychological research on the attribution of moral patiency by suggesting ways in which mind perception might contribute to the categorization of individuals as moral patients: first, via the perception of experiential capacities, or psychological patiency; second, via the perception of agentic capacities, or psychological agency. One line of evidence for the relevance of both these dimensions of mind to moral categorization comes from Gray et al.’s (2007) landmark study of mind perception. Participants in this study, in addition to making comparative judgments of characters’ mental capacities, were also asked to make comparative judgments of characters’ moral status, both with respect to an indirect measure of moral agency (“If both characters had caused a person’s death, which one do you think would be more worthy of punishment?”) and an indirect measure of moral patiency (“If you were forced to harm one of these characters, which one would it be more painful for you to harm?”).6 Both moral categories were positively correlated with the two dimensions of mind revealed by Gray et al.’s factor analysis, but moral agency was more strongly correlated with agency than experience (r = 0.82 vs. r = 0.22), and moral patiency was more strongly correlated with experience than agency (r = 0.85 vs. r = 0.26).

6

not necessarily mean that the two individuals are perceived as having different degrees of patiency in that context. It might mean only that they are perceived to differ in degree of similarity to the prototype of a moral patient, or that they differ in the degree to which their moral patiency is salient to the perceiver. Thus, while it might seem that perceived moral patiency must be a matter of degree, it can also be understood in binary terms, as a category or type of individual. The measure of moral patiency used in this study is potentially problematic, insofar as it lacks a normative component. A better (more direct) question would have been something along the lines of: “If you were forced to harm one of these characters, which one would it be more wrong for you to harm?” or: “If both of these characters were at risk of being harmed, which one do you think it would be more important to protect from that harm?” That said, normative measures of moral patiency, such as entitlement to protection from harm, tend to correlate strongly with nonnormative measures, like concern for an individual’s welfare (Goodwin, 2015). Given this

Moral Categorization and Mind Perception

by Gray et al.’s factor analysis, but moral agency was more strongly correlated with agency than experience (r ¼ 0.82 vs. r ¼ 0.22), and moral patiency was more strongly correlated with experience than agency (r ¼ 0.85 vs. r ¼ 0.26). Thus, characters scoring much higher on agency than experience (like God) were attributed more moral agency than patiency, whereas characters scoring lower on agency than experience (like the infant) were attributed more moral patiency than agency (Gray & Wegner, 2009). Indirect evidence for the hypothesis that moral patiency is more closely linked to experience than agency comes from an experimental study of folkpsychological explanation (Knobe & Prinz, 2008, Study 6). Participants in this study were randomly assigned to one of two conditions in which they read a story involving a fictional character interested in the psychological capacities of fish. In one condition, the character was described as wanting to know how well the fish could remember the location of food sources in their habitat (an agentic capacity); in the other condition, the character wanted to know whether fish could feel pain (an experiential capacity). After reading the story, participants were asked to explain why the character might want to have this information. The pattern of responses between conditions varied dramatically: in the memory condition, typical answers referred to the character’s interest in predicting, explaining, or controlling the behavior of the fish; in the pain condition, the focus of explanation was on the character’s concern about the welfare of the fish. Consistent with the correlational evidence reported by Gray et al. (2007), these findings support the experientialist hypothesis that the attribution of moral patiency is driven mostly by psychological patiency, with psychological agency playing either a minor role or no role at all. Note that this hypothesis implies only that psychological patiency is the most heavily weighted feature in the multidimensional feature space corresponding to the concept of moral patiency. According to a stronger form of experientialism, psychological patiency is the only feature in that space and hence the sole determinant of moral patiency. Though some experimental studies of moral patiency involving the direct manipulation of perception of a target’s experience and agency have lent support to experientialism, evidence for this view is mixed. On the positive side, in a study using vignettes about the treatment of lobsters, lobsters were rated higher in moral patiency (e.g., more deserving of protection from harm) when described as high in sentience and low in intelligence than when described as low in sentience and high in intelligence; indeed, while moral patiency ratings increased relative to an initial baseline measure (taken prior to the addition of information about lobster psychology) when the lobsters were described as sentient but unintelligent, moral patiency ratings dropped below baseline when the lobsters were described as intelligent but insentient (Jack & Robbins, 2012, Study 1). Consistent with this result, two studies using a full-factorial design, one involving a story about lobster farming and the other a story about surgical pattern of correlation, it seems plausible that emotional aversion to harming something is a reasonably reliable indicator of the perceived moral wrongness of causing that harm.

207

208

      

research on monkeys, showed an effect of sentience on ratings of moral patiency but no effect of intelligence, consistent with experientialism (Jack & Robbins, 2012, Study 2; Sytsma & Machery, 2012, Study 1). On the negative side, studies of moral patiency using an “alien species” paradigm are difficult to square with experientialism. In one such study, moral patiency ratings of a fictional extraterrestrial species called “atlans” were higher when the creatures were described as high in agency than when they were described as low in agency, but moral patiency ratings were no higher when the creatures could feel pleasure and pain than when they lacked those capacities (Sytsma & Machery, 2012, Study 2).7 In a companion study using a vignette about an individual atlan (rather than atlans in general), there was an effect of both agency and experience on moral patiency, but the effect of agency was greater than that of experience (Sytsma & Machery, 2012, Study 4). And, in an unrelated study, moral patiency ratings of a fictional extraterrestrial species called “trablans” were higher in the high agency condition than the low agency condition (Piazza & Loughnan, 2016, Study 1).8 This last finding is prima facie at odds with experientialism, insofar as the large size of the effect (d ¼ 0.84) suggests that the attribution of moral patiency is strongly influenced by the perception of psychological agency.9 A further challenge to experientialism comes from evidence that attributions of moral patiency are highly sensitive to whether an individual is seen as having a harmful disposition (Khamitov et al., 2016; Opotow, 1993; Piazza et al., 2014). This third factor is a cluster of psychological traits, not a cluster of capacities. As such, it is not accounted for in capacity-based models like the experience–agency model (Gray et al., 2007), though it does implicitly figure in trait-based models, as the opposite pole of the warmth dimension of the warmth–competence model (Fiske et al., 2002) and the communion dimension of the agency–communion model (Abele & Wojciszke, 2007). In one study, participants rated the moral patiency of an alien species that was variously described, depending on condition, as high or low in intelligence, high or low in sentience, and harmful or harmless (Piazza et al., 2014, Study 2). Moral patiency was measured using a five-item measure that included questions about 7

8

9

It’s possible that the magnitude of the effect of condition on moral patiency in the agency dimension reflected the fact that atlans in the high agency condition were described as having a friendly, peaceful disposition and “highly developed literary, musical, and artistic traditions” (Sytsma & Machery, 2012, p. 317), qualities that are difficult to imagine in a creature that lacked a rich experiential life (Jack & Robbins, 2012). But that would not explain why moral patiency ratings in the low agency–high experience condition did not significantly differ from ratings in the low agency–low experience condition. Note that the experimental manipulation in this study was of psychological agency, not moral agency. Hence, the effect of condition observed here is consistent with moral typecasting, the hypothesis that moral patiency and moral agency are inversely correlated both within and (more controversially) across contexts of evaluation (Gray & Wegner, 2009). Interestingly, the effect of intelligence on the moral standing of animals appears to be species relative. Though evidence suggests that it applies to alien species (like trablans) and wild animals (like tapirs), it seems not to apply to pigs, even when all three types of animal are described as farmed for human consumption (Piazza & Loughnan, 2016, Study 2).

Moral Categorization and Mind Perception

the wrongness of harming the creatures, the extent to which the creatures were entitled to protection from harm, and the extent to which they deserved to be treated with compassion. Main effects were observed for all three dimensions of variation, most dramatically in the case of the harmfulness factor, the effect of which (i.e., decreased moral patiency) dwarfed in size the effects of both intelligence and sentience (increased moral patiency). Interestingly, the sentience manipulation affected moral patiency regardless of whether the creature was described as harmful, but there was no effect of intelligence when the creature was described as harmless. This asymmetry could be taken to suggest that, though harmfulness was the most important of the three factors, psychological patiency played a more fundamental role in shaping attributions of moral patiency than psychological agency did. That said, the main finding here – that harmfulness dramatically diminishes moral patiency – is at odds with experientialism, at least insofar as harmfulness is a behavioral trait, not an experiential capacity. (Recall that according to experientialism, the attribution of moral patiency is driven for the most part by the perception of psychological patiency, understood as a cluster of experiential capacities.) Further support for the relevance of psychological traits to moral patiency can be found in research on dehumanization. In one study, traits associated with human nature (HN), such as emotional responsiveness and openness, were positively correlated with moral patiency (as measured by opposition to mistreatment), but no analogous correlation was found for traits associated with human uniqueness (HU), such as civility and refinement (Bastian et al., 2011, Study 1). In a companion study, fictional characters described as high in HN traits were rated higher in moral patiency than characters described as low in HN traits, while high-HU characters got lower ratings than their low-HU counterparts, even though characters described as high in either type of trait were equally liked by participants (Bastian et al., 2011, Study 2). The full array of factors affecting the attribution of moral patiency is not limited to mental capacities and traits. For example, perceived physical attractiveness also influences judgments of whether a species of animal deserves to be protected from harm or otherwise shown moral consideration (Gunnthorsdottir, 2001; Klebl et al., 2022; Klebl et al., 2021). Perceived similarity to humans, over and above psychological similarity, may also play a role (Bastian, Costello, et al., 2012), though some studies suggest otherwise (Akechi & Hietanen, 2021; Kozachenko & Piazza, 2021; Opotow, 1993). Whatever else is the case, it seems clear that two-dimensional models of mind perception, whether capacitybased or trait-based, are insufficient for understanding how mind perception contributes to judgments of moral patiency. What is needed are higherdimensional models incorporating both capacities and traits. A substantial body of empirical research supports the hypothesis that mind perception influences the attribution of moral patiency. The converse hypothesis – that the attribution of moral patiency to a target directly influences the perception of its mindedness – is more controversial. What is relatively uncontroversial is that categorizing an animal as a food source tends to reduce

209

210

      

attributions of moral patiency and attributions of mindedness, a phenomenon that makes sense in light of the general tendency to reduce cognitive dissonance (Bastian, Loughnan, et al., 2012; Loughnan et al., 2010; Piazza & Loughnan, 2016). Similarly, describing an animal as vulnerable to harm tends to increase both the extent to which it is seen as a moral patient and the extent to which it is seen as a psychological patient (Jack & Robbins, 2012, Studies 3 and 4). But there is currently no good evidence that reducing the perceived moral patiency of an animal causes a reduction in its perceived mindedness. Indeed, results from one small study (N ¼ 80) showed that the effect of categorizing an animal as food on perception of its moral patiency was mediated by perception of its capacity to suffer, not the other way around (Bratanova et al., 2011). This finding is hard to square with the hypothesis that categorizing an animal as a moral patient affects how its mind is perceived.10 By contrast, there is some evidence that the perceived moral patiency of other types of entity (i.e., nonanimals) affects the perception of their mindedness (Ward et al., 2013). In a series of four studies, participants attributed more experience and agency to three fictional characters – a human in a persistent vegetative state, a robot, and a human corpse – when the character was described as a victim of intentional harm. In each case, a manipulation check confirmed that the harmful action was seen as morally wrong and hence that the victim was seen as a moral patient (Ward et al., 2013, Studies 1–4). One might hesitate to attach too much significance to these results, given that the vignettes used in these studies featured atypical, nonnatural characters. Another concern is that the measure of moral patiency used in these four studies (i.e., the perceived moral wrongness of the action directed at the target) is too indirect. The concern here relates to a key feature of the characterization of moral patiency introduced at the beginning of this section. Moral patiency entails the possibility of being wronged, not just the possibility of being the recipient of a wrong action. Hence, the fact that something is seen as a target of wrongdoing does not show that it is seen as a moral patient. Indeed, it seems plausible to suppose that people regard intentionally harming a robot as wrong for some other reason than that it would result in the robot’s being wronged; for example, it might be seen as wrong because it would result in a wrong being done to the robot’s owner. (The same line of reasoning applies to the other two cases: the patient in a vegetative state and the corpse.) But this concern does not apply to the fifth study, which used a vignette about a normal human character. Here, however, describing the character as a victim of intentional wrongdoing produced the puzzling result – likely the effect of an experimental artifact – that the character was attributed less experience and agency (Ward et al., 2013, Study 5).

10

Curiously, Goodwin (2015) draws the opposite conclusion, despite noting Bratanova et al.’s (2011) results and pointing out gaps in Jack and Robbins (2012) argument for the causal dependence of psychological patiency on moral patiency, which overlooks the need for mediation analysis.

Moral Categorization and Mind Perception

In short, while attributing moral patiency to a target may indeed directly influence how its mind is perceived, evidence of this influence is limited.

9.3 Moral Agency

A moral agent is an individual that can commit morally wrong actions and be held responsible for those actions, in the sense that it is normatively appropriate to blame or punish them accordingly. This characterization of moral agency is a relatively narrow one, as it covers only actions with a negative valence (immoral behavior). A fuller characterization of moral agency would also include positively valenced actions (moral behavior). Judgments of moral agency in this broader sense appear to be sensitive to different factors and sensitive in different ways to the same factor, depending on the valence of the action (Anderson et al., 2020; Robbins & Alvear, 2023). A potential advantage of a more focused perspective on moral agency, however, is that it enables us to conceptualize moral agency as the mirror image of moral patiency: Moral patients have rights, and moral agents have a duty to respect those rights.

Moral agency, like moral patiency, is a normative concept. Hence, just as moral patiency should not be conflated with psychological patiency (the possession of experiential mental capacities), moral agency should not be confused with psychological agency (the possession of agentic mental capacities). And like moral patiency, moral agency can be conceptualized either as a property that admits of degree or in binary terms. The same point applies to moral responsibility: Though an individual might deserve more or less blame, or more or less severe punishment, for a morally wrong action, being worthy of any amount of blame or punishment suffices for moral responsibility (hence, for moral agency) in the categorical sense. Further, moral agency is a relatively stable, context-invariant property of individuals. Hence, even if moral responsibility does admit of degree, having more (or less) moral responsibility for a particular action does not entail having more (or less) moral agency.

Philosophical accounts of moral responsibility provide a useful jumping-off point for thinking about the role of mind perception in the attribution of moral agency. Standard views of moral responsibility tie moral responsibility to the possession of sophisticated cognitive abilities that appear to be unique to our species, such as the capacity to recognize, grasp, and act on the basis of moral considerations (Arpaly, 2003; Fischer & Ravizza, 1998; Strawson, 1962). Translating these accounts to the attributional realm entails linking the attribution of moral responsibility – and hence, the categorization of an individual as a moral agent – primarily to the perception of mental capacities on the agentic side of the ledger and only marginally to capacities on the experiential side. As with moral patiency, however, the empirical story is more complicated than this simple hypothesis – call it agentialism – suggests. Note, however, that just as experientialism does not entail that moral patiency is solely determined by psychological patiency, agentialism does not entail that moral agency is solely determined by psychological agency. Agentialism requires only that psychological agency is the most heavily weighted feature in the feature space associated with the concept of moral agency. (Stronger formulations of agentialism are possible, but we will not consider them here.)

Correlational evidence for agentialism comes from Gray et al.’s (2007) study, in which participants made both comparative judgments of characters’ mental capacities and comparative judgments of their moral responsibility for a hypothetical transgression. Factor analysis of the data revealed that moral responsibility was strongly correlated with agentic capacities but only weakly correlated with experiential ones, suggesting that psychological agency contributes more to moral agency than psychological patiency does. A good deal of experimental evidence supports this hypothesis. First, a variety of studies have shown that judgments of blame are sensitive to whether a behavior is intentional and freely chosen (Alicke, 2000; Cushman, 2008; Guglielmo et al., 2009; Monroe et al., 2014). Similarly, multiple studies have shown that reduced psychological agency is causally linked to reduced moral responsibility. In one study, for example, a fictional character with a severe learning disability was attributed less moral responsibility for immoral behavior than his cognitively typical counterpart (Gray & Wegner, 2009, Study 1b). In another study, blame and punishment judgments of a fictional character who had committed a violent crime were mitigated by information that the offender suffered from psychotic delusions due to schizophrenia (de Vel-Palumbo et al., 2021). Likewise, recent evidence suggests that the mitigating effect of a history of childhood abuse and neglect on judgments of blame for antisocial behavior is mediated by perceived deficits in socioemotional functioning, as measured by symptoms of posttraumatic stress disorder (Robbins & Alvear, 2023, Study 3). Both individually and collectively, these findings provide support for the hypothesis that psychological agency is a key determinant of moral responsibility.

There is also some (albeit limited) evidence that psychological patiency influences moral responsibility. In one study, for example, a fictional character was judged less responsible for stealing a car when described as unusually sensitive to pain – suggesting that the contribution of psychological patiency to moral responsibility, unlike the contribution of psychological agency, may sometimes be negative, rather than positive (Gray & Wegner, 2009, Study 3a). An alternative explanation of this finding, however, is that the highly pain-sensitive character was attributed less responsibility because he was seen as suffering from a mental disorder that impaired his psychological agency.

As noted earlier, however, there is more to the attribution of mindedness than the attribution of mental capacities, given that mind perception in the broad sense also involves the attribution of mental traits. (The distinction between capacities and traits is critical here, just as it was in the context of our earlier discussion of moral patiency, where a similar point was made about the limitations of the experience–agency model.) A full account of how mind perception contributes to the attribution of moral agency needs to go beyond Gray et al.’s (2007) experience–agency model (which focuses exclusively on the representation of mental capacities), just as it does in the case of moral patiency. For example, perception of traits associated with Fiske et al.’s (2002) warmth–competence model appears to affect perception of moral agency. Female victims of domestic abuse are blamed more for their victimization when described as low in warmth (Capezza & Arriaga, 2008), and high-status individuals, who are seen as high in competence and low in warmth, are punished more severely for the same transgression than their low-status counterparts (Fragale et al., 2009). In similar fashion, traits associated with the HU dimension of Haslam’s (2006) model of dehumanization, such as refinement and civility, have been shown to influence the attribution of moral agency. In one study, for example, individuals high in HU traits were judged to be more worthy of blame and punishment than their low-HU counterparts, whereas the presence of HN traits, such as emotional responsiveness and interpersonal warmth, had no effect on these judgments (Bastian et al., 2011, Study 2). In another study, higher scores on a composite measure of dehumanization, incorporating both HU and HN traits, predicted more severe blame and punishment judgments for the perpetrators of various criminal offenses, ranging in seriousness from financial fraud to mass murder (Bastian et al., 2013, Study 2).

A further influence on the attribution of moral responsibility is the perception of moral character (Pizarro & Tannenbaum, 2012). Evidence from multiple studies suggests that individuals of bad moral character are seen as more responsible and more deserving of blame and punishment for immoral actions than individuals with good moral character (Nadler, 2012; Nadler & McDonnell, 2012; Schwartz et al., 2022). In one study, participants were randomly assigned to one of four conditions, in each of which they read a story about an accident in which the protagonist lost control while downhill skiing and collided with another skier on the slope, causing their death (Nadler, 2012, Experiment 1). In the “good character” condition, the protagonist was described as hardworking, responsible, reliable, and generous, and in the “bad character” condition, he was described as lazy, irresponsible, unreliable, and selfish. In the “low recklessness” condition, the protagonist was described as confident that he could avoid hitting anyone on the slope, and in the “high recklessness” condition he was described as knowing there was a risk of hitting someone but not caring. Results showed that the protagonist was seen as more responsible, more blameworthy, and deserving of more severe punishment for the killing when described as having a bad moral character, regardless of whether he was acting recklessly. Similar results were obtained from a study using a vignette about a woman whose dogs escaped from her yard and killed a small child. Regardless of whether the protagonist was aware of the risk posed by her dogs, participants in the bad character condition judged her more harshly than participants in the good character condition (Nadler & McDonnell, 2012, Experiment 3).

As we saw in the case of moral patiency, attributions of moral agency are systematically affected by perception of an individual’s mental capacities and traits. Evidence of a causal link in the opposite direction is sparse by comparison, though there is some evidence that perceiving an individual as a moral agent in a given context attenuates the perception of their psychological patiency in that context.11 In one study, participants attributed less pain sensitivity to a fictional character engaged in fraudulent business activity when described as playing a leading role in the fraud, rather than a supporting role (Gray & Wegner, 2009, Study 3c). As predicted, participants attributed more blame to the main perpetrator of the fraud, suggesting that the intended manipulation was successful. Without conducting a mediation analysis, however, one cannot rule out the possibility that the effect of the manipulation on perception of the character’s moral agency in the fraud scenario was mediated by perception of their psychological patiency in that scenario, rather than the effect of the manipulation on the character’s psychological patiency being mediated by perception of their moral agency. It may have been, for example, that the character playing a leading role in the fraud was seen as less sensitive to pain than the character playing a supporting role because of differences in personality (e.g., levels of dominance, confidence, and risk aversion), rather than a difference in moral agency (Arico, 2012).

11 These results have been construed as evidence for the claim that moral agency and moral patiency are inversely causally related, as per the moral typecasting hypothesis (Gray & Wegner, 2009, 2011). As noted earlier, this construal is problematic because it elides the distinction between moral patiency (a normative concept) and psychological patiency (a descriptive concept).

9.4 Conclusion

The ability to perceive other minds plays a fundamental role in social cognition, and nowhere is its fundamentality more evident than in the study of moral cognition (Gray et al., 2012). Unsurprisingly, the role of mind perception in moral categorization is multifaceted and complex, and the processes and mechanisms underlying it are far from completely understood. Nonetheless, our overview of research on the topic suggests a few major themes. First, none of the standard models of mind perception – whether two-dimensional or three-dimensional, capacity-based or trait-based – has the resources necessary to capture the full range of phenomena linking attributions of moral status with attributions of mindedness. The reason for this is that none of these models encompasses the full range of mental features (both capacities and traits) that influence the perception of moral patiency and moral agency. Second, attributions of moral patiency and attributions of moral agency are sensitive to mind perception in different ways. For instance, while both agentic and experiential capacities and traits contribute in a positive way to moral patiency, and agentic capacities and traits contribute in a positive way to moral agency, there is some evidence that experiential capacities and traits have the opposite effect on moral agency. Third, there is some evidence that categorizing an entity as a moral patient or moral agent affects how the mind of that entity is perceived (not just the other way around), but evidence of these effects is in relatively short supply.

The empirical literature on moral categorization reviewed in this chapter is rich and interesting. Still, it has limitations, one of which is especially noteworthy: the reliance on indirect measures of moral patiency and moral agency. In studies of moral patiency, for example, perceptions of moral patiency are typically assessed by asking questions like: “To what extent would it be morally wrong to harm X?” rather than questions like: “To what extent does X deserve to be protected from harm?” The distinction between these questions is important, because (as noted in Section 9.2) moral patiency implies the potential to be morally wronged, not just the potential to be the target of a morally wrong action (which might be wrong in virtue of the harm done to some other individual). Thus, participants’ responses to questions about the moral wrongness of harming an individual are only an indirect indicator of the perceived moral patiency of that individual. In studies of moral agency, perceived moral agency is typically measured by asking about the extent to which an individual deserves blame or punishment for an action, or is responsible for the action and its outcome. The issue here is that attributions of moral responsibility for an action are sensitive to factors that need not affect perceptions of moral agency, at least insofar as moral agency – unlike blame or responsibility – is a relatively stable property of individuals, invariant across contexts of action. Hence, attributions of blame and punishment are only an indirect, approximate indicator of perceived moral agency. Future research on moral categorization would benefit from the employment of more direct measures of moral patiency and moral agency, in addition to the indirect measures commonly in use.

As with other topics in moral psychology, the study of moral categorization originates with theorizing by philosophers. This is true of research on moral patiency, which is deeply informed by the contrast between utilitarian and deontological perspectives in normative ethics, and research on moral agency, which reflects the influence of philosophical thinking about moral responsibility. What experimental studies in this area reveal is the plurality of factors that figure into attributions of both moral patiency and moral agency, including factors often overlooked by normative theorists, such as character traits (e.g., interpersonal warmth, harmfulness, and intellectual refinement). The normative significance of these factors, however, is unclear. It may well be that intuitive thinking about moral categories of the sort revealed by experimental studies is not a reliable guide to the structure of these categories, in which case it would be risky to use the results of those studies to constrain normative theory (Greene, 2008). By contrast, it seems that reflective thinking about moral categories should be informed, at least to some extent, by the patterns of attribution observed in empirical research. Normative theorizing about moral categories in a way that is appropriately sensitive to empirical evidence is (or ought to be) a central task for philosophers working in this area.

Research on moral categorization also has profound implications for the law. Consider the influence of characterological information on the attribution of blame and responsibility and hence on the perception of moral agency (given that such attributions apply only to moral agents). Empirical evidence suggests, for example, that the outcome in a legal proceeding is likely to be worse for a defendant who is perceived by the judge or jury as having a bad moral character in virtue of a prior record of offenses from which an inference to bad character is naturally drawn (Nadler, 2012). Highlighting the negative experiential effects of a crime on its victims will likely have the same effect. Introducing biographical information about a defendant with an extensive history of suffering at the hands of others, by contrast, will tend to have the opposite effect (Robbins & Litton, 2018), as will information about the defendant’s cognitive limitations (Gray & Wegner, 2009). Whether these effects make sense in normative terms is an important topic for legal theory, just as it is for moral philosophy (Greene & Cohen, 2004).

Moral categorization is a central topic of investigation in the moral psychology of artificial intelligence (Bonnefon et al., 2024; Ladak et al., 2023). In general, it appears that mind perception contributes as much to the attribution of moral status to artificial agents as it does in the case of biological agents, and in similar ways. For example, the attribution of moral agency to robots is sensitive to perception of their capacities for intentional action, free choice, and the appreciation of moral considerations for acting, just as it is in the case of humans (Bigman et al., 2019). The attribution of moral agency to robots also appears to be influenced by perception of their experiential capacities, but differently than in the human case, where the effect may sometimes be negative rather than positive. Evidence of this asymmetry comes from a vignette-based study in which a fictional robot was judged more responsible for causing harm in a sacrificial moral dilemma scenario when described as having affective states rather than lacking them, possibly as a result of the affective robot’s being humanized (Nijssen et al., 2023). Explaining this and other human–robot asymmetries in the attribution of moral agency, and the application of moral categories more generally, is an active area of research in moral psychology – and one in which thinking about mind perception will no doubt continue to play an essential role.

Acknowledgments

Many thanks to my coeditor, Bertram Malle, and three anonymous reviewers, for thoughtful and constructive feedback on earlier versions of this chapter.

References

Abele, A. E., & Wojciszke, B. (2007). Agency and communion from the perspective of self versus others. Journal of Personality and Social Psychology, 93(5), 751–763.
Akechi, H., & Hietanen, J. K. (2021). Considering victims’ minds in the evaluation of harmful agents’ moral standing. Social Cognition, 39(4), 489–503.
Alicke, M. D. (2000). Culpable control and the psychology of blame. Psychological Bulletin, 126(4), 556–574.
Anderson, R. A., Crockett, M. J., & Pizarro, D. A. (2020). A theory of moral praise. Trends in Cognitive Sciences, 24(9), 694–703.
Arico, A. (2010). Folk psychology, consciousness, and context effects. Review of Philosophy and Psychology, 1(3), 371–393.
Arico, A. (2012). Breaking out of moral typecasting. Review of Philosophy and Psychology, 3(3), 425–438.
Arico, A., Fiala, B., Goldberg, R. F., & Nichols, S. (2011). The folk psychology of consciousness. Mind & Language, 26(3), 327–352.
Arpaly, N. (2003). Unprincipled virtue. Oxford University Press.
Barrett, J. L., & Keil, F. C. (1996). Conceptualizing a nonnatural entity: Anthropomorphism in God concepts. Cognitive Psychology, 31(3), 219–247.
Bastian, B., Costello, K., Loughnan, S., & Hodson, G. (2012). When closing the human–animal divide expands moral concern: The importance of framing. Social Psychological and Personality Science, 3(4), 421–429.
Bastian, B., Crimston, C. R., Klebl, C., & van Lange, P. A. M. (2023). The moral significance of protecting environmental and cultural objects. PLoS ONE, 18(2), Article e0280393.
Bastian, B., Denson, T. F., & Haslam, N. (2013). The roles of dehumanization and moral outrage in distributive justice. PLoS ONE, 8(4), Article e61842.
Bastian, B., Laham, S. M., Wilson, S., Haslam, N., & Koval, P. (2011). Blaming, praising, and protecting our humanity: The implications of everyday dehumanization for judgments of moral status. British Journal of Social Psychology, 50(3), 469–483.
Bastian, B., Loughnan, S., Haslam, N., & Radke, H. R. M. (2012). Don’t mind meat? The denial of mind to animals used for human consumption. Personality and Social Psychology Bulletin, 38(2), 247–256.
Bentham, J. (1970). Introduction to the principles of morals and legislation. Clarendon Press. (Original work published 1789)
Bernstein, M. H. (1998). On moral considerability: An essay on who morally matters. Oxford University Press.
Bigman, Y. E., Waytz, A., Alterovitz, R., & Gray, K. (2019). Holding robots responsible: The elements of machine morality. Trends in Cognitive Sciences, 23(5), 365–368.
Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227–247.
Bonnefon, J.-F., Rahwan, I., & Shariff, A. (2024). The moral psychology of artificial intelligence. Annual Review of Psychology, 75, 653–675.
Brambilla, M., & Leach, C. W. (2014). On the importance of being moral: The distinctive role of morality in social judgment. Social Cognition, 32(4), 397–408.
Bratanova, B., Loughnan, S., & Bastian, B. (2011). The effect of categorization as food on the perceived moral standing of animals. Appetite, 57(1), 193–196.
Buckwalter, W., & Phelan, M. (2013). Function and feeling machines: A defense of the philosophical conception of subjective experience. Philosophical Studies, 166(2), 349–361.
Callicott, J. B. (1980). Animal liberation. Environmental Ethics, 2(4), 311–338.
Capezza, N. M., & Arriaga, X. B. (2008). Why do people blame victims of abuse? The role of stereotypes of women on perceptions of blame. Sex Roles, 59(11), 839–850.
Carruthers, P. (1992). The animals issue: Moral theory in practice. Cambridge University Press.
Chalmers, D. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.
Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, 108(2), 353–380.
DeGrazia, D. (2008). Moral status as a matter of degree? Southern Journal of Philosophy, 46(2), 181–198.
de Vel-Palumbo, M., Schein, C., Ferguson, R., Chang, M. X-L., & Bastian, B. (2021). Morally excused but socially excluded: Denying agency through the defense of mental impairment. PLoS ONE, 16(6), Article e0252586.
Epley, N., & Waytz, A. (2010). Mind perception. In S. T. Fiske, D. T. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (pp. 498–541). Wiley.
Fischer, J. M., & Ravizza, M. (1998). Responsibility and control: A theory of moral responsibility. Cambridge University Press.
Fiske, S. T., Cuddy, A. J. C., & Glick, P. (2006). Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sciences, 11(2), 77–83.
Fiske, S. T., Cuddy, A. J. C., Glick, P., & Xu, J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82(6), 878–902.
Fragale, A. R., Rosen, B., Xu, C., & Merideth, I. (2009). The higher they are, the harder they fall: The effects of wrongdoer status on observer punishment recommendations and intentionality attributions. Organizational Behavior and Human Decision Processes, 108(1), 53–65.
Friedman, M. (2013). How to blame people responsibly. Journal of Value Inquiry, 47(3), 271–284.
Goodpaster, K. E. (1978). On being morally considerable. Journal of Philosophy, 75(6), 308–325.
Goodwin, G. P. (2015). Experimental approaches to moral standing. Philosophy Compass, 10(12), 914–926.
Goodwin, G. P., Piazza, J., & Rozin, P. (2014). Moral character predominates in person perception and evaluation. Journal of Personality and Social Psychology, 106(1), 148–168.
Gray, H. M., Gray, K., & Wegner, D. (2007). Dimensions of mind perception. Science, 315(5812), 619.
Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101–124.
Gray, K., & Wegner, D. (2009). Moral typecasting: Divergent perceptions of moral agents and moral patients. Journal of Personality and Social Psychology, 96(3), 503–520.
Gray, K., & Wegner, D. (2011). To escape blame, don’t be a hero – Be a victim. Journal of Experimental Social Psychology, 47(2), 516–519.
Greene, J. (2008). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 3. The neuroscience of morality: Emotion, brain disorders, and development (pp. 35–80). MIT Press.
Greene, J., & Cohen, J. (2004). For the law, neuroscience changes nothing and everything. Philosophical Transactions of the Royal Society of London B, 359(1451), 1775–1785.
Guglielmo, S., Monroe, A. E., & Malle, B. F. (2009). At the heart of morality lies folk psychology. Inquiry, 52(5), 449–466.
Gunnthorsdottir, A. (2001). Physical attractiveness of an animal species as a decision factor for its preservation. Anthrozoös, 14(4), 204–215.
Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10(3), 252–264.
Haslam, N., & Loughnan, S. (2014). Dehumanization and infrahumanization. Annual Review of Psychology, 65, 399–423.
Huebner, B. (2010). Commonsense concepts of phenomenal consciousness: Does anyone care about functional zombies? Phenomenology and the Cognitive Sciences, 9(1), 133–155.
Jack, A. I., & Robbins, P. (2012). The phenomenal stance revisited. Review of Philosophy and Psychology, 3(3), 383–403.
Kant, I. (1998). Groundwork of the metaphysics of morals (M. Gregor, Ed. and Trans.). Cambridge University Press. (Original work published 1785)
Kant, I. (1996). The metaphysics of morals (M. Gregor, Ed. and Trans.). Cambridge University Press. (Original work published 1797)
Khamitov, M., Rotman, J. D., & Piazza, J. (2016). Perceiving the agency of harmful agents: A test of the dehumanization versus moral typecasting accounts. Cognition, 146(1), 33–47.
Klebl, C., Luo, Y., & Bastian, B. (2022). Beyond aesthetic judgment: Beauty increases moral standing through perceptions of purity. Personality and Social Psychology Bulletin, 48(6), 954–967.
Klebl, C., Luo, Y., Tan, N. P., Ping Ern, J. T., & Bastian, B. (2021). Beauty of the Beast: Beauty as an important dimension in the moral standing of animals. Journal of Environmental Psychology, 75, Article 101624.
Knobe, J., & Prinz, J. (2008). Intuitions about consciousness: Experimental studies. Phenomenology and the Cognitive Sciences, 7(1), 67–83.
Kozachenko, H. H., & Piazza, J. (2021). How children and adults value different animal lives. Journal of Experimental Child Psychology, 210, Article 105204.
Ladak, A., Loughnan, S., & Wilks, M. (2023). The moral psychology of artificial intelligence. Current Directions in Psychological Science, 33(1), 27–34.
Loughnan, S., Haslam, N., & Bastian, B. (2010). The role of meat consumption in the denial of moral status and mind to meat animals. Appetite, 55(1), 156–159.
Malle, B. F. (2019). How many dimensions of mind perception really are there? In A. K. Goel, C. M. Seifert, & C. Freska (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (pp. 2268–2274). Cognitive Science Society.
Monroe, A. E., Dillon, K. D., & Malle, B. F. (2014). Bringing free will down to earth: People’s psychological concept of free will and its role in moral judgment. Consciousness and Cognition, 27, 100–108.
Nadler, J. (2012). Blaming as a social process: The influence of character and moral emotion on blame. Law and Contemporary Problems, 75(2), 1–31.
Nadler, J., & McDonnell, M.-H. (2012). Moral character, motive, and the psychology of blame. Cornell Law Review, 97(2), 255–304.
Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83(4), 435–450.
Nijssen, S. R. R., Müller, B. C. N., Bosse, T., & Paulus, M. (2023). Can you count on a calculator? The role of agency and affect in judgments of robots as moral agents. Human-Computer Interaction, 38(5–6), 400–416.
Opotow, S. (1993). Animals and the scope of justice. Journal of Social Issues, 49(1), 71–85.
Phelan, M., Arico, A., & Nichols, S. (2013). Thinking things and feeling things: On an alleged discontinuity in folk metaphysics of mind. Phenomenology and the Cognitive Sciences, 12(4), 703–725.
Piazza, J., Landy, J. F., & Goodwin, G. P. (2014). Cruel nature: Harmfulness as an important, overlooked dimension in judgments of moral standing. Cognition, 131(1), 108–124.
Piazza, J., & Loughnan, S. (2016). When meat gets personal, animals’ minds matter less: Motivated use of intelligence information in judgments of moral standing. Social Psychological and Personality Science, 7(8), 867–874.
Pizarro, D., & Tannenbaum, D. (2012). Bringing character back: How the motivation to evaluate character influences judgments of moral blame. In M. Mikulincer & P. R. Shaver (Eds.), The social psychology of morality: Exploring the causes of good and evil (pp. 91–108). American Psychological Association.
Raz, J. (1984). On the nature of rights. Mind, 93(370), 194–214.
Robbins, P., & Alvear, F. (2023). Deformative experience: Explaining the effects of adversity on moral evaluation. Social Cognition, 41(5), 415–446.
Robbins, P., & Jack, A. I. (2006). The phenomenal stance. Philosophical Studies, 127(1), 59–85.
Robbins, P., & Litton, P. (2018). Crime, punishment, and causation: The effect of etiological information on the perception of moral agency. Psychology, Public Policy, and Law, 24(1), 118–127.
Schein, C., & Gray, K. (2018). The Theory of Dyadic Morality: Reinventing moral judgments by redefining harm. Personality and Social Psychology Review, 22(1), 32–70.
Schtulman, A., & Lindeman, M. (2016). Attributes of God: Conceptual foundations of a foundational belief. Cognitive Science, 40(3), 635–670.
Schwartz, F., Djeriouat, H., & Trémolière, B. (2022). Agents’ moral character shapes people’s moral evaluations of accidental harm transgressions. Journal of Experimental Social Psychology, 102, Article 104378.
Singer, P. (1989). All animals are equal. In T. Regan & P. Singer (Eds.), Animal rights and human obligations (pp. 215–226). Oxford University Press.
Strawson, P. F. (1962). Freedom and resentment. Proceedings of the British Academy, 48, 187–211.
Sytsma, J., & Machery, E. (2009). How to study folk intuitions about phenomenal consciousness. Philosophical Psychology, 22(1), 21–35.
Sytsma, J., & Machery, E. (2010). Two conceptions of subjective experience. Philosophical Studies, 151(2), 299–327.
Sytsma, J., & Machery, E. (2012). The two sources of moral standing. Review of Philosophy and Psychology, 3(3), 303–324.
Todd, P. (2019). A unified account of the moral standing to blame. Noûs, 53(2), 347–374.
Vetter, B. (2013). ‘Can’ without possible worlds: Semantics for anti-Humeans. Philosophers’ Imprint, 13(16), 1–27.
Vollmer, F. (1993). A theory of traits. Philosophical Psychology, 6(1), 67–79.
Ward, A. F., Olsen, A. S., & Wegner, D. M. (2013). The harm-made mind: Observing victimization augments attributions of minds to vegetative patients, robots, and the dead. Psychological Science, 24(8), 1437–1445.
Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383–388.
Weisman, K., Dweck, C. S., & Markman, E. M. (2017). Rethinking people’s conceptions of mental life. Proceedings of the National Academy of Sciences, 114(43), 11374–11379.
Weisman, K., Legare, C. H., Smith, R. E., Dzokoto, V. A., Aulino, F., Ng, E., Dulin, J. C., Ross-Zehnder, N., Brahinsky, J. D., & Luhrmann, T. M. (2021). Similarities and differences in concepts of mental life among adults and children in five cultures. Nature Human Behaviour, 5(10), 1358–1368.


10 Moral Emotions: Are They Both Distinct and Good?

Pascale Sophie Russell

The study of moral emotions is thriving, with an upsurge in research on the topic. However, a key question that needs to be answered is: What makes moral emotions unique? Specifically, it is important to understand the distinct qualities of specific emotions and the degree to which an emotion can be moral. It is essential to consider this question, as moral emotions are believed to influence both our judgments and actions. In terms of emotions being related to moral judgments, three claims have been made in the literature: 1) emotions are associated with moral judgments; 2) emotions amplify moral judgments; and 3) emotions moralize nonmoral acts (Avramova & Inbar, 2013). Evidence to date is mainly supportive of the first claim – in other words, emotions are at the very least associated with moral judgments. Focusing on the relationship between emotions and action, some researchers have argued that a person’s moral emotions are better predictors of the person’s (moral) action than are other moral phenomena, such as moral reasoning (for reviews, see Haidt, 2001, 2003; Teper et al., 2015). Given that emotions may play a crucial role in both moral action and judgment, this suggests emotions are important to morality. Therefore, it is necessary to understand whether a given emotion is moral and whether moral emotions are distinct from nonmoral emotions. Unsurprisingly, it is difficult to decipher what makes a moral emotion distinct from a nonmoral emotion, since there are outstanding debates about what is a nonmoral emotion (Barrett & Russell, 2015; Cowen & Keltner, 2017; Ekman, 1999), what the scope of morality is (Ellemers et al., 2019), what a moral judgment is (Malle, 2021), and what constitutes a moral action (Teper et al., 2015). A careful examination of these issues is beyond the scope of this chapter, but it is nevertheless necessary to keep these issues in mind when evaluating whether and to what degree a given emotion is moral.

10.1 What Makes Moral Emotions Unique?

In terms of determining whether a given emotion is moral, previous definitions have focused either on unique elicitors of moral emotions (e.g., certain norm violations) and/or unique consequences of moral emotions (e.g., prosocial behavior). For example, Tangney et al. (2007) have argued that moral emotions respond to violations of norms that are supported by groups and whole societies. Therefore, moral emotions are crucial to social functioning because individuals often feel socially shared emotions in reaction to various events that are moral. This definition focuses more on the elicitors of certain moral emotions, with moral action being a byproduct. By contrast, Haidt (2003) defines moral emotions as “those emotions that are linked to the interests or welfare either of society as a whole or at least of persons other than the judge or agent” (p. 853). According to this definition, a prototypical moral emotion has two features. First, prototypical moral emotions have “disinterested elicitors,” meaning that a situation does not need to directly involve or impact an individual to trigger an emotional response. Specifically, Haidt argued that “the more an emotion tends to be triggered by such disinterested elicitors, the more it can be considered a prototypical moral emotion” (p. 854). Second, moral emotions are associated with prosocial tendencies, meaning that moral emotions are likely to motivate actions that will benefit others. Thus, Haidt’s definition focuses on both elicitors and outcomes that make a moral emotion unique.

Expanding on these previous definitions of moral emotions, it is important to consider in more depth who is benefiting from the emotion and the consequences of specific moral emotions. Cohen-Chen et al. (2020) have argued that emotions should be distinguished based on “whether they feel good” and/or “whether they do good.” Evidently, emotions that make us feel uncomfortable can lead to extremely favorable outcomes. For example, anger can be considered a prototypical moral emotion according to Haidt’s (2003) definition and can have both very negative consequences in the case of aggression and very positive sociomoral consequences, ultimately leading to corrective behaviors, negotiation, and reconciliation. Thus, anger can be good for relationships and groups. Cohen-Chen et al.’s (2020) conceptualization was used to explain emotions in conflict but can be applied here to moral emotions as well. In other words, we should be questioning ultimately whether certain moral emotions “do good” for individuals, groups, and societies, which is more important than whether emotions make us “feel good,” as emotions do not have to be a positive feeling or experience to lead to positive outcomes. However, we should broaden the scope of outcomes of moral emotions, by considering the impact of moral emotions on our thoughts and perceptions and how these affect others, in addition to actions and action tendencies. Thus, there needs to be a shift in focus from the elicitors of moral emotions to the social consequences of feeling moral emotions, which includes both cognitive and behavioral consequences for both the self and others.

The current chapter will analyze the different families of moral emotions (i.e., emotions within a family are similar but should have some differences) along the lines of whether they do good for others, both in terms of behavioral responses (e.g., promoting engagement, helping, and approach) and social-cognitive processes (e.g., open-mindedness, flexible thinking, and connectedness). The four main families of moral emotions include the other-condemning emotions (e.g., contempt, anger, and disgust); the self-conscious emotions (e.g., shame and guilt); the other-suffering emotions (e.g., compassion and empathy); and the other-praising emotions (e.g., awe and elevation) (Haidt, 2003). I will examine the typical consequences of these specific emotions, not just the situations that typically elicit them (see Table 10.1 for a summary).


Table 10.1 Summary of moral emotions criteria

Contempt
Elicitors: Community violations; seeing someone as beneath you, not measuring up and/or being incompetent.
Consequences: Social exclusion and nonnormative collective action.

Anger
Elicitors: Autonomy violations; blameworthy and/or harmful actions, which are often performed intentionally.
Consequences: Aggression and retaliation but also reparative behavior and normative collective action.

Disgust
Elicitors: Divinity violations; moral violations that demonstrate bad character and/or are despicable; bodily norm violations.
Consequences: Avoidance and purification, some evidence for nonnormative collective action and aggression.

Guilt
Elicitors: Self-moral failures, which focus on the action itself and/or prescriptive moral violations.
Consequences: Reparative behavior, social improvement, and collective action but not always when shame is accounted for.

Shame
Elicitors: Self-moral failures, which focus on the person and/or proscriptive moral violations.
Consequences: Avoidance, denial, and withdrawal, some evidence for social support and other positive consequences when the situation is repairable.

Empathy
Elicitors: Feeling the same emotion as another and/or understanding what they are feeling.
Consequences: Reconciliation, forgiveness, helping, and humanizing behaviors; increase in positive attitudes and seeing similarities with others, or self-other overlap; decrease in hostile action, aggression, stereotyping, and prejudice. However, we often avoid experiencing empathy and empathy failures are frequent.

Compassion
Elicitors: Feeling concerned for another person’s suffering.
Consequences: Compassion shares most of empathy’s positive outcomes, such as helping and humanizing behaviors. However, we often experience compassion fade.

Awe
Elicitors: Things that are vast, transcend previous experiences, or exceed expectancies.
Consequences: Need for accommodation and connection, critical thinking, decrease in selfishness and increase in positive feelings, helping, well-being, environmental concern, and charitable giving; perceiving that time is slowing down; more willingness to associate with others with opposing views.

Elevation
Elicitors: Witnessing acts of uncommon goodness or moral beauty.
Consequences: Helping behaviors, imitation of positive role models, desire to share more overlap with others, and decrease in prejudice.

Note. The tabulated emotions have been argued to be moral to some degree. In the chapter I evaluate whether certain emotions can be moral and to what degree.

10.2 Other-Condemning Emotions

Anger, disgust, and contempt are other-condemning emotions, sometimes referred to as morally condemning emotions (Haidt, 2003), or the hostility triad (Izard, 1991). We experience these emotions when we think that someone else (or a group) has engaged in some form of (moral) wrongdoing. These emotions have been considered to be basic or primary emotions (Ekman, 1999), which implies that they are universally experienced and have unique facial expressions. By contrast, others have argued that emotions are socially constructed (Averill, 1983; Barrett & Russell, 2015; Parrott, 2001); thus, there are no universal or basic emotions. Taking disgust as an example, recent evidence suggests that this emotion is shaped by social learning (Aznar et al., 2021; Rottman et al., 2018) to a greater extent than earlier research suggested (Bloom, 2004; Danovitch & Bloom, 2009). As a result, the “basicness” of these morally condemning emotions is questionable.

10.2.1 Contempt

Within the moral realm, research on the CAD triad hypothesis maps the three emotions (contempt, anger, disgust) to three distinct moral violations (community, autonomy, divinity) (Rozin et al., 1999). Specifically, it was found that
community violations are associated with contempt, autonomy violations are associated with anger, and divinity violations are associated with disgust. However, of the three other-condemning emotions, it is questionable whether contempt is distinct from anger and disgust. For example, some have argued that contempt is a form of disgust (for a review see Fischer & Giner-Sorolla, 2016). Even when examining the CAD triad hypothesis (Rozin et al., 1999), it was found that contempt often overlapped with anger and disgust and was triggered by norm violations in multiple moral domains (Fischer & Giner-Sorolla, 2016; P. S. Russell et al., 2013). There are also known methodological issues with measuring contempt; for example, the facial expression for contempt is less clear than that of anger or disgust (J. A. Russell, 1991). Additionally, for self-report measures, English speakers do not always understand what the term “contempt” means (Ekman et al., 1991). Related to this point, people less frequently think of contempt as an emotion; thus, it is less accessible (Fehr & Russell, 1984). If we first focus on the elicitors of contempt, within hierarchical societies contempt is elicited when an individual sees another individual as beneath them and not even worthy of strong feelings such as anger (Fischer & Giner-Sorolla, 2016). In more egalitarian societies, contempt is seen as an expression that an individual does not measure up (Haidt, 2003). Research has also identified that we experience contempt when we judge someone to be incompetent (Hutcherson & Gross, 2011). As mentioned previously, the CAD hypothesis links contempt with the ethics of community, which includes concerns such as caring that a certain hierarchy exists and that everyone has certain roles within society that they must fulfill (Rozin et al., 1999). Contempt can be directed at the person as a whole or at their actions (Malle et al., 2018). In terms of the experience of contempt, this emotion is said to be much cooler than anger and disgust (Izard, 1977; Rozin et al., 1999). Some have even questioned whether contempt is a sentiment (i.e., a standing attitude) rather than an experienced state emotion (for reviews see Fischer & Giner-Sorolla, 2016, and Malle et al., 2018). In terms of consequences, contempt can be associated with cognitive changes, in which an individual is treated as having less worth within future interactions (Oatley & Johnson-Laird, 1996). For the behavioral tendencies and actions, evidence has been mixed in terms of whether contempt is associated with avoidance and/or attack tendencies (Malle et al., 2018). For example, it has been found that in the short term and long term contempt can result from unresolved anger and can lead to social exclusion behaviors (Fischer & Roseman, 2007). Also, recent evidence indicates that disgust may be a better predictor than contempt for nonnormative collective action tendencies, such as violent protest (Noon, 2019). Disgust has also been shown to be a better predictor of dehumanizing beliefs and action tendencies than contempt (Giner-Sorolla & Russell, 2020). In summary, contempt seems to be a less straightforward moral emotion, in terms of its elicitors and consequences. Additionally, it may be part of the experience of disgust and anger, or more
similar to an emotion like hatred, due to its longevity. In short, even though contempt is relevant to morality, it may not be a distinct emotion.

10.2.2 Anger

Next, I turn to the moral nature of anger. As a moral emotion, anger probably has the most long-standing history in the field, besides that of empathy. Mounting evidence demonstrates that anger can often be a moral emotion for two reasons. First, there seem to be common triggers of anger, which are often linked to moral situations and contexts (see Lomas, 2019, and P. S. Russell & Giner-Sorolla, 2013, for a review). These contextual factors can intensify or mitigate anger depending on the situation at hand. Second, the common belief that anger is a negative emotion that leads to aggression (i.e., it does bad) is questionable, with growing research refuting this assumption. Specifically, evidence indicates that anger can lead to positive outcomes in some circumstances. Below I outline the evidence for both reasons for designating anger as a typically moral emotion.

Over decades of research, a clear connection has been made between anger and its cognitive elicitors, which are often linked to moral situations. Anger has been linked with the appraisals of goal blockage, other-blame, and unfairness (Cova et al., 2013; Lazarus, 1991; Roseman et al., 1996; Smith & Ellsworth, 1985; Wranik & Scherer, 2010). In the moral realm, anger is elicited in response to actual or symbolic harm (Rozin et al., 1999) and especially intentional harm (Cova et al., 2013; P. S. Russell & Giner-Sorolla, 2011). Anger has also been associated with attributions of responsibility and blame (Alicke, 2000; Goldberg et al., 1999; Tetlock et al., 2007), and there appears to be a cyclical relationship between anger and these appraisals. Finally, anger can also be reduced if it is felt that the behavior was carried out in the service of a greater good (Darley et al., 1978). Thus, all of these appraisals or elicitors are directly related to evaluations of a moral situation and its consequences.

Focusing on the consequences, anger is an approach-related emotion associated with appetitive motivations, as some positive emotions are (Carver & Harmon-Jones, 2009). The social function of anger is attained by “forcing a change in another person’s behavior,” in hopes of achieving a better outcome (Fischer & Roseman, 2007, p. 104). Whether or not the behavioral consequence of anger is hostile, it can nevertheless be argued that anger, in general, motivates individuals to approach the cause of their anger. Numerous studies have highlighted aggression as a common response to feeling angry, whether verbal and/or physical (Izard, 1977). In many instances, people are motivated to get back at individuals perceived as having wronged them (Haidt, 2003; Izard, 1977; Plutchik, 1980; Shaver et al., 1987). Anger encourages the person experiencing the emotion to either punish or rebuke the person who has offended them (Haidt, 2003; Nussbaum, 2004). But social cohesion or reparation is a more common consequence of anger than previously thought (Averill, 1983; Fischer & Roseman, 2007). It has been found that anger inspires persons to
engage in reparative behaviors, such as talking things over with the transgressor, particularly in the long term (Fischer & Roseman, 2007; Weber, 2004). Anger has also been shown to be a key motivator for collective action, particularly normative collective action, such as signing petitions and engaging in peaceful protest (Sasse et al., 2020; Tausch et al., 2011).

The reason anger can lead to such different behaviors is that anger varies depending on the current context. That is, it is a contextually or situationally dependent emotion, as is evident from the mitigating factors that can influence whether anger is experienced and its intensity, as reviewed earlier in this section. Not only does the context impact the experience of anger, but it also impacts how people respond to their anger. For example, relationships between transgressors and victims influence both the intensity of anger and the resulting actions (Fischer & Roseman, 2007; Kuppens et al., 2004, 2007; Weber, 2004). By contrast, contempt and disgust are less likely to be elicited by those who are close to us and are more likely to elicit rejection consistently, whereas anger is more likely to occur in close relationships and groups and elicit variable behavioral responses (Fischer & Roseman, 2007; Hutcherson & Gross, 2011). Anger seems to play a crucial role in both interpersonal relationships and social and group contexts (Cottrell & Neuberg, 2005), by eliciting approach behaviors that can include hostility; but generally anger can encourage reform or change, especially in the long term.

Another factor that may influence one’s anger, and the associated response, is social accountability, that is, whether individuals feel that their actions will impact others (Averill, 1983). Thus, when persons feel accountable, they will be less likely to respond automatically and thoughtlessly to their anger. It has been argued that social accountability reduces the impact of anger (Lerner et al., 1998). Therefore, persons are motivated to respond appropriately and constructively to their anger because, if they do not, it can have extremely negative consequences for them and others (Izard, 1977). Evidence surrounding both accountability and the nature of relationships suggests that anger is not just elicited in the moment but can lead individuals to consider how their anger, and associated response, may impact future relationships with other individuals and groups. Thus, anger focuses on the future and can elicit long-term change, which can result in positive outcomes. In summary, evidence suggests that anger is typically a moral emotion due to both its moral elicitors and positive consequences, such as reconciliation.

10.2.3 Disgust

Next, I discuss the controversial emotion of disgust. Disgust is an emotion that has captured the attention of many researchers. However, within the literature, there is still debate as to whether disgust is a moral emotion at all, whether it is similar to or different from core disgust, and whether it overlaps with anger and contempt. If we first look at the individual or personal level of disgust, rather than the moral or social realm specifically, we can see that theorists have
struggled to capture what elicits disgust, resulting in tautological explanations. For example, appraisals that elicit disgust include “distasteful stimuli” (Ortony et al., 1988) and “poisonous ideas” (Lazarus, 1991). This debate concerning what elicits disgust also extends into the moral realm. One of the key questions within this family is whether disgust is a distinct emotion and how far into the moral realm it extends. There are four main positions regarding what moral disgust is: 1) the general morality/character position, 2) the metaphorical use position, 3) the purity position, and 4) the bodily norm position (P. S. Russell & Giner-Sorolla, 2013). Here I will review the two extreme ends of this debate: the general morality/character position and the bodily norm position. Even though these two positions conflict in scope, they demonstrate that disgust is a person- or object-focused emotion that contrasts with anger, a situational or context-focused emotion.

Before covering these two positions, for comparison purposes, I will briefly review the purity and metaphorical use positions. The purity position argues that disgust is elicited by purity or divinity violations, such as cleaning a toilet with a national flag or eating one’s pet dog (Horberg et al., 2009; Rozin et al., 1999). However, there are very few purity violations that do not also involve bodily norm violations (e.g., sexual behaviors) and/or core disgust elicitors, such as bodily fluids or blood (see P. S. Russell & Giner-Sorolla, 2013, for a review of the issue). Thus, because of this overlap in violations, the bodily norm position may encapsulate the purity position. In contrast, the metaphorical use position argues that when disgust is expressed in the moral realm, individuals are just using the term “disgust” to express their true feeling of anger (Nabi, 2002; Royzman et al., 2014). Due to the importance of examining parallels between the general morality position and the bodily norm position, I will now focus on these two positions.

First, according to the general morality or character position, disgust can be elicited by a range of immoral actions or violations, such as cheating and unfairness. In support of this position, one of the most common definitions of disgust has been proposed by Rozin et al. (1993). They argue that the core function of disgust is the avoidance of taking contaminating or offensive objects into the mouth. Extending further, disgust has evolved to include sociomoral elicitors, in which disgust is used as a form of social control. At the sociomoral level, disgust is elicited in response to individuals who appear as if they cannot give back to society and/or have deep character flaws. Based on this general morality hypothesis, individuals or groups can elicit disgust when they have done something that is morally wrong or does not fit in with their society. Supporting this view, Jones and Fitness (2008) argue that individuals are physically repulsed by moral transgressors who use deception and/or abuse their power. Therefore, according to this definition, an individual can be deemed disgusting if they have engaged in despicable behavior. Both accounts, Rozin et al. (1993) and Jones and Fitness (2008), make it difficult to distinguish moral disgust from anger by associating disgust broadly with most norm violations and/or deceptive behavior. These definitions are then problematic because anger is just as likely to arise in these situations, making it difficult to distinguish anger
and disgust’s individual effects. More recently, it has been argued that disgust is elicited by someone who has a bad character or has done something that shows bad character. For example, Giner-Sorolla and Chapman (2017) found that disgust is elicited by bad character, whereas anger focuses on the event. The researchers demonstrated this across several studies by varying violation types (i.e., indicative of bad character or not) and manipulating relevant factors in an experimental design (i.e., harmful desire and harmful consequences). They found that the desire to cause harm (an indicator of bad character) was predictive of disgust, while harmful consequences were more closely related to anger. Conceptually, the triggers of moral disgust, according to the general morality or bad character position, seem to be just as tautological as the triggers of nonmoral core disgust, since these elicitors just connote that a person or action is really bad. Additionally, one problem with research in support of this position is that most studies still primarily rely on self-report of emotion terms or facial endorsement, that is, participants responding to whether an emotion expression corresponds to how they are feeling (P. S. Russell & Giner-Sorolla, 2013). This is particularly problematic if researchers are asking people to report their “moral disgust,” as they may be artificially increasing the importance of this term (P. S. Russell et al., 2013). Relatedly, it has been found that when the physical sensations or action tendencies of disgust are not measured, anger, rather than disgust, is elicited by divinity violations unrelated to the body or pathogens (Royzman et al., 2014).

In contrast to the previous arguments, according to the bodily norm position, disgust has a very specific function, which is to govern norms regarding the body, particularly norms about sexual behaviors and eating (e.g., bestiality, incest, and pedophilia). In these contexts, disgust tends to be elicited by a categorical judgment as to whether the behavior is taboo or not. In these contexts, disgust appears to be an unreasoning emotion that gives rise to inflexible thoughts and behaviors, namely avoidance and purification (P. S. Russell & Giner-Sorolla, 2013). People also find it difficult to justify their disgust in these contexts, instead providing tautological reasons, such as: “It’s just disgusting.” By contrast, in response to other sociomoral violations, such as harm and unfairness, disgust appears to heavily overlap with anger and does not appear to have the same detrimental consequences as disgust experienced in reaction to bodily norm violations (such as incest). Among its consequences, disgust encourages avoidance, purification, and expulsion of objects, other individuals, or groups (see P. S. Russell & Giner-Sorolla, 2013, for a review). Recent evidence also indicates that disgust may be associated with indirect aggression (Tybur et al., 2020) and nonnormative collective action (Noon, 2019).

The reason we should care whether something is truly “morally” disgusting (or whether a different emotion is elicited, such as anger) is that disgust is a “sticky” emotion. For example, Rozin and colleagues have found that disgusting qualities can be transferred to different objects based on the laws
of sympathetic magic (Rozin et al., 1986; Rozin et al., 1990; Rozin et al., 1992; Rozin & Nemeroff, 2002). The first law of sympathetic magic holds that “once in contact, always in contact”; therefore, disgusting qualities cannot be eliminated once they have been transferred (e.g., a sweater worn by Hitler or someone with AIDS will remain disgusting) (Rozin et al., 1986; Rozin et al., 1992). The second law, the law of similarity, holds that “the image equals the object.” This law can explain why an object that is similar in shape to an inherently disgusting object would also be deemed disgusting (e.g., chocolate that is in the shape of dog poop). These laws of sympathetic magic also imply that the effects of contagion are insensitive to dose (e.g., it doesn’t matter how long the sweater was worn by someone). It has been found that individuals engage in avoidance and purification behaviors when disgusting qualities are transferred to a previously neutral object (e.g., Rozin et al., 1986). Additionally, when asked to explain these behaviors, persons admitted that they could not come up with reasons and could not deny that their behaviors were based on irrational thoughts. This evidence suggests that core disgust can have transference or contagion effects.

Evidence also suggests that moral contagion effects can occur via disgust (Eskine et al., 2013). Specifically, Eskine and colleagues found that after direct or indirect contact with someone who had engaged in an immoral transgression (e.g., lying or cheating), people experienced more guilt, suggesting a moral transfer effect. This effect was moderated by disgust sensitivity (i.e., an individual difference in the propensity to experience disgust); in other words, those with higher levels of disgust sensitivity were more likely to experience the moral transfer of guilt than those with lower levels of disgust sensitivity. However, the researchers did not measure whether feelings of state disgust were experienced by participants and/or the original transgressor, which is necessary for future research to establish whether moral disgust truly transfers interpersonally, that is, between individuals. This is important since recent evidence suggests that these moral contagion effects may occur because of reputation concerns, not necessarily because of disgust (Kupfer & Giner-Sorolla, 2021). Specifically, these authors found that participants avoided morally tainted objects because of concerns about how they would appear to others if the object was on public display, more so than from concerns about coming into contact with the disgusting objects.

Another reason to be concerned with whether something is actually morally disgusting is that disgust has been found to have an automatic influence on moral judgment. For example, Wheatley and Haidt (2005) elicited unconscious disgust using hypnosis, which made moral judgments more severe. Similarly, Schnall et al. (2008) found that disgust from an outside source, that is, ambient disgust, had the same effect on moral judgments. However, a meta-analysis suggests that these effects may be smaller than previously thought, or even nonexistent (Landy & Goodwin, 2015). Indeed, large-scale replication studies have failed to replicate the original effect when disgust is elicited by taste (Ghelfi et al., 2020). This ambiguous evidence regarding whether disgust impacts or amplifies our moral judgments casts doubt on the claim that disgust
is directly connected to moral concerns. In summary, disgust may sometimes co-occur with moral concerns, but its standing as a moral emotion is open to question.

10.3 Self-Conscious Emotions

As with the other-condemning emotions, researchers have questioned the distinctiveness of the self-conscious emotions. Additionally, there has been mixed evidence in terms of whether the self-conscious emotions are always associated with positive consequences. Emotions that belong to this family of (moral) emotions include shame, regret, guilt, pride, and embarrassment. These emotions are secondary in nature, meaning they are more complex, normally less automatic, and require self-awareness (Haidt, 2003; Tangney et al., 2007). We typically feel these emotions when we have experienced some kind of self-failure (for example, feeling as if you have not achieved something or failed to act in the way that you should have), which can be moral in nature (Tracy & Robins, 2006). We feel these emotions when we have either engaged in some moral wrongdoing (e.g., shame and guilt) or when we have acted in a morally superior way (e.g., pride). Extending further, we can feel these self-conscious emotions when reflecting on our own group’s present or past behavior (Branscombe & Doosje, 2004; Lickel et al., 2011).

This section will focus on comparing shame and guilt’s elicitors and consequences. Evidence regarding their distinctiveness is mixed. There is an unresolved debate as to whether shame is always detrimental, and guilt always beneficial, as originally proposed by Tangney and colleagues (2007). I will not discuss pride here because it is elicited when we feel that we have done something good (see Tracy & Robins, 2007, for a review), which is different from the triggers of shame and guilt. Embarrassment and regret appear to have too much overlap with shame and guilt methodologically and conceptually, often being used as synonymous terms with shame and guilt, respectively (e.g., Lickel et al., 2005; Noon, 2019). Thus, embarrassment and regret will not be evaluated here either.

One of the most prominent outstanding issues is whether shame and guilt are the same emotion (see Teroni & Deonna, 2008, for a review), as Tomkins (1963) famously proposed. There are some important similarities between shame and guilt. First, they are often considered to be complex emotions that are uniquely experienced by humans and require some form of self-awareness or reflective thought (Tangney et al., 2007). Second, the self is the focus of these emotions, which provide immediate feedback on, or punishment for, one’s behavior (Tangney et al., 2007). In other words, people feel bad about what they have done. Third, they are often triggered by self-failures (Gausel & Leach, 2011). Finally, these emotions typically develop later and are believed to be secondary emotions, as they are tied to more complex goals and behaviors (Izard, 1971; Tangney & Dearing, 2002).

By contrast, there are some notable differences between shame and guilt. It has been found that shame relates to proscriptive morality (i.e., what we should not do, avoidance) and guilt relates to prescriptive morality (i.e., what we should do, approach) (Sheikh & Janoff-Bulman, 2010). This relationship was found at the trait level, where the authors found a positive correlation between the Behavioral Inhibition System and shame proneness and between the Behavioral Approach System and guilt proneness. Also, at the state level, priming a proscriptive orientation was found to increase shame, and priming a prescriptive orientation was found to increase guilt. Finally, when the authors manipulated the type of violations, proscriptive violations predicted feelings of shame and prescriptive violations predicted feelings of guilt.

People also frequently anticipate that they will feel either shame or guilt. In the extended theory of planned behavior, anticipated regret or guilt and moral norms are additional factors that can explain whether we engage in certain behaviors (Rivis et al., 2009). A recent systematic review found that in the context of women’s reactions to breastfeeding, which is often perceived as a moralized issue, women commonly anticipate and experience shame and guilt about the way they choose to feed their baby and about public breastfeeding (P. S. Russell et al., 2021). Women experience guilt when they feel as if they have not acted in the way that they should have – for example, if they have not reached their feeding goals, or if they feel like a bad mother. This evidence suggests that shame and guilt are focused on different kinds of injunctive norms.

Additionally, shame and guilt show some unique parallels with disgust and anger, respectively, which suggests that shame and guilt are distinct. For instance, shame and disgust are believed to be avoidance emotions, while guilt and anger are approach emotions (Leach, 2017; P. S. Russell & Giner-Sorolla, 2013). It has also been argued that shame and disgust are bodily-focused emotions, while guilt and anger are focused on harm and fairness (Nussbaum, 2004). Like the other-condemning emotions, self-conscious emotions can be experienced in the moment as states and can exist as dispositions or traits (i.e., shame or guilt proneness; Cohen et al., 2011). Recent evidence has also shown that core disgust sensitivity and contamination concerns are related to shame proneness, whereas moral disgust sensitivity is related to guilt proneness (Terrizzi Jr. & Shook, 2020). Research has also found that anger and disgust can socially cue guilt and shame, respectively (Giner-Sorolla & Espinosa, 2011). Specifically, it was found that after exposure to an angry expression, participants reported feeling more guilt, while after exposure to a disgusted facial expression, participants reported feeling more shame. Due to parallels between disgust and shame, this evidence may suggest that, like disgust, shame is a less typical moral emotion. By contrast, like anger, guilt may be a more typical moral emotion.

Until recently, shame has been positioned as a bad or detrimental emotion in comparison to guilt. In terms of elicitors, some have argued that shame and guilt are triggered by
similar types of moral violations, but what differs is the appraisal of the situation or wrongdoing (Tangney & Dearing, 2002; Tangney et al., 2007). Specifically, these findings indicate that shame is more focused on global negative beliefs about the self (“I am bad”), while guilt is more focused on the action or event (“I did a bad thing”). Additionally, it has been found that shame is associated with internal, stable, and uncontrollable attributions (i.e., lack of ability as the cause of self-failure), while guilt is associated with internal, unstable, and controllable attributions (i.e., lack of effort as the cause of self-failure; Tracy & Robins, 2006). It has also been argued that shame is triggered by concerns of image or reputation (Sznycer, 2019). Others have contended that an important distinction between the two emotions is that shame relates to values whereas guilt is associated with norm violations (Teroni & Deonna, 2008). These findings do not fully align with the categorical distinction between global self versus action appraisals originally proposed by Tangney but instead suggest more variability in terms of the elicitors of these emotions.

An additional distinction that requires more attention is whether guilt and shame relate to different behaviors or consequences. The general assumption is that shame is an avoidance emotion linked with hiding, denying, and escaping (Tangney & Dearing, 2002), while guilt is an approach emotion linked with reparative and confession behaviors (Lickel et al., 2005; Tangney et al., 2007). Shame proneness (i.e., an individual difference or disposition to feel shame more intensely) is also more closely related to detrimental outcomes related to the self, such as poor self-esteem, depression, and eating disorders (see Tangney et al., 2007, for a review). Additionally, shame proneness increases the likelihood of engaging in risky behaviors, while guilt has the opposite effect (Tangney et al., 2007). In comparison to shame, guilt has a long-standing history of being tied to compensatory behaviors or apologies (Doosje et al., 1998) and collective action (Becker et al., 2011; Tausch et al., 2011). Guilt has also been found to relate to social improvement (Gausel & Leach, 2011). However, guilt has also been linked to less functional behaviors, such as self-punishment when someone feels they have done something wrong (Inbar et al., 2013). Additionally, some evidence has shown that after controlling for shame, guilt does not always have such a strong association with reparations and apologies as other evidence suggests (Giner-Sorolla et al., 2011; Iyer et al., 2007).

Scholars have also suggested that shame may have more diverse relationships with behavior and motivation than previously thought (Gausel & Leach, 2011). Specifically, in their review, Gausel and Leach (2011) found that, in response to moral self-failures, when an individual focuses on specific events or attributions, this appraisal triggers feelings of shame and the need to self-improve. In contrast, when an individual has experienced moral self-failure and is focused on the global self, this appraisal triggers feelings of inferiority and defensive behaviors, such as avoidance. Therefore, according to this view, it is feelings of inferiority, not shame, that result in negative defensive behaviors. Further clarifying the role of guilt and shame in constructive
behaviors, a meta-analysis by Leach and Cidam (2015) identified that shame is more likely to be associated with positive outcomes when the situation seems reparable, in terms of either cause or consequence, while guilt is associated with positive outcomes regardless of how reparable the situation is. Related to this point, when moral shame (triggered by one’s actions violating moral norms) and image shame (triggered by a tarnished social image) are distinguished from one another, this distinction provides evidence against the claim that shame always leads to negative behavioral effects (Rees et al., 2013). Specifically, in this research, it was found that image shame triggered by an in-group’s historical transgression (e.g., Germans’ role in the Holocaust) was associated with social distance from an unrelated victimized minority group (i.e., foreigners). By contrast, moral shame was found to be associated with support for foreigners. Similarly, it has been found that, even longitudinally, moral shame and image shame are predictive of different types of behaviors, with image shame being related to negative behaviors and moral shame being related to positive behaviors (Allpress et al., 2014). In this research, it was also found that guilt was inconsistently related to positive behaviors. This evidence suggests that shame may not always lead to negative outcomes in response to moral self-failures; thus, the damage to the self and social relations may not always occur. As a result, shame can be a functional moral emotion when the situation seems reparable. Also, as with anger, the behavioral responses that are associated with guilt vary, but the conclusion seems to be the same, namely, that the overall response to guilt can be positive under certain circumstances. Cumulatively, the evidence suggests that when shame is focused on moral norms specifically, rather than one’s image or reputation, it can be considered a moral emotion, and that guilt is normally a moral emotion, due to its elicitors and consequences.

10.4 Positive Emotions

Up to this point, the chapter has focused on the moral status of several negative emotions, examining the similarities and differences between emotions within the moral families. It is now important to examine whether any positive emotions – that is, emotions that typically feel and/or do good – can be moral emotions. Also, it is important to examine whether these emotions can be considered separate constructs. Specifically, I will now turn toward comparing two other-suffering emotions (empathy and compassion) and two other-praising emotions (awe and elevation). Gratitude can also be considered to belong to the latter family but will not be discussed here because it is elicited by the perception that someone else has done something good or beneficial for the individual experiencing gratitude (Haidt, 2003). Thus, the self is more involved in the experience of gratitude than in the experience of awe and elevation, which are more focused on the object or another individual.


10.4.1 Empathy and Compassion

Empathy and compassion are generally considered to be other-suffering emotions, meaning that they focus our attention on others and motivate us to care for them. Like anger, empathy is an established emotion in the field of moral psychology (Haidt, 2003). At face value, many consider empathy to be central to morality. However, a central problem is that there have been far too many definitions of empathy, which makes it difficult to study. For example, a review of the prior literature identified 43 different definitions of empathy (Cuff et al., 2016). Additionally, among these definitions of empathy, there is a large amount of overlap with the constructs of compassion and sympathy. Specifically, empathy can be defined as feeling the same or a similar emotion as another individual or group, or understanding how another person is feeling (Decety & Jackson, 2004; Eisenberg et al., 2006). It is recognized that there are two components of empathy: affective empathy, which is feeling the same emotion as another person, and cognitive empathy, which is understanding what another individual or group is feeling (Cuff et al., 2016). However, the latter (cognitive empathy) overlaps with the cognitive construct of perspective taking, that is, understanding what another individual or group is thinking, which is not an emotion.

Compassion is conceptualized as an emotion we feel when we are concerned for another person’s suffering (Goetz et al., 2010). Overlapping with compassion, sympathy is defined as feeling compassionate or concerned for another individual’s or group’s current state (Eisenberg, 1988). Therefore, compassion seems to subsume feelings of sympathy. Compassion is believed to be different from empathy in that individuals are not experiencing the same emotion as the target person (Tangney et al., 2007). However, there is still overlap between empathy and compassion, in that we are concerned with or focused on what another individual or group is feeling.

There is also overlap in the consequences of compassion and empathy, in that both emotions have been linked to positive social outcomes. Generally, empathy (and also perspective taking) is believed to be essential for social relations, encouraging prosocial behaviors and discouraging hostile action (Batson & Ahmad, 2009). Empathy facilitates the humanizing of others; thus, it can be seen to oppose the other-condemning emotions, particularly disgust and contempt, which encourage dehumanization (Giner-Sorolla & P. S. Russell, 2019). For example, in intractable conflicts, empathy and compassion have been shown to reduce aggression, increase positive attitudes as well as helping behavior, and increase the desire for reconciliation and forgiveness (Klimecki, 2019). Empathy can lead to helping behaviors, situational attributions, and seeing more similarities with others, or self-other overlap (Batson & Ahmad, 2009). Empathy can also reduce stereotyping, prejudice, and hostile action (Batson & Ahmad, 2009). However, other evidence has found that people may find it difficult to experience empathy, or may even avoid it, because it is physically, emotionally, and cognitively taxing (Cameron et al., 2019; Hodges & Klein, 2001). Additionally,
we often experience “empathy failures” when people are dissimilar to us and/or are rivals (Bloom, 2017; Zaki & Cikara, 2015). Compassion has positive effects similar to those of empathy. For example, compassion has been shown to enhance moral expansiveness, that is, the inclusion of more beings in our moral circle (Crimston et al., 2022). However, “compassion fade” can occur (i.e., compassion can be reduced or eliminated) when there are multiple victims rather than a single victim (Västfjäll et al., 2014). Thus, in conclusion, there is considerable overlap between the other-suffering emotions, and they do not always lead to the best social outcomes, either cognitively or behaviorally. As a result, even though empathy and compassion can be moral, as they often have positive consequences, there are instances when people do not feel these emotions despite typical triggers.

10.4.2 Awe and Elevation

This then leads to the final type of moral emotions, the other-praising emotions. However, it is questionable whether the other-praising emotions of elevation and awe are unique moral emotions or rather fall under the umbrella term of “kama muta” (see Bartos et al., 2020; Zickfeld et al., 2017). Kama muta is similar to feeling moved, and the most typical feature of this experience is a heightened sense of communal sharing (Fiske, 2020). It is also described as an emotion that elicits physiological sensations like those of elevation and awe, such as chills and a warm feeling in the chest (Fiske, 2020). By contrast, some have argued that elevation, awe, and admiration are distinct in terms of what elicits them, how they are experienced, and their consequences (Haidt, 2003; Onu et al., 2016).

Admiration is an other-focused emotion (Onu et al., 2016). However, admiration is distinct in that it is believed by some to be elicited when we see someone exceed expectations of skill or talent (Algoe & Haidt, 2009; Onu et al., 2016). Thus, admiration is often focused on competency rather than morality, and as such, it seems less relevant to morality. It is also questionable whether it is elicited distinctly from the other other-praising emotions of elevation and awe, or whether it is just part of these experiences, as there is overlap in terms of elicitors and consequences. Admiration has been linked with prosocial outcomes and has been shown to reduce prejudice. Specifically, admiration facilitates social change (Sweetman et al., 2013). Admiration also underlies reductions in both sexual and racial prejudice through intergroup contact (Seger et al., 2017). Since some have questioned whether admiration is a moral emotion at all, and admiration appears to overlap with awe and elevation, only the latter two emotions will be compared in further detail.

Both awe and elevation are believed to operate as a “hive switch,” encouraging people to be less selfish and more prosocial, by broadening attention and regard for others (Haidt, 2012; Pohling & Diessner, 2016). In terms of the experience or elicited bodily sensations, they are both associated with feeling moved in some way and described as feeling warm in the chest, tingly, and
having goosebumps (Algoe & Haidt, 2009). Awe is elicited in response to perceived vastness or by things that transcend previous experiences, or more specifically exceed our expectancies (Gocłowska et al., 2023). It can be elicited by a range of objects, including nature, landscapes, art, music, and religious experiences. Keltner and Haidt (2003) postulated that the awe experience can be categorized into five different kinds or flavors: threat, beauty, ability, virtue, and supernatural causality. What is striking about the flavors of awe (Keltner & Haidt, 2003) is that ability overlaps with admiration’s elicitors and virtue overlaps with elevation’s elicitors (described below), which captures the clear overlap and co-occurrence of these emotions. Thus, there is overlap between awe and elevation, and between awe and admiration. What is also apparent from awe’s elicitors or flavors is that the experience of awe is not always entirely positive, since it is connected to feelings of threat (Chaudhury et al., 2021). Finally, awe is rarely elicited by what other people are doing or have done, or by moral concerns; instead, it can be triggered by physical objects (e.g., nature).

By contrast, elevation is elicited by witnessing acts of uncommon goodness or moral beauty, that is, someone acting in an exceptionally moral way (Haidt, 2003). More specifically, elevation is triggered when we see someone else assist another individual who is “poor, sick, or stranded in a difficult situation” (Thomson & Siegel, 2017, p. 629). As a result, it could be argued that elevation is a mixed emotion: even though it is primarily positive, it has a negative undertone, as it involves rising above some negative experience. It also has the potential to trigger self-comparisons that are not always positive, as we may not feel that we are good enough in comparison to the hero that elicits elevation. Prior research has already identified that the experience of elevation is moderated by one’s own moral identity (Aquino et al., 2009). Specifically, it was found that those who are higher in moral identity are more likely to experience elevation intensely. This suggests that when unflattering self-comparisons are possible, those with higher moral identity may be more threatened, and elevation is less likely to be experienced, or may even backfire.

In terms of consequences, awe is characterized by numerous positive outcomes (see Gottlieb et al., 2018, and Keltner & Haidt, 2003, for reviews). First, awe triggers a need for accommodation to change the current circumstances one has witnessed, for example, by eliciting the need to include more beings in our moral or social circle. Second, awe increases critical thinking, promotes consideration of additional perspectives, and can trigger feelings of humility. Third, awe decreases selfishness and triggers the need to connect with others. Fourth, awe increases positive feelings and well-being. Awe has also been shown to increase environmental concern (Yang et al., 2018) and charitable giving (Guan et al., 2019). Awe also triggers the belief that time is slowing down and, as a result, awe increases one’s willingness to dedicate more time to others (Rudd et al., 2012). Recently, awe has been shown to reduce ideological conviction and increase willingness to associate with others with opposing views (Stancato & Keltner, 2019). Therefore, even though awe in some instances can be a nonsocial rather than moral emotion based on its elicitors, it can result in numerous
positive social outcomes, which foster connectedness and social harmony. Therefore, in terms of doing good, awe does seem to be a good candidate for a moral emotion.

Like awe, elevation has also been shown to have numerous positive outcomes. Elevation encourages helping behaviors (Schnall & Roper, 2012; Van de Vyver & Abrams, 2015), imitation of positive role models (Diessner et al., 2013), and the desire to share more overlap with others (see Pohling & Diessner, 2016; Thomson & Siegel, 2017; Van de Vyver & Abrams, 2015, for reviews). Haidt has argued that elevation operates in opposition to disgust in social relations (Haidt, 2003; Lai et al., 2014). For example, previous research found that elevation reduces sexual prejudice but not racial prejudice, which the authors argue was explained by disgust being the basis of sexual prejudice (Lai et al., 2014). However, recent research that has aimed to replicate this effect found that admiration is also effective at reducing sexual prejudice (Bartos et al., 2020). Interestingly, elevation was found to be positively associated with disgust in one of the studies. This latter result may support the idea that elevation is a mixed emotion and the idea that, to see the true benefit of elevation, negative emotions such as disgust need to be diminished. Future research may focus on elevation and awe as prejudice reduction tools, to gain a better understanding of when, where, and why these emotions lead to positive social outcomes.

In summary, a common feature of awe and elevation is that they are triggered by witnessing something exceptional or unusual. But what differentiates these emotions is their focus: Elevation focuses on morality, whereas awe often focuses on natural objects and scenes, such as landscapes. Thus, in terms of elicitors, awe shows less of an obvious connection with morality. However, both emotions are related to feelings of warmth, through the consequences that they typically elicit and their experience (i.e., feeling chills and goosebumps). It could also be argued that admiration is commonly triggered within both elevation and awe experiences. For example, when we witness someone engage in a selfless act (i.e., a situation that can trigger elevation), it is virtually impossible not to feel admiration as well. The same is true for awe, in that when viewing an exceptionally beautiful scene or work of art, we also come to admire the space that we are in or the piece of artwork. Thus, in this family of emotions, there is a large amount of overlap in terms of both elicitors and consequences. From this one could conclude that these emotions fall under the umbrella term of “kama muta” (i.e., feeling moved) and thus are equally beneficial in terms of promoting positive social outcomes. Thus, while both elevation and awe are relevant to morality because of their consequences, the elicitors of awe are not strongly related to morality.

10.5 Conclusion

To conclude, this chapter provided an extension of previous models of moral emotions, by highlighting the importance of examining both cognitive
and behavioral consequences of emotions when determining the degree to which an emotion can be moral. Even though there have been some notable findings in terms of the likely consequences of the moral emotions examined in this chapter, it is evident that further research is needed on this topic, shifting focus away from what elicits moral emotions.

The unique components of four different emotion families have been examined in this chapter: other-condemning, self-conscious, other-suffering, and other-praising (see Table 10.1 for a summary). Within the other-condemning emotion family, contempt overlaps considerably with anger and disgust. Disgust seems to be associated with morality, but according to the qualities of moral emotions proposed here, it is not typically a moral emotion, due to its negative consequences. By contrast, anger is often a moral emotion regarding both its elicitors and consequences. Future research should focus on the positive consequences of anger and when these types of effects can be cultivated. Among the self-conscious emotions of shame and guilt, neither emotion shows a straightforward path to positive outcomes. Of the two emotions, however, it still seems that guilt is more likely to lead to moral consequences and improved social relations. For empathy and compassion, since there is still so much ambiguity in defining what these emotions are, it is difficult to determine whether they are moral emotions and have positive consequences. Also, as reviewed here, there is growing evidence of the potential negative impact of experiencing empathy or compassion. Finally, elevation and awe show considerable overlap in their consequences, often leading to prosocial outcomes, but there are differences between their elicitors, in that awe is rarely triggered by moral situations whereas elevation is often elicited by moral situations that can trigger self-comparisons, which can backfire.

In summary, from this analysis of the moral emotions, there appears to be more overlap and ambiguity for the positive emotions than for the negative emotions. It is also evident that anger and guilt are the best candidate moral emotions in terms of their tendency to foster improved social relations, which aligns with previous analyses of moral emotions. In comparison, some have positioned emotions like empathy, compassion, shame, and disgust as being essential to morality. However, other evidence suggests that the moral character of these emotions is questionable. Hopefully, this review will encourage further research on both the cognitive and behavioral consequences of moral emotions.

References Algoe, S. B., & Haidt, J. (2009). Witnessing excellence in action: The ‘other-praising’ emotions of elevation, gratitude, and admiration. The Journal of Positive Psychology, 4(2), 105–127. Alicke, M. D. (2000). Culpable control and the psychology of blame. Psychological Bulletin, 126(4), 556–574.

Moral Emotions: Are They Both Distinct and Good?

Allpress, J. A., Brown, R., Giner-Sorolla, R., Deonna, J. A., & Teroni, F. (2014). Two faces of group-based shame: Moral shame and image shame differentially predict positive and negative orientations to ingroup wrongdoing. Personality and Social Psychology Bulletin, 40(10), 1270–1284. Aquino, K., Freeman, D., Reed, A. II, Lim, V. K. G., & Felps, W. (2009). Testing a social-cognitive model of moral behavior: The interactive influence of situations and moral identity centrality. Journal of Personality and Social Psychology, 97(1), 123–141. Averill, J. (1983). Studies on anger and aggression: Implications for theories of emotion. American Psychologist, 38(11), 1145–1160. Avramova, Y. R., & Inbar, Y. (2013). Emotion and moral judgment. WIREs Cognitive Science, 4(2), 169–178. Aznar, A., Tenenbaum, H., & Russell, P. S. (2021). Is moral disgust socially learned? Emotion, 23(1), 289–301. Barrett, L., & Russell, J. (2015). The psychological construction of emotion. Guilford Publications. Bartos, , S. E., Russell, P. S., & Hegarty, P. (2020). Heroes against homophobia: Does elevation uniquely block homophobia by inhibiting disgust? Cognition and Emotion, 34(6), 1123–1142. Batson, C. D., & Ahmad, N. Y. (2009). Using empathy to improve intergroup attitudes and relations. Social Issues and Policy Review, 3(1), 141–177. Becker, J. C., Tausch, N., & Wagner, U. (2011). Emotional consequences of collective action participation: Differentiating self-directed and outgroup-directed emotions. Personality and Social Psychology Bulletin, 37(12), 1587–1598. Bloom, P. (2004). Descartes’ baby. Basic Books. Bloom, P. (2017). Empathy and its discontents. Trends in Cognitive Sciences, 21(1), 24–31. Branscombe, N. R., & Doosje, B. (2004). Collective guilt: International perspectives. Cambridge University Press. Cameron, C. D., Hutcherson, C. A., Ferguson, A. M., Scheffer, J. A., Hadjiandreou, E., & Inzlicht, M. (2019). Empathy is hard work: People choose to avoid empathy because of its cognitive costs. Journal of Experimental Psychology: General, 148(6), 962–976. Carver, C. S., & Harmon-Jones, E. (2009). Anger is an approach-related affect: Evidence and implications. Psychological Bulletin, 135(2), 183–204. Chaudhury, S. H., Garg, N., & Jiang, Z. (2021). The curious case of threat-awe: A theoretical and empirical reconceptualization. Emotion, 22(7), 1653–1669. Cohen, T. R., Wolf, S. T., Panter, A. T., & Insko, C. A. (2011). Introducing the GASP scale: A new measure of guilt and shame proneness. Journal of Personality and Social Psychology, 100(5), 947–966. Cohen-Chen, S., Pliskin, R., & Goldenberg, A. (2020). Feel good or do good? A valence–function framework for understanding emotions. Current Directions in Psychological Science, 29(4), 388–393. Cottrell, C. A., & Neuberg, S. L. (2005). Different emotional reactions to different groups: A sociofunctional threat-based approach to “prejudice.” Journal of Personality and Social Psychology, 88(5), 770–789. Cova, F., Deonna, J., & Sander, D. (2013). The emotional shape of our moral life: Anger-related emotions and mutualistic anthropology. Behavioral and Brain Sciences, 36(1), 86–87.

241

242

          

Cowen, A. S., & Keltner, D. (2017). Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proceedings of the National Academy of Sciences, 114(38), E7900–E7909. Crimston, C. R., Blessing, S., Gilbert, P., & Kirby, J. N. (2022). Fear leads to suffering: Fears of compassion predict restriction of the moral boundary. British Journal of Social Psychology, 61(1), 345–365. Cuff, B. M. P., Brown, S. J., Taylor, L., & Howat, D. J. (2016). Empathy: A review of the concept. Emotion Review, 8(2), 144–153. Danovitch, J., & Bloom, P. (2009). Children’s extension of disgust to physical and moral events. Emotion, 9(1), 107–112. Darley, J. M., Klosson, E. C., & Zanna, M. P. (1978). Intentions and their contexts in the moral judgments of children and adults. Child Development, 49(1), 66–74. Decety, J., & Jackson, P. L. (2004). The functional architecture of human empathy. Behavior Cognitive Neuroscience Review, 3(2), 71–100. Diessner, R., Iyer, R., Smith, M. M., & Haidt, J. (2013). Who engages with moral beauty? Journal of Moral Education, 42(2), 139–163. Doosje, B., Branscombe, N. R., Spears, R., & Manstead, A. S. R. (1998). Guilty by association: When one’s group has a negative history. Journal of Personality and Social Psychology, 75(4), 872–886. Eisenberg, N. (1988). Empathy and sympathy: A brief review of the concepts and empirical literature. Anthrozoös, 2(1), 15–17. Eisenberg, N., Fabes, R. A., & Spinrad, T. L. (2006). Prosocial development. In N. Eisenberg, W. Damon, & R. M. Lerner (Eds.), Handbook of child psychology: Vol. 3. Social, emotional and personality development (pp. 646–718). Wiley. Ekman, P. (1999). Basic emotions. John Wiley & Sons Ltd. Ekman, P., O’Sullivan, M., & Matsumoto, D. (1991). Contradictions in the study of contempt: What’s it all about? Reply to Russell. Motivation and Emotion, 15(4), 293–296. Ellemers, N., van der Toorn, J., Paunov, Y., & van Leeuwen, T. (2019). The psychology of morality: A review and analysis of empirical studies published from 1940 through 2017. Personality and Social Psychology Review, 23(4), 332–366. Eskine, K. J., Novreske, A., & Richards, M. (2013). Moral contagion effects in everyday interpersonal encounters. Journal of Experimental Social Psychology, 49(5), 947–950. Fehr, B., & Russell, J. A. (1984). Concept of emotion viewed from a prototype perspective. Journal of Experimental Psychology: General, 113(3), 464–486. Fischer, A., & Giner-Sorolla, R. (2016). Contempt: Derogating others while keeping calm. Emotion Review, 8(4), 346–357. Fischer, A. H., & Roseman, I. J. (2007). Beat them or ban them: The characteristics and social functions of anger and contempt. Journal of Personality and Social Psychology, 93(1), 103–115. Fiske, A. P. (2020). The lexical fallacy in emotion research: Mistaking vernacular words for psychological entities. Psychological Review, 127(1), 95–113. Gausel, N., & Leach, C. W. (2011). Concern for self-image and social image in the management of moral failure: Rethinking shame. European Journal of Social Psychology, 41(4), 468–478. Ghelfi, E., Christopherson, C. D., Urry, H. L., Lenne, R. L., Legate, N., Fischer, M. A., Wagemans, F. M. A., Wiggins, B., Barrett, T., Bornstein, M., de Haan, B.,
Guberman, J., Issa, N., Kim, J., Na, E., O’Brien, J., Paulk, A., Peck, T., Sashihara, M., & Sullivan, D. (2020). Reexamining the effect of gustatory disgust on moral judgment: A multilab direct replication of Eskine, Kacinik, and Prinz (2011). Advances in Methods and Practices in Psychological Science, 3(1), 3–23. Giner-Sorolla, R., & Chapman, H. A. (2017). Beyond purity: Moral disgust toward bad character. Psychological Science, 28(1), 80–91. Giner-Sorolla, R., & Espinosa, P. (2011). Social cuing of guilt by anger and of shame by disgust. Psychological Science, 22(1), 49–53. Giner-Sorolla, R., Piazza, J., & Espinosa, P. (2011). What do the TOSCA guilt and shame scales really measure: Affect or action? Personality and Individual Differences, 51(4), 445–450. Giner-Sorolla, R., & Russell, P. S. (2019). Not just disgust: Fear and anger also relate to intergroup dehumanization. Collabra: Psychology, 5(1), Article 58. Gocłowska, M. A., Elliot, A. J., van Elk, M., Bulska, D., Thorstenson, C. A., & Baas, M. (2023). Awe arises in reaction to exceeded rather than disconfirmed expectancies. Emotion, 23(1), 15–29. Goetz, J. L., Keltner, D., & Simon-Thomas, E. (2010). Compassion: An evolutionary analysis and empirical review. Psychological Bulletin, 136(3), 351–374. Goldberg, J. H., Lerner, J. S., & Tetlock, P. E. (1999). Rage and reason: The psychology of the intuitive prosecutor. European Journal of Social Psychology, 29(5–6), 781–795. Gottlieb, S., Keltner, D., & Lombrozo, T. (2018). Awe as a scientific emotion. Cognitive Science, 42(6), 2081–2094. Guan, F., Chen, J., Chen, O., Liu, L., & Zha, Y. (2019). Awe and prosocial tendency. Current Psychology, 38(4), 1033–1041. Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834. Haidt, J. (2003). The moral emotions. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 852–870). Oxford University Press. Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage Books. Hodges, S. D., & Klein, K. J. K. (2001). Regulating the costs of empathy: The price of being human. The Journal of Socio-Economics, 30(5), 437–452. Horberg, E. J., Oveis, C., Keltner, D., & Cohen, A. B. (2009). Disgust and the moralization of purity. Journal of Personality and Social Psychology, 97(6), 963–976. Hutcherson, C. A., & Gross, J. J. (2011). The moral emotions: A social-functionalist account of anger, disgust, and contempt. Journal of Personality and Social Psychology, 100(4), 719–737. Inbar, Y., Pizarro, D. A., Gilovich, T., & Ariely, D. (2013). Moral masochism: On the connection between guilt and self-punishment. Emotion, 13(1), 14–18. Iyer, A., Schmader, T., & Lickel, B. (2007). Why individuals protest the perceived transgressions of their country: The role of anger, shame and guilt. Personality and Social Psychology Bulletin, 33(4), 572–587. Izard, C. E. (1971). The face of emotion. Appleton-Century-Crofts. Izard, C. E. (1977). Differential emotions theory. In C. E. Izard (Ed.), Human emotions (pp. 43–66). Springer US.

Izard, C. E. (1991). The psychology of emotions. Springer Science & Business Media. Jones, A., & Fitness, J. (2008). Moral hypervigilance: The influence of disgust sensitivity in the moral domain. Emotion, 8(5), 613–627. Keltner, D., & Haidt, J. (2003). Approaching awe, a moral, spiritual, and aesthetic emotion. Cognition and Emotion, 17(2), 297–314. Klimecki, O. M. (2019). The role of empathy and compassion in conflict resolution. Emotion Review, 11(4), 310–325. Kupfer, T. R., & Giner-Sorolla, R. (2021). Reputation management as an alternative explanation for the “contagiousness” of immorality. Evolution and Human Behavior, 42(2), 130–139. Kuppens, P., Van Mechelen, I., & Meulders, M. (2004). Every cloud has a silver lining: Interpersonal and individual differences determinants of anger-related behavior. Personality and Social Psychology Bulletin, 30(12), 1550–1564. Kuppens, P., Van-Mechelen, I., Smits, D. J. M., De Boek, P., & Ceulemans, E. (2007). Individual differences in patterns of appraisal and anger experience. Cognition and Emotion, 21(4), 689–713. Lai, C. K., Haidt, J., & Nosek, B. A. (2014). Moral elevation reduces prejudice against gay men. Cognition & Emotion, 28(5), 781–794. Landy, J. F., & Goodwin, G. P. (2015). Does incidental disgust amplify moral judgment? A meta-analytic review of experimental evidence. Perspectives on Psychological Science, 10(4), 518–536. Lazarus, R. S. (1991). Emotion and adaptation. Oxford University Press. Leach, C. W. (2017). Understanding shame and guilt. In L. Woodyat, E. L. Worthington, Jr., M. Wenzel, & B. J. Griffin (Eds.), Handbook of the psychology of selfforgiveness (pp. 17–28). Springer International Publishing AG. Leach, C. W., & Cidam, A. (2015). When is shame linked to constructive approach orientation? A meta-analysis. Journal of Personality and Social Psychology, 109(6), 983–1002. Lerner, J. S., Goldberg, J. H., & Tetlock, P. E. (1998). Sober second thought: The effects of accountability, anger, and authoritarianism on attributions of responsibility. Personality and Social Psychology Bulletin, 24(6), 563–574. Lickel, B., Schmader, T., Curtis, M., Scarnier, M., & Ames, D. R. (2005). Vicarious shame and guilt. Group Processes and Intergroup Relations, 8(2), 145–157. Lickel, B., Steele, R. R., & Schmader, T. (2011). Group-based shame and guilt: Emerging directions in research. Social and Personality Psychology Compass, 5(3), 153–163. Lomas, T. (2019). Anger as a moral emotion: A “bird’s eye” systematic review. Counselling Psychology Quarterly, 32(3–4), 341–395. Malle, B. F. (2021). Moral judgments. Annual Review of Psychology, 72, 293–318. Malle, B. F., Voiklis, J., & Kim, B. (2018). Understanding contempt against the background of blame. In M. Mason (Ed.), The moral psychology of contempt (pp. 79–105). Rowman & Littlefield. Nabi, R. L. (2002). The theoretical versus the lay meaning of disgust: Implications for emotion research. Cognition and Emotion, 16(5), 695–703. Noon, D. (2019). The antecedent and functions of group-based moral emotions. [Doctoral thesis, University of Surrey]. Nussbaum, M. C. (2004). Hiding from humanity: Disgust, shame, and the law. Princeton University Press.

Oatley, K., & Johnson-Laird, P. N. (1996). The communicative theory of emotions: Empirical tests, mental models, and implications for social interaction. In L. L. Martin & A. Tesser (Eds.), Striving and feeling: Interactions among goals, affect, and self-regulation (pp. 363–393). Lawrence Erlbaum Associates, Inc. Onu, D., Kessler, T., & Smith, J. R. (2016). Admiration: A conceptual review. Emotion Review, 8(3), 218–230. Ortony, A., Clore, G. L., & Collins, A. (1988). The cognitive structure of emotions. Cambridge University Press. Parrott, W. G. (Ed.). (2001). Emotions in social psychology. Psychology Press. Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In R. Plutchik & H. Kellerman (Eds.), Theories of emotion (pp. 3–33). Academic Press. Pohling, R., & Diessner, R. (2016). Moral elevation and moral beauty: A review of the empirical literature. Review of General Psychology, 20(4), 412–425. Rees, J. H., Allpress, J. A., & Brown, R. (2013). Nie wieder: Group-based emotions for in-group wrongdoing affect attitudes toward unrelated minorities. Political Psychology, 34(3), 387–407. Rivis, A., Sheeran, P., & Armitage, C. J. (2009). Expanding the affective and normative components of the theory of planned behavior: A meta-analysis of anticipated affect and moral norms. Journal of Applied Social Psychology, 39(12), 2985–3019. Roseman, I. J., Antoniou, A. A., & Jose, P. E. (1996). Appraisal determinants of emotions: Constructing a more accurate and comprehensive theory. Cognition and Emotion, 10(3), 241–277. Rottman, J., DeJesus, J., & Greenebaum, H. (2019). Developing disgust: Theory, measurement, and application. In V. LoBue, K. Pérez-Edgar, & K. A. Buss (Eds.), Handbook of emotional development (pp. 283–309). Springer Nature Switzerland AG. Royzman, E., Atanasov, P., Landy, J. F., Parks, A., & Gepty, A. (2014). CAD or MAD? Anger (not disgust) as the predominant response to pathogen-free violations of the divinity code. Emotion, 14(5), 892–907. Rozin, P., Haidt, J., & McCauley, C. R. (1993). Disgust. In M. Lewis, J. M. Haviland-Jones, & L.F. Barrett (Eds), Handbook of emotions (pp. 575–594). Guilford Press. Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The CAD triad hypothesis: A mapping between three moral emotions (contempt, anger, disgust) and three moral codes (community, autonomy, divinity). Journal of Personality and Social Psychology, 76(4), 574–586. Rozin, P., Markwith, M., & Nemeroff, C. (1992). Magical contagion beliefs and fear of AIDS 1. Journal of Applied Social Psychology, 22(14), 1081–1092. Rozin, P., Markwith, M., & Ross, B. (1990). The sympathetic magical law of similarity: Nominal realism and neglect of negatives in response to negative labels. Psychological Science, 1(6), 383–384. Rozin, P., Millman, L., & Nemeroff, C. (1986). Operation of the laws of sympathetic magic in disgust and other domains. Journal of Personality and Social Psychology, 50(4), 703–712. Rozin, P., & Nemeroff, C. (2002). Sympathetic magical thinking: The contagion and similarity ‘heuristics.’ Cambridge University Press. Rudd, M., Vohs, K. D., & Aaker, J. (2012) Awe expands people’s perception of time, alters decision making, and enhances well-being. Psychological Science, 23(10), 1130–1136.

Russell, J. A. (1991). Negative results on a reported facial expression of contempt. Motivation and Emotion, 15(4), 281–291. Russell, P. S., & Giner-Sorolla, R. (2011). Moral anger, but not moral disgust, responds to intentionality. Emotion, 11(2), 233–240. Russell, P. S., & Giner-Sorolla, R. (2013). Bodily moral disgust: What it is, how it is different from anger, and why it is an unreasoned emotion. Psychological Bulletin, 139(2), 328–351. Russell, P. S., Piazza, J., & Giner-Sorolla, R. (2013). CAD revisited: Effects of the word moral on the moral relevance of disgust (and other emotions). Social Psychological and Personality Science, 4(1), 62–68. Russell, P. S., Smith, D. M., Birtel, M. D., Hart, K. H., & Golding, S. E. (2021). The role of emotions and injunctive norms in breastfeeding: A systematic review and meta-analysis. Health Psychology Review, 16(2), 257–279. Sasse, J., Halmburger, A., & Baumert, A. (2020). The functions of anger in moral courage: Insights from a behavioral study. Emotion, 22(6), 1321–1335. Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34(8), 1096–1109. Schnall, S., & Roper, J. (2012). Elevation puts moral values into action. Social Psychological and Personality Science, 3(3), 373–378. Seger, C. R., Banerji, I., Park, S. H., Smith, E. R., & Mackie, D. M. (2017). Specific emotions as mediators of the effect of intergroup contact on prejudice: Findings across multiple participant and target groups. Cognition and Emotion, 31(5), 923–936. Shaver, P., Schwartz, J., Kirson, D., & O’Connor, C. (1987). Emotion knowledge: Further exploration of a prototype approach. Journal of Personality and Social Psychology, 52(6), 1061–1086. Sheikh, S., & Janoff-Bulman, R. (2010). The “shoulds” and “should nots” of moral emotions: A self-regulatory perspective on shame and guilt. Personality and Social Psychology Bulletin, 36(2), 213–224. Smith, C. A., & Ellsworth, P. C. (1985). Patterns of cognitive appraisal in emotion. Journal of Personality and Social Psychology, 48(4), 813–838. Stancato, D., & Keltner, D. (2019). Awe, ideological conviction, and perceptions of ideological opponents. Emotion, 21(1), 61–72. Sweetman, J., Spears, R., Livingstone, A., & Manstead, A. (2013). Admiration regulates social hierarchy: Antecedents, dispositions, and effects on intergroup behavior. Journal of Experimental Social Psychology, 49(3), 534–542. Sznycer, D. (2019). Forms and functions of the self-conscious emotions. Trends in Cognitive Sciences, 23(2), 143–157. Tangney, J. P., & Dearing, R. L. (2002). Shame and guilt. Guilford Press. Tangney, J. P., Stuewig, J., & Mashek, D. J. (2007). Moral emotions and moral behavior. Annual Review of Psychology, 58(1), 345–372. Tausch, N., Becker, J. C., Spears, R., Christ, O., Saab, R., Singh, P., & Siddiqui, R. N. (2011). Explaining radical group behavior: Developing emotion and efficacy routes to normative and nonnormative collective action. Journal of Personality and Social Psychology, 101(1), 129–148. Teper, R., Zhong, C. B., & Inzlicht, M. (2015). How emotions shape moral behavior: Some answers (and questions) for the field of moral psychology. Social and Personality Psychology Compass, 9(1), 1–14.

Teroni, F., & Deonna, J. A. (2008). Differentiating shame from guilt. Consciousness and Cognition: An International Journal, 17(3), 725–740. Terrizzi Jr., J. A., & Shook, N. J. (2020). On the origin of shame: Does shame emerge from an evolved disease-avoidance architecture? Frontiers in Behavioral Neuroscience, 14, Article 19. Tetlock, P. E., Visser, P. S., Singh, R., Polifroni, M., Scott, A., Elson, S. B., Mazzocco, P., & Rescober, P. (2007). People as intuitive prosecutors: The impact of social control goals on attributions of responsibility. Journal of Experimental Social Psychology, 43(2), 195–209. Thomson, A. L., & Siegel, J. T. (2017). Elevation: A review of scholarship on a moral and other-praising emotion. The Journal of Positive Psychology, 12(6), 628–638. Tompkins, S. S. (1963). Affect, imagery, consciousness: II. The Negative Affects. Springer. Tracy, J. L., & Robins, R. W. (2006). Appraisal antecedents of shame and guilt: Support for a theoretical model. Personality and Social Psychology Bulletin, 32(10), 1339–1351. Tracy, J. L., & Robins, R. W. (2007). The psychological structure of pride: A tale of two facets. Journal of Personality and Social Psychology, 92(3), 506–525. Tybur, J. M., Molho, C., Cakmak, B., Cruz, T. D., Singh, G. D., & Zwicker, M. (2020). Disgust, anger, and aggression: Further tests of the equivalence of moral emotions. Collabra: Psychology, 6(1), Article 34. Van de Vyver, J., & Abrams, D. (2015). Testing the prosocial effectiveness of the prototypical moral emotions: Elevation increases benevolent behaviors and outrage increases justice behaviors. Journal of Experimental Social Psychology, 58, 23–33. Västfjäll, D., Slovic, P., Mayorga, M., & Peters, E. (2014). Compassion fade: Affect and charity are greatest for a single child in need. PLoS ONE, 9, Article e100115. Weber, H. (2004). Explorations in the social construction of anger. Motivation and Emotion, 28, 197–219. Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10), 780–784. Wranik, T., & Scherer, K. R. (2010). Why do I get angry? A componential appraisal approach. In M. Potegal, G. Stemmler, & C. Spielberger (Eds.), International handbook of anger (pp. 243–266). Springer. Yang, Y., Hu, J., Jing, F., & Nguyen, B. (2018). From awe to ecological behavior: The mediating role of connectedness to nature. Sustainability, 10(7), Article 2477. Zaki, J., & Cikara, M. (2015). Addressing empathic failures. Current Directions in Psychological Science, 24(6), 471–476. Zickfeld, J. H., Schubert, T. W., Seibt, B., & Fiske, A. P. (2017). Empathic concern is part of a more general communal emotion. Frontiers in Psychology, 8, Article 723.

11 The Benefits and Costs of Empathy in Moral Decision Making

Jean Decety

There is general agreement that empathy is a central aspect of our humanity. Indeed, empathy plays a vital role in our interpersonal life, from bonding between parent and child, to enhancing affiliation among conspecifics, to understanding others' subjective psychological states. Empathy motivates various kinds of prosocial behaviors, such as comforting and helping. It can also, in certain contexts, inhibit interpersonal aggression. Empathy increases trust, rapport, and affinity. There is a functional relation between empathy and guilt. Empathy can promote collective action by enhancing other-regarding motives and reducing self-regarding concerns, thus fostering cohesiveness and cooperation within human societies. However, contrary to what is commonly assumed, empathy is not always a driver of moral behavior. Here, morality is viewed as a set of biological and cultural adaptations, including values, norms, and practices, that evolved to regulate selfishness and facilitate cooperation (Curry, 2016). The wealth of empirical findings from the behavioral and social sciences demonstrates a complex relationship between morality and empathy (Decety & Cowell, 2014). Indeed, at times, empathy can interfere with morality by introducing partiality toward an individual, countering the moral principle of justice for all. Empathy is less likely to be felt for groups than for identifiable victims (Västfjäll et al., 2014). It gives higher priority to friends than to strangers. Empathy is parochial, favoring in-group over out-group members (Bruneau et al., 2017). However, empathy can provide the emotional fire and the impetus to relieve a victim's suffering. It can counter rationalization and derogation (Decety & Cowell, 2015). All of these examples, whether they are drawn from laboratory experiments or from real-world situations, reveal a complex functional relationship between affect, cognition, empathy, and moral decision making. Empathy is costly, in that it draws upon attentional and emotional resources, but it is also beneficial in maintaining social relationships and serving the needs of others (DeSteno, 2015). The empathy that we experience reflects a balance of these costs and benefits and is not always under our control; unconscious mechanisms tune its responsiveness. While we may deliberately choose whether or not to feel empathy for a stranger, caring for our kin, close friends, and the people we associate with is unavoidable, almost like an impulse (Hodges & Klein, 2001). However, some have argued that being empathic can also result
from motivated choices to prioritize and balance competing goals within specific social contexts (Cameron, 2018). In this chapter, I propose that empathy is a dynamic interpersonal phenomenon that encompasses three interacting functional components: (1) emotional contagion (affect sharing or emotional empathy), which is a quasi-instantaneous way to acquire and share social information; such transmission of information between individuals is an adaptive evolutionary mechanism for individuals in danger; (2) empathic concern (sympathy or compassion), which piggybacks on the caring motivation, a specific biological adaptation that is both narrow in scope and yet highly flexible; and (3) perspective taking, the capacity to make inferences about and represent one's own and others' intentions, emotions, beliefs, and motives. I draw on evolutionary theory, psychology, neuroscience, and behavioral economics to demonstrate that emotional contagion is unconsciously socially modulated. Empathic concern, by contrast, is relatively selective with regard to the input to which it responds and particularly sensitive to stimuli that have been important in the evolutionary past. As a corollary, the degree to which we experience empathy is partly constrained by information-processing biases that channel certain kinds of environmental input selected by the ecological pressures tied to our evolutionary history. These limits express themselves in unconscious, rapid, almost automatic tendencies to care more for some people and less for others, or for one person and not for many. Understanding the ultimate causes and proximate mechanisms of empathy makes it possible to characterize the kinds of information that get prioritized as input and the behaviors they prompt as output. It also helps identify empathy's limits and the situational factors that exacerbate empathic failure, which is essential if we want to mitigate our cognitive biases. Together, this knowledge is useful at both a theoretical and a practical level: It provides information about how to reframe situations to activate alternative evolved systems in ways that promote normative moral conduct compatible with our current societal aspirations. As a first step, I describe the architecture of empathy and how it serves a motivational function to value others' welfare.

11.1 The Architecture of Empathy

The word "empathy" has been used as an umbrella term under which definitions vary enormously. This makes it difficult to determine which psychological function empathy relates to and what role it plays in morality (Batson, 2009). Differentiating these conceptualizations is therefore necessary because they reflect distinct psychological processes that vary widely in their phenomenology, functions, and evolved biological mechanisms. Moreover, inconsistent definitions of empathy have a negative impact on both research and practice,
especially in the domains of law, medicine, education, and decision making (Decety, 2020). Phenomenologically, the notion of empathy reflects an ability to perceive and be sensitive to the emotional states of others, often combined with a motivation to care about their well-being. This definition, although useful in interpersonal communication, remains vague in specifying the underlying psychological mechanisms and their biological instantiation. Progress over the past decades in social neuroscience has greatly contributed to clarifying the functions of empathy and their underlying component processes. This discipline is a resolutely interdisciplinary enterprise, drawing on evolutionary biology, behavioral ecology, neurobiology, psychology, anthropology, sociology, and behavioral economics, and on the vertical integration of multiple levels of analysis, from the molecular to the socio-cultural context (Cacioppo & Decety, 2011). Theoretical and empirical work from social neuroscience converges to characterize empathy as a multidimensional phenomenon reflecting a capacity to share, understand, and respond to others' emotions. Empathy comprises several evolved functional components that are emotional (sharing affect with another), cognitive (understanding the other's subjective state), and motivational (feeling concerned for another) (Decety & Jackson, 2004). These components flexibly interact with one another and operate by way of automatic (bottom-up) and controlled (top-down) processes. Yet they can be dissociated, as they rely on partially separable information-processing neural systems in the brain and underlie different psychological functions (Shdo et al., 2018). This model of empathy combines both representational aspects and processes involved in decision making.

11.2 The Adaptive Value of Empathy

To properly understand empathy and its contribution to moral decision making, we must obtain both ultimate and proximate explanations. Ultimate explanations are concerned with the fitness consequences of a trait or behavior – the why question. Proximate explanations address the way in which that functionality is achieved – the how question. Proximate causes are important, but they only tell part of the story. Ultimate explanations go below the surface, focusing on evolutionary functions. Ultimate and proximate explanations are not the opposite ends of a continuum, and we should not choose between them (Scott-Phillips et al., 2011); though distinct from one another, they are complementary. An important aspect to keep in mind is that adaptations must be understood in terms of survival and reproduction in the historical environments and ecological constraints in which they were selected. Many of our cognitive biases are heuristics – that is, simple, approximate, efficient rules or algorithms, learned or hard-coded by evolutionary processes. Our decision biases, errors, and
misjudgments are not necessarily flaws. Rather, they are design features with which natural selection has equipped Homo sapiens to make decisions in ways that consistently enhanced our hominid ancestors' inclusive fitness (Kenrick & Griskevicius, 2013). While these heuristics generally promote utility, they are fallible in predictable ways, and they can misfire in our contemporary socioecological context. The essence of empathy, and its primary form across many species, is the communication of an emotional state from one individual to another. Affective signaling and communication between conspecifics contribute to inclusive fitness by facilitating coordination and cohesion, increasing defense against predators, and bonding individuals to one another within a social group. It is a widespread phenomenon in a great many species (Mendl et al., 2010). Discriminating and communicating emotions to conspecifics (at least on the main dimensions of valence and intensity) facilitates and regulates social interactions. When emotions are transmitted from one individual to the next by vocal, facial, or chemical channels, information is transferred, coordination between group members is accelerated, and decision making is facilitated (Briefer, 2018). This spontaneous transfer of internal states is fundamental for survival and social group cohesion. However, affect sharing does not lead to one single kind of decision making when it comes to moral judgment or conduct (Loewenstein & Small, 2007). Moreover, our capacity to experience affect, which is important in guiding our judgments and decisions and in driving our behavior, is limited. Many situations do not induce much distress in the observer. Some failures to experience empathy for others in distress or in need could result from not cognitively representing their situation and suffering in a meaningful way (Slovic, 2007). The capacity for perspective taking, which may be unique to our species, can expand the scope of affect sharing. Importantly, attention seems to be a necessary requirement for empathic feelings. One study placed participants in a position of reacting empathically to children in need of help and manipulated whether they could visually attend to a single victim or were distracted by several others (Dickert & Slovic, 2009). Empathy responses were lower and reaction times were longer when the photo of a child was presented with distractor photos. When information about children is processed in a way that fosters vivid representations, affective reactions are stronger than when this information is processed in a detached, abstract, or intangible way. Behavioral economics studies have shown that people donate much more after reading the story of one victim than a story about many victims (Small et al., 2007). Identified, single victims arouse empathy and personal distress to a greater extent than statistical victims. This effect has been attributed to a failure to bring meaning to abstractly represented large numbers or statistical victims, as compared to identifiable victims, and it may explain why disasters that cost a large number of lives seem to evoke less of a helping response than disasters that befall an individual (Fetherstonhaugh et al., 1997; Västfjäll et al., 2014).
Affective information influences decision processes and subsequent costly behavioral responses. For example, Kogut and Ritov (2005) asked participants how much money they would give to help develop a drug that would save the life of one child or eight children. They found that participants were willing to donate the same amount. But, when the single child’s name, age, and picture were shown, the donations dramatically increased for the single child, and this effect was mediated by the participants’ reported empathy. Showing the photo of a young child has a great impact in evoking strong emotions. On September 2, 2015, the photo of a Syrian child lying face-down on a Turkish beach filled social media and the front pages of newspapers worldwide. This photo of a single child had more impact than statistical reports of hundreds of thousands of deaths of Syrians fleeing the civil war, including on donations to the Red Cross (Slovic et al., 2017). It is as if people who had been unmoved by the rising death toll in Syria suddenly appeared to care much more after seeing this photograph.

11.3 Proximate Mechanisms of Affect Sharing

Emotional contagion that leads to affect sharing is an important determinant of prosocial behavior. However, it can produce different decisions depending on intrapersonal and interpersonal factors and on social context. The resulting motivations can even lead to a failure to offer assistance. In highly arousing situations, people who are oversensitive may become upset and distressed. Emotional distress may result in withdrawing from the stressor, thereby decreasing prosocial behavior, or in helping the other merely to reduce one's own discomfort (Tice et al., 2001). The reduction of personal distress can be a form of emotional regulation, motivating actions that make oneself feel better. Thus, whether emotional contagion elicits an egoistic or an altruistic motivation remains an important question, and the two are difficult to distinguish. This is an ongoing debate in social psychology (see Cialdini et al., 1997 vs. Batson et al., 1981). The neurobiological mechanisms of emotional contagion in nonhuman animals, and in humans, are not entirely understood, with the exception of the contagion of stress and pain. The former results in the activation of the autonomic nervous system and the hypothalamic-pituitary-adrenocortical axis (Engert et al., 2019). In rodents, perceiving a conspecific in physical distress can facilitate social approach and helping behavior (Langford et al., 2010). Rats help cage mates escape from a transparent restrainer, and the helping rat engages in such prosocial behavior even if it does not gain any social reward from it (Bartal et al., 2011). Blocking emotional contagion with an anxiolytic agent in these rats inhibits their helping behavior, which demonstrates the importance of some level of vicarious distress in prompting the prosocial response (Ben-Ami Bartal et al., 2016). Another series of experiments involved a pool of water in which one rat was made to swim for its life, while another rat was in a cage adjacent to it (Sato et al., 2015). The results showed that rats quickly
learned to open the door to rescue their cage mates from the pool of water. Importantly, the rats did not open the door when the cage mate was not in distress. Overall, these results indicate that the decision to open the door to liberate the cage mate was elicited by processing distress cues. Prairie voles match their anxiety-related behavior and corticosterone response to those of a stressed cage mate (Burkett et al., 2016). In that study, exposure to a stressed familiar cage mate increased activity in neurons located in the anterior cingulate cortex (ACC) of the observer animal and led to grooming and licking behavior directed toward that conspecific. The ACC contains multisensory neurons that respond both when a rodent experiences pain and when it witnesses another conspecific experiencing pain. Deactivating this region with muscimol microinjections impairs the social transmission of distress and impedes prosocial approach behavior (Carrillo et al., 2019). Infusing an oxytocin receptor antagonist into this region also eliminates the partner-directed prosocial response (Burkett et al., 2016). Oxytocin is a neuropeptide mainly produced in the hypothalamus. It makes social information more salient by connecting brain areas involved in processing social information and helps link those areas to the reward system. Importantly, consoling behavior occurred only between voles that were familiar with each other, not between strangers. This suggests that the behavior is not simply a reaction to aversive cues but is modulated by social cues of familiarity. In humans too, and in accordance with evolutionary theory, perceived similarity or closeness between people increases the degree to which emotional contagion takes place and leads to prosociality. The perceived overlap between self and other is an important predictor of helping behavior and motivates empathic concern (Cialdini et al., 1997). People display higher levels of prosocial behavior toward others who are similar to them, who are members of their group, or who share their political attitudes, and they favor one individual in need rather than many; they do so because they experience higher levels of empathic concern under these conditions (Dovidio & Banfield, 2015). Numerous functional magnetic resonance imaging (fMRI) studies have demonstrated that empathy relies on overlapping processing of personal and vicarious experience, or shared neural representations (Decety & Sommerville, 2003; Lockwood, 2016). In particular, the perception and even imagination of another person suffering leads to an increase in neuro-hemodynamic activity in a restricted network of brain regions that are also involved in the first-hand experience of pain (Figure 11.1). These regions include the periaqueductal gray (PAG), insula, and ACC. This latter region contains multisensory neurons and belongs to the medial pain system that processes the affective aspects of nociceptive information (Lamm et al., 2011). It is important to note that there is no complete overlap between neural representations engaged in pain processing and those engaged in the vicarious experience of pain (Krishnan et al., 2016). The same is true for vicarious neural representations of pleasure and reward. A meta-analysis of functional neuroimaging studies of rewarding outcomes in social contexts found that both vicarious and personal rewards activate the
ventromedial prefrontal cortex (vmPFC) and amygdala, and the latter also engage the nucleus accumbens and regions involved in theory of mind (Morelli et al., 2015). The implication of both shared and nonshared neural representations in vicarious experience is not surprising, given the different sensory inputs during personal and vicarious experience (Lockwood, 2016).

Figure 11.1 Brain circuits associated with different functional components of empathy

11.4 Affect Sharing Is Socially Modulated

The vicarious experience of and neural response to others' joys and sorrows are not automatic. Rather, they are modulated by beliefs, attitudes, prejudices, and group coalitions. Imagining a loved one in physical pain is associated with greater signal increase in the insula and ACC than imagining a stranger in pain (Cheng et al., 2010). Witnessing a rival's failure triggers a subjective feeling of pleasure parametrically reflected by neural activity in the reward system (Cikara & Fiske, 2013). Stronger emotional reactions and associated neural responses are elicited when witnessing the pain of someone from one's own ethnic group than when observing pain from an out-group member (Contreras-Huerta et al., 2013; Xu et al., 2009). This bias in response to pain expressed by other-race individuals changes over time and is mitigated by familiarity and contact with people of the out-group. One such study recruited Chinese students who had arrived in Australia between six months and five years earlier and assessed their level of contact with other ethnic groups across various contexts (Cao et al., 2015). During fMRI scanning, participants were shown videos of own-race/other-race individuals, as well as own-group/other-group individuals, expressing pain. The typical group bias in neural responses to observed pain was evident, whereby neural activation was greater for pain in own-race compared to other-race people. Critically, the response increased significantly with the level of contact participants reported with people of the other ethnic groups. The perception of another person in distress or pain is modulated by competitive social contexts. For instance, in a competitive interaction, a
competitor's pain leads to positive emotions in oneself, whereas perceiving the competitor's joy results in distress (Lanzetta & Englis, 1989). This effect occurs very early during the perception of emotional expression, as demonstrated by a study using event-related potentials (ERPs) and a card game (Yamada et al., 2011). In that experiment, participants played a card game under the belief that they were doing so jointly with another player who sat in an adjoining room and whose smiles and frowns in response to winning or losing in the game could be observed on a computer screen. Depending upon the experimental condition, the other player's facial expressions conveyed one of two opposing values to the participant. In the empathic condition, her emotional expressions were congruent with the participant's outcome (win or loss), whereas in the counter-empathic one, they signaled incongruent outcomes. Results revealed that counter-empathic responses are associated with modulation of early sensory processing (~170 ms after stimulus onset) of emotional cues. In a neuroeconomics study, participants were engaged in a sequential prisoner's dilemma game with confederate individuals who were playing the game either fairly or unfairly (Singer et al., 2006). Following this behavioral manipulation, participants were scanned while watching fair and unfair players in pain. Compared to the observation of fair players, participants' observation of unfair players in pain led to significantly reduced activation in brain areas coding the affective components of pain. Another study showed that the failures of an in-group member, like a fellow Red Sox fan, are experienced as painful and are associated with increased neural response in the ACC and insula, whereas failures of a rival out-group member, like a Yankees fan, give a sense of pleasure, which is associated with reward-related signal augmentation in the striatum (Cikara et al., 2011). This absence of vicarious experience for rivals' pain should not be understood as an empathic failure. Rather, it reflects an adaptive response in competitive situations and social coalitions. Humans are spontaneously tribal. The tendency to favor the in-group over the out-group, especially when resources are scarce, has been observed in children before their second birthday (Jin & Baillargeon, 2017). Mathematical modeling of social evolution as well as anthropological observations indicate that the intragroup motivation to invest in fellow group members' welfare coevolved with intergroup competition over valuable resources. An optimal condition under which genetically encoded hyperprosociality can propagate is, paradoxically, when groups are in conflict. In line with cultural group selection theory (Richerson et al., 2010), it has been proposed that, during the late Pleistocene, groups with higher numbers of prosocial individuals cooperated more effectively and thus outcompeted others (Marean, 2015). This synergy between cooperation and competition, which shapes our prosocial preferences, can be observed both in laboratory experiments and in the workplace (Francois et al., 2018). Affect sharing is moderated by attitudes and prejudices toward people. For instance, one fMRI study demonstrated that the vicarious response is intensified or reduced by a priori attitudes toward the individuals shown in video clips expressing the same pain intensity (Decety et al., 2010). Study participants were more sensitive
to the facial expressions of pain of individuals who were described as infected with the acquired immunodeficiency syndrome (AIDS) as the result of a blood transfusion (thus clearly victims of a lack of medical foresight) than to the pain of individuals who were described as having contracted AIDS as the result of their illicit drug addiction and the sharing of needles (people often seen as responsible for their behavior). Moreover, controlling for both explicit and implicit AIDS biases, the more participants blamed these individuals, the less subjective pain they attributed to them as compared with healthy controls. People easily distinguish between in-group members and outsiders. Social identity formation drives people to adopt arbitrary markers to signal their group membership. It can thus be expected that knowing the religious affiliation of someone suffering affects the vicarious response in the observer. One study recruited Christian and atheist participants who were all Han Chinese in Beijing and thus were highly similar in terms of facial features (Huang & Han, 2014). Event-related potentials, small voltages generated in neurons in response to specific events or stimuli, were recorded while participants viewed pain and neutral expressions of Chinese faces that were marked (with a symbol on a necklace) as Christian or atheist. The religious/irreligious identifications significantly modulated the ERP amplitudes (200 ms after stimulus onset) to pain expressions, with larger amplitudes when an observer and a target shared religious (or irreligious) beliefs. Similarly, a simple difference in a single-word text label on a hand in pain, indicating the person's religious affiliation (Hindu, Christian, Jewish, Muslim, Scientologist, or atheist), seems sufficient to modulate neural activity in the observer, and this modulation can be predicted by the observer's own religion (Vaughn et al., 2018). In that study, the brain response was larger when participants viewed a painful event occurring to a hand labeled with their own religion (in-group) than to a hand labeled with a different religion (out-group). Importantly, the size of this bias correlated positively with the magnitude of participants' dispositional empathy. Group biases have evolved for their adaptive functional roles. They encourage us to be kind to in-group members, who are likely to reciprocate, and at times to be hostile to out-group members, especially when resources are scarce. However, such biases are also a source of prejudices that conflict with our current social and political environment, and particularly with the principle of justice for all. Vicarious neural responses to others' suffering are thus highly flexible and are dependent on sociomoral values (shared beliefs). Moral values exert a powerful motivational force that varies both in direction and intensity, guide the differentiation of just from unjust courses of action, and direct behavior toward desirable outcomes (Higgins, 2015). For instance, people who are sensitive to animal suffering and become vegetarian for ethical reasons show a greater neural response when exposed to photos depicting animals suffering compared to omnivore participants (Filippi et al., 2010). Notably, vegan and vegetarian participants have greater neural activation while looking at photos of animals suffering than at photos of humans suffering.
While affect sharing or emotional empathy is often portrayed as facilitating prosociality, affiliation, rapport, and liking, this does not necessarily mean that it promotes morality. As discussed earlier in this chapter, it is unconsciously and rapidly modulated by various social factors that are evolutionarily advantageous, such as similarity of many kinds, including kinship, group memberships, and shared political attitudes or religious beliefs. The social context, the nature of the situation, and the characteristics of the person in need not only affect assessments of costs and rewards and the decisions about whether to engage in prosocial behavior but also shape empathic experiences (Dovidio & Banfield, 2015).

11.5 Empathic Concern

Empathic concern, also known as sympathy or compassion, is interwoven with yet distinct from affect sharing, although the latter can elicit the former. Generalized parental nurturance seems the most likely evolutionary basis of empathic concern. In humans, the motivation for parental care is far more flexible and future-oriented than in any other mammalian species (Batson, 2014; Zahn-Waxler et al., 2018). At the ultimate level, caring for offspring is a biological necessity. Our survival as a species would be strongly compromised without it. Kin selection is the main force driving the evolution of parental care (Hamilton, 1964). Both natural and sexual selection have led to the emergence of a motivational state that leads individuals to care for and promote the welfare of offspring. Without sufficiently close genetic relatedness and an appropriate ratio of benefits to costs, caretaking and other cooperative propensities that do not directly increase the helper's own reproductive success would not have evolved (this condition is stated formally at the end of this section). Compared to other primates, human offspring are born more prematurely and more dependent, requiring exceptional care. This has been possible because the ancestors of Homo sapiens were cooperative breeders, a practice also known as alloparenting. Caring for individuals other than one's biological offspring seems to be a universal behavior among humans (Kenkel et al., 2017). In other apes, once youngsters are weaned, they are basically nutritionally independent. But in the case of early hominids, alloparental care and provisioning set the stage for infants to develop in new ways. Alloparental assistance allows mothers to conserve energetic resources, remain safer from predators, and live longer (Hrdy, 2014). This pressure to care for vulnerable offspring gave rise to several adaptations such as powerful responses to distress vocalizations, neotenous traits, and classes of attachment-related behaviors between caregiver and offspring, including empathic concern (Goetz et al., 2010). Empathic concern has emerged as the affective component of a caregiving system, selected to raise vulnerable offspring to the age of viability, thus ensuring that genes are more likely to be passed on (Goetz et al., 2010). This motivational component of empathy relies on subcortical circuits that originally evolved to support
parental caregiving and can be engaged for vulnerable and distressed others more generally (Vekaria et al., 2020). At the proximate level, the caring motivation arises from a set of biological mechanisms located in the brainstem, hypothalamus, ventral pallidum, dorsal raphe nucleus, vmPFC, and the bed nucleus of the stria terminalis (Kenkel et al., 2017). The caring motivation triggers the release of oxytocin, which counteracts the effects of stress and encourages us to approach others and tend to their needs, and it engages the dopaminergic reward system, which mediates feelings of subjective pleasure when nurturing and helping. That is why it feels good to help and care. Neural activity in the mesolimbic reward circuit predicts donations to orphans depicted in photographs (Genevsky et al., 2013). In one fMRI study, even when subjects were forced to pay a tax to a local food bank, these reward pathways were activated – albeit not as much as when subjects chose to voluntarily donate some of their cash to the food bank (Harbaugh et al., 2007). Valuing offspring is a highly positive experience in nonhuman animals (Ferris, 2014). In humans too, infant cues such as smiling or crying expressions are powerful motivators of parental behavior, activating dopamine-associated brain reward circuits. Increased activation of the mesolimbic reward pathway, including the nucleus accumbens (Strathearn et al., 2009), and higher levels of oxytocin (Gordon et al., 2010) are found in mothers and fathers in response to their infants' cues. It has long been known, since Konrad Lorenz's notion of "Kindchenschema," that neotenous characteristics, such as babyish faces, a big head, small nose, and big eyes, elicit social approach and caretaking behavior. These infantile physical characteristics, also known as neotenous cues, signal vulnerability and were favored by natural selection to facilitate the provision of care. Adults with baby faces are perceived to have childlike traits – to be naïve, weak, warm, and honest. These neotenous cues inspire caretaking, protection, and compassion. These characteristics can also sway criminal sentencing and imprisonment decisions. Johnson and King (2017) conducted an analysis of a random sample of 1,200 men who had been convicted of felony crimes in the Minneapolis-St. Paul metropolitan area in 2009, including their booking photos. The results showed that baby-faced individuals were significantly less likely to be incarcerated, even after controlling for other relevant case characteristics. It is thus not all that surprising that the convicted terrorist Dzhokhar Tsarnaev, whose actions killed 3 people and injured 260 during the Boston Marathon in 2013, has received a striking amount of sympathy (Rosin, 2013). Thus, caution is in order regarding the role of empathy in criminal justice. The proximate neural mechanisms of empathic concern are partially distinct from those of affect sharing. In one fMRI study, participants listened to true biographies describing a range of human suffering, such as children born with congenital disease, adults struggling with cancer, and experiences of homelessness and other hardships (Ashar et al., 2017). Participants were asked to provide moment-by-moment ratings of empathic concern and emotional distress while
listening to these biographies. Empathic concern was associated with neural response in the striatum and vmPFC, whereas emotional distress was related with neural response in the insula and the somatosensory cortex. Another neuroimaging study reported that individuals with high dispositional empathic concern are more likely to engage in altruistic behavior, and this relationship was mediated by neural activity in the vmPFC and ventral striatum, regions involved in the reward anticipation circuit and the subjective valuation process (FeldmanHall et al., 2015). The neurophysiological circuits for caring first evolved in the context of mother–infant relationships and subsequently became extended to others in groups of closely related individuals. A variety of kin-recognition mechanisms or heuristics have evolved to facilitate behavioral tendency to care and help (Neyer & Lang, 2003). Kin recognition is characterized by highly automatic, heuristic cue-based processes, such as familiarity or proximity, that are sometimes fallible. The fact that humans possess additional, more cognitive means of assessing kinship does not rule out the role of these earliest adaptations. The evolution of increasingly complex psychological mechanisms occurs by adding to, rather than replacing, previous mechanisms and this without any guarantee of optimality (Jacob, 1977). Behavioral genetics studies demonstrate that highly related people are more similar to each other on a variety of attitudes, values, and personality characteristics, and such similarities are used as kinship cues (Park et al., 2008). Thus, one can expect that empathic concern is more readily triggered when cues of similarity between self and other are salient. These cues are not limited to physical appearance and familiarity such as ethnicity, language, and accent; they include many dimensions of human social categorization and social identity, such as values, opinions, attitudes, and personality traits. Of course, this does not mean that empathic concern is solely a product of perceived similarity of the other to the self. Humans can feel empathic concern for a wide range of others in need, even dissimilar others, as long as they value their welfare (Batson et al., 2005). Furthermore, the neotenous characteristics that elicit attention, social approach, and caregiving do so regardless of kinship. Empathic concern is a powerful motivator of costly prosocial behaviors (Batson, 2009), especially for members of one’s own social group. People tend to display more empathic concern toward in-group members and are more sensitive to perceived harmful behaviors committed by out-group members. Across cultural contexts (e.g., Americans vs. Arabs), research indicates that parochial empathy is a strong predictor of altruism and passive harm toward out-groups (Bruneau et al., 2017). For example, individuals respond with more empathic concern when they perceive interpersonal harm perpetrated by someone from their own university as compared with when the perpetrator is from a different university, within the same country (Australia), and this reaction was associated with a neural response in the vmPFC (Molenberghs et al., 2016). A recent study using a large national sample documented that high levels of dispositional empathic concern were predictive of social polarization (Simas et al., 2020). The authors also showed that individuals high in empathic concern
disposition expressed greater partisan bias in evaluating contentious political events. Taken together, these findings show that empathic concern is a positive emotional state associated with a motivation to care for the welfare of others. However, empathic concern is unconsciously influenced by various signals such as neotenous cues, interpersonal factors, and intergroup contexts, and may in certain situations motivate out-group hostility.
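
The kin-selection condition invoked at the start of this section is not stated formally in the chapter, but it is standardly summarized by Hamilton's rule. The following brief sketch uses the conventional symbols (r, b, c), which are not the author's notation:

\[
r\,b > c
\]

where \(r\) is the coefficient of genetic relatedness between the helper and the recipient, \(b\) is the fitness benefit to the recipient, and \(c\) is the fitness cost to the helper. Costly caretaking directed at close kin (high \(r\)) can therefore be favored by selection even when it yields no direct reproductive benefit to the helper, consistent with the point above that cooperative propensities require both sufficient relatedness and a favorable benefit-to-cost ratio.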

11.6 Perspective Taking

Perspective taking is the ability to put oneself in the place of someone else while recognizing their point of view, experiences, and beliefs. It is often invoked as a remedy for some of the empathy biases that, as I have discussed, influence moral decision making. In general, perspective taking refers to understanding that another person has a mental state different from the observer's, a construct that largely overlaps with theory of mind. Being exposed to narrative fiction spontaneously triggers perspective taking. Several studies with children and adults have demonstrated that reading stories fosters an understanding of other people, using implicit perspective taking, and correlates with better empathy and theory of mind (Mar, 2018; Mumper & Gerrig, 2017). People understand another's subjective perspective in two ways: (1) using situational and dispositional factors to model the other's perspective and (2) projecting themselves into the other (Ames, 2004). Thus, perspective taking as a mental simulation requires executive functions, including attention, working memory, and inhibitory control. The related projection-and-correction account of simulation (Gordon, 2021) is comparable to the anchoring and adjustment heuristic proposed by Epley et al. (2004). These authors proposed that "individuals adopt others' perspectives by initially anchoring on their own perspective, and then subsequently, and effortfully accounting for differences between themselves and others until a plausible estimate is reached" (Epley et al., 2004, p. 328). There is evidence from cognitive neuroscience in support of the simulation theory, in that understanding what others experience partly relies on our own projections of what we would think and feel in similar situations (Steinbeis, 2016). While this process relies on shared neural representations between self and other, the perceiver must also maintain a self–other distinction (Decety & Sommerville, 2003). Results from brain imaging and from lesion studies in neurological patients converge on a number of regions and circuits implicated in perspective taking. For instance, Ruby and Decety (2004) presented participants with short sentences depicting real-life situations that induce social emotions such as guilt, envy, pride, or embarrassment (e.g., someone opens the bathroom door that you have forgotten to lock), as well as emotionally neutral situations. They asked participants to imagine how they would feel in those situations and how their mother would feel in those situations. Regions
involved in emotional processing, including the amygdala and the temporal poles, were similarly activated in the conditions that included emotionally laden situations for both self and other perspectives. Importantly, adopting the other's perspective led to a specific neural response in the temporoparietal junction (TPJ) as well as the vmPFC. The TPJ plays a key role in the sense of agency (Ruby & Decety, 2001) and in computations in the social domain that require self–other distinction. The right TPJ is activated when participants mentally simulate actions from someone else's perspective (Ruby & Decety, 2001) or imagine painful experiences from that perspective (Jackson et al., 2006; Lamm et al., 2007), but not when they imagine these situations for themselves. The TPJ, because of its anatomical characteristics and connectivity, plays a pivotal role in self–other processing. Evidence from functional neuroimaging studies indicates that the TPJ is systematically associated with perspective-taking tasks, theory of mind, and detection of intentional agents in the environment (Carter & Huettel, 2013; Decety & Lamm, 2007). More recent work, using repetitive transcranial magnetic stimulation, demonstrates that the TPJ is causally involved in the spontaneous attribution of mental states (Bardi et al., 2017). Its temporary inhibition disrupts the updating of internally (self) and externally (other) generated representations. There are two distinct ways in which people can take the perspective of suffering others. One form is thinking about how a suffering other feels, or "imagine-other" perspective taking; the other form is imagining oneself in the suffering other's shoes, or "imagine-self" perspective taking (Buffone et al., 2017). Research in social psychology (e.g., Batson et al., 2003) has documented this distinction by showing that the imagine-other perspective evokes empathic concern or compassion, whereas imagine-self perspective taking induces both empathic concern and personal distress (i.e., a self-oriented aversive emotional response). In participants asked to adopt either an imagine-self or an imagine-other perspective while watching people experiencing somatic pain, neural responses were detected in circuits involved in the first-hand experience of pain (Jackson et al., 2006; Lamm et al., 2007), except in individuals with psychopathy, who have a profound lack of empathy (Decety et al., 2013). However, the imagine-self perspective led to higher activity in brain areas involved in the affective response to threat and pain, including the amygdala and ACC. Consistently, the imagine-self perspective led to a potentially debilitating physiological state of threat, compared to an imagine-other perspective, during active pursuit of a helping goal (Buffone et al., 2017). In addition, this effect was mediated by perceiving the helping task as more demanding, suggesting that imagining oneself in the other's place may increase the perceived difficulty of providing help. Though it may be mentally taxing and energetically costly, perspective taking has several positive consequences for downstream intergroup relations. For instance, adopting the perspective of an out-group member leads to a decrease in the use of explicit and implicit stereotypes for that individual and to more positive evaluations of that group as a whole (Galinsky & Moskowitz, 2000). Feelings of empathic concern induced by perspective taking can lead to valuing
Feelings of empathic concern induced by perspective taking can lead to valuing the welfare of an out-group target. This is what Oliner and Oliner (1988) found from interviewing 436 individuals who were involved in rescuing Jews in Nazi Europe, at great risk to themselves. Rescue efforts frequently began with concern for a specific individual for whom compassion was felt – often an individual known previously. Importantly, while 37 percent of rescuers were characteristically empathetic (centered on the needs of others, with emotions of compassion and sympathy), 52 percent were primarily normocentric (having strong feelings of obligation to a social reference group that imposed normative standards and social values on their behavior). Another 11 percent acted largely from autonomously derived moral principles (Allison, 1990).

Perspective taking can boost empathic concern and influence how we value the welfare of a person. Thus, one can use empathic concern to increase valuing another and elicit prosocial behavior. While this can be a very good thing, it can also create problems for the moral principle of justice. For example, in one experiment, college students were told about a 10-year-old girl named Sheri Summers who had a fatal disease and was waiting in line for treatment that would relieve her pain (Batson et al., 1995). Participants learned that they could move her to the front of the waiting list. When simply asked what to do, most participants acknowledged that she had to wait because other more needy children were ahead of her. But if the participants were first asked to imagine what Sheri felt, they tended to choose to move her up, putting her ahead of children who were presumably more deserving. Here, empathy was more powerful than fairness, leading to a decision that most of us would see as unfair. Empathy triggered by perspective taking can produce myopia in the same way as egoistic self-interest.

The idea that perspective taking boosts empathic concern has recently been challenged by a meta-analysis examining whether individuals instructed to imagine the feelings of a distressed person experience more empathic concern than individuals who receive no instructions or who are instructed to remain objective (McAuliffe et al., 2020). The authors found that empathy was greater when people were told to imagine the feelings of the needy person than when they were told to remain objective and detached. More surprisingly, the study also found that individuals who were deliberately instructed to imagine how a suffering individual is feeling did not experience more empathic concern than participants who received no instructions at all. Overall, this meta-analysis does not support the view that one can increase empathic concern by imagining what the other person is experiencing. Yet the fact that people seem better at suppressing their empathy than they are at amplifying it (Zaki, 2014) suggests that we are walking around with naturally high amounts of empathy already.
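
As a toy illustration of the anchoring-and-adjustment account introduced earlier in this section (Epley et al., 2004), the following Python sketch simulates a perceiver who starts from their own emotional state (the anchor) and makes effortful, step-limited corrections toward the target's actual state. The anchor value, step size, and number of adjustment steps are assumptions chosen for illustration only, not a model taken from the cited work.

# Toy illustration of egocentric anchoring and adjustment (assumed parameters).
def adjusted_estimate(own_state, target_state, step=0.25, max_steps=4):
    """Start at one's own state (the anchor) and move toward the target's
    actual state in effortful steps; limited effort can leave a residual
    egocentric bias."""
    estimate = own_state
    for _ in range(max_steps):
        if abs(target_state - estimate) < step:
            return target_state  # a plausible estimate has been reached
        estimate += step if target_state > estimate else -step
    return estimate  # adjustment stopped before reaching the target

# Example: the perceiver feels mildly amused (0.3) while the target is
# acutely embarrassed (-0.8).
print(adjusted_estimate(own_state=0.3, target_state=-0.8))
# prints roughly -0.7: adjustment stops short of the target's actual state (-0.8)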

11.7 Empathy Cannot Replace Reasoning in Moral Judgment

Empathy is a complex, multifaceted construct that encompasses affect sharing, perspective taking, and a motivated concern for others' well-being.
These functional components often work in concert, yet each is implemented in specific brain circuits. This has important implications for moral reasoning and decision making. At the most basic level, emotions are attention-getting and supplement the information provided by rational belief and inference. Perspective taking can be used to adopt the subjective viewpoint of others, and this can facilitate the extent to which an observer understands that a victim experiences harm or distress. In turn, affect sharing in reaction to the plight of another may be foundational for motivating prosocial behaviors and moral condemnation (Patil et al., 2017). Yet affect sharing, elicited by emotional contagion or by perspective taking, may also lead to personal distress, the aversive affect arising in response to others' suffering, which does not necessarily lead to prosocial behavior and may even cloud our moral judgment.

The aversion to harming others is an integral part of the moral sense, underlying deeply rooted moral intuitions across societies (Haidt & Joseph, 2004). Asking individuals to simulate harmful actions such as discharging a gun into someone else is sufficient to generate an aversive response accompanied by autonomic nervous system changes (Cushman et al., 2012). Such a reaction emerges very early in development and is considered a necessary foundation of morality (Decety & Cowell, 2018). Experiencing an aversive emotional reaction to the anticipation of harming someone plays a critical role in moral judgment. This aversion can stem partly from the bad outcome itself, via empathic concern for the victim's suffering, which causes personal distress in the observer or elicits feelings of guilt. Some studies have documented that low dispositional levels of empathic concern reduce harm aversion, which leads to an increased propensity to endorse utilitarian moral judgments in sacrificial-harm dilemmas (Gleichgerrcht & Young, 2013).

It is also reasoning that guides moral progress, through abstract principles such as the idea that all humans are worthy of dignity and respect. The My Lai massacre in March 1968 provides a pertinent illustration of the powerful impact of moral principles. It was one of the most horrific incidents of violence committed against unarmed civilians during the Vietnam War. A company of American soldiers brutally killed 500 women, children, and old men in the village of My Lai. US Army officers covered up the carnage for a year before it was reported in the American press, thanks to helicopter pilot Hugh Thompson, sparking a firestorm of international outrage. In this incident, according to Blader and Tyler (2002, p. 242), "most soldiers involved did not feel an emotional connection to the civilians, whom they regarded as being, or at least aiding, the enemy." But not all soldiers participated. What, therefore, stopped some soldiers from killing civilians? One important factor was the soldiers' view that killing civilians was a morally inappropriate behavior in which they should not engage (Blader & Tyler, 2002). Those soldiers who held these abstract moral values about what is just were less likely to engage in killing civilians, irrespective of whether they knew, liked, or empathized with the particular civilians they encountered.

11.8 What We Have Learned

Understanding the ultimate and proximate mechanisms of empathy elucidates the information that is prioritized as input and the behaviors prompted as output. Knowing our cognitive biases and their evolutionary origins is critical if we want to make better moral decisions. Explaining human behavior does not equate to justifying it or defending it. But if we want to improve our society, we need an accurate understanding of human nature rather than a denial of it. Moral decision making guided by empathy alone is not optimal, especially when dealing with large groups or when individuals are engaged in competition. However, empathy can create a strong motivation to act. Empathy and morality are neither systematically opposed to one another, nor inevitably complementary. Empathy alone is powerless in the face of rationalization and denial. Our saving grace is our ability to generalize and to direct our empathy through the use of reason and deliberation, as well as our capacity to cooperate with other people, create coalitions, and organize ourselves around any reliable sign, value, or idea.

Acknowledgments

I am grateful to Paul Slovic for his feedback on an earlier version of this chapter. Two anonymous reviewers and Bertram Malle provided numerous valuable comments and suggestions.

References Allison, P. (1990). The altruistic personality: Rescuers of Jews in Nazi Europe. Public Opinion Quarterly, 54(3), 442–444. Ames, D. R. (2004). Inside the mind reader’s tool kit: Projection and stereotyping in mental state inference. Journal of Personality and Social Psychology, 87(3), 340–353. Ashar, Y. K., Andrews-Hanna, J. R., Dimidjian, S., & Wager, T. D. (2017). Empathic care and distress: Predictive brain markers and dissociable brain systems. Neuron, 94(6), 1263–1273. Bardi, L., Six, P., & Brass, M. (2017). Repetitive TMS of the temporo-parietal junction disrupts participant’s expectations in a spontaneous theory of mind task. Social Cognitive and Affective Neuroscience, 12(11), 1775–1782. Bartal, I. B. A., Decety, J., & Mason, P. (2011). Empathy and pro-social behavior in rats. Science, 334(6061), 1427–1430. Batson, C. D. (2009). These things called empathy: Eight related but distinct phenomena. In J. Decety & W. Ickes (Eds.), The social neuroscience of empathy (pp. 3–15). MIT Press. Batson, C. D. (2014). The empathy-altruism hypothesis: Issues and implications. In J. Decety (Ed.), Empathy: From bench to bedside (pp. 41–54). MIT Press.
Batson, C. D., Duncan, B. D., Ackerman, P., Buckley, T., & Birch, K. (1981). Is empathic emotion a source of altruistic motivation? Journal of Personality and Social Psychology, 40(2), 290–302. Batson, C. D., Klein, T. R., Highberger, L., & Shaw, L. L. (1995). Immorality from empathy-induced altruism: When compassion and justice conflict. Journal of Personality and Social Psychology, 68(6), 1042–1054. Batson, C. D., Lishner, D. A., Carpenter, A., Dulin, L., Harjusola-Webb, S., Stocks, E. L., Gale, S., Hassan, O., & Sampat, B. (2003). “. . . As you would have them do unto you”: Does imagining yourself in the other’s place stimulate moral action? Personality and Social Psychology Bulletin, 29(9), 1190–1201. Batson, C. D., Lishner, D. A., Cook, J., & Sawyer, S. (2005). Similarity and nurturance: Two possible sources of empathy for strangers. Basic and Applied Social Psychology, 27, 15–25. Ben-Ami Bartal, I., Shan, H., Molasky, N. M., Murray, T. M., Williams, J. Z., Decety, J., & Mason, P. (2016). Anxiolytic treatment impairs helping behavior in rats. Frontiers in Psychology, 7, Article 850. Blader, S. L., & Tyler, T. R. (2002). Justice and empathy: What motivates people to help others? In M. Ross & D. T. Miller (Eds.), The justice motive in everyday life (pp. 226–250). Cambridge University Press. Briefer, E. F. (2018). Vocal contagion of emotions in non-human animals. Proceedings of the Royal Society B: Biological Sciences, 285(1873), Article 20172783. Bruneau, E. G., Cikara, M., & Saxe, R. (2017). Parochial empathy predicts reduced altruism and the endorsement of passive harm. Social Psychological and Personality Science, 8(8), 934–942. Buffone, A. E., Poulin, M., DeLury, S., Ministero, L., Morrisson, C., & Scalco, M. (2017). Don’t walk in her shoes! Different forms of perspective taking affect stress physiology. Journal of Experimental Social Psychology, 72, 161–168. Burkett, J. P., Andari, E., Johnson, Z. V., Curry, D. C., de Waal, F. B., & Young, L. J. (2016). Oxytocin-dependent consolation behavior in rodents. Science, 351(6271), 375–378. Cacioppo, J. T., & Decety, J. (2011). Social neuroscience: Challenges and opportunities in the study of complex behavior. Annals of the New York Academy of Sciences, 1224(1), 162–173. Cameron, C. D. (2018). Motivating empathy: Three methodological recommendations for mapping empathy. Social and Personality Psychology Compass, 12(11), Article e12418. Cao, Y., Contreras-Huerta, L. S., McFadyen, J., & Cunnington, R. (2015). Racial bias in neural response to others’ pain is reduced with other-race contact. Cortex, 70, 68–78. Carrillo, M., Han, Y., Migliorati, F., Liu, M., Gazzola, V., & Keysers, C. (2019). Emotional mirror neurons in the rat’s anterior cingulate cortex. Current Biology, 29(8), 1301–1312. Carter, R. M., & Huettel, S. A. (2013). A nexus model of the temporal–parietal junction. Trends in Cognitive Sciences, 17(7), 328–336. Cheng, Y., Chen, C., Lin, C. P., Chou, K. H., & Decety, J. (2010). Love hurts: An fMRI study. Neuroimage, 51(2), 923–929. Cialdini, R. B., Brown, S. L., Lewis, B. P., Luce, C., & Neuberg, S. L. (1997). Reinterpreting the empathy altruism relationship: When one into one equals oneness. Journal of Personality and Social Psychology, 73, 481–494.
Cikara, M., Botvinick, M. M., & Fiske, S. T. (2011). Us versus them: Social identity shapes responses to intergroup competition and harm. Psychological Science, 22, 306–313. Cikara, M., & Fiske, S. T. (2013). Their pain, our pleasure: Stereotype content and schadenfreude. Annals of the New York Academy of Sciences, 1299(1), 52–59. Contreras-Huerta, L. S., Baker, K. S., Reynolds, K. J., Batalha, L., & Cunnington, R. (2013). Racial bias in neural empathic responses to pain. PLoS ONE, 8(12), Article e84001. Curry, O. S. (2016). Morality as cooperation: A problem-centred approach. In T. K. Shackelford & R. D. Hansen (Eds.), The evolution of morality (pp. 27–51). Springer. Cushman, F., Gray, K., Gaffey, A., & Mendes, W. B. (2012). Simulating murder: The aversion to harmful action. Emotion, 12(1), 2–7. Decety, J. (2020). Empathy in medicine: What it is, and how much we really need it. American Journal of Medicine, 133, 561–566. Decety, J., Chen, C., Harenski, C., & Kiehl, K. A. (2013). An fMRI study of affective perspective taking in individuals with psychopathy: Imagining another in pain does not evoke empathy. Frontiers in Human Neuroscience, 7, Article 489. Decety, J., & Cowell, J. M. (2014). The complex relation between morality and empathy. Trends in Cognitive Sciences, 18(7), 337–339. Decety, J., & Cowell, J. M. (2015). Empathy, justice, and moral behavior. American Journal of Bioethics – Neuroscience, 6(3), 3–14. Decety, J., & Cowell, J. M. (2018). Interpersonal harm aversion as a necessary foundation for morality: A developmental neuroscience perspective. Development and Psychopathology, 30(1), 153–164. Decety, J., Echols, S. C., & Correll, J. (2010). The blame game: The effect of responsibility and social stigma on empathy for pain. Journal of Cognitive Neuroscience, 22(5), 985–997. Decety, J., & Jackson, P. L. (2004). The functional architecture of human empathy. Behavioral and Cognitive Neuroscience Reviews, 3(2), 71–100. Decety, J., & Lamm, C. (2007). The role of the right temporoparietal junction in social interaction: How low-level computational processes contribute to meta-cognition. The Neuroscientist, 13(6), 580–593. Decety, J., & Sommerville, J. A. (2003). Shared representations between self and other: A social cognitive neuroscience view. Trends in Cognitive Sciences, 7(12), 527–533. DeSteno, D. (2015). Compassion and altruism: How our minds determine who is worthy of help. Current Opinion in Behavioral Sciences, 3, 80–83. Dickert, S., & Slovic, P. (2009). Attentional mechanisms in the generation of sympathy. Judgment and Decision Making, 4, 297–306. Dovidio, J. F., & Banfield, J. C. (2015). Prosocial behavior and empathy. In J. D. Wright (Ed.), International encyclopedia of the social and behavioral science, (2nd ed., Vol. 19, pp. 216–220). Elsevier. Engert, V., Linz, R., & Grant, J. A. (2019). Embodied stress: The physiological resonance of psychosocial stress. Psychoneuroendocrinology, 105, 138–146. Epley, N., Keysar, B., Van Boven, L., & Gilovich, T. (2004). Perspective taking as egocentric anchoring and adjustment. Journal of Personality and Social Psychology, 87(3), 327–339.
FeldmanHall, O., Dalgleish, T., Evans, D., & Mobbs, D. (2015). Empathic concern drives costly altruism. NeuroImage, 105, 347–356. Ferris, C. F. (2014). Using awake animal imaging to understand neural circuits of emotion: Studies ranging from maternal care to aggression. In J. Decety & Y. Christen (Eds.), New frontiers in social neuroscience (pp. 111–126). Springer. Fetherstonhaugh, D., Slovic, P., Johnson, S. M., & Friedrich, J. (1997). Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14, 283–300. Filippi, M., Riccitelli, G., Falini, A., Di Salle, F., Vuilleumier, P., Comi, G., & Rocca, M. A. (2010). The brain functional networks associated to human and animal suffering differ among omnivores, vegetarians and vegans. PLOS ONE, 5(5), Article e10847. Francois, P., Fujiwara, T., & Van Ypersele, T. (2018). The origins of human prosociality: Cultural group selection in the workplace and the laboratory. Science Advances, 4(9), Article e2201. Galinsky, A. D., & Moskowitz, G. B. (2000). Perspective-taking: Decreasing stereotype expression, stereotype accessibility, and in-group favoritism. Journal of Personality and Social Psychology, 78(4), 708–724. Genevsky, A., Västfjäll, D., Slovic, P., & Knutson, B. (2013). Neural underpinnings of the identifiable victim effect: Affect shifts preferences for giving. The Journal of Neuroscience, 33(43), 17188–17196. Gleichgerrcht, E., & Young, L. (2013). Low levels of empathic concern predict utilitarian moral judgment. PLoS ONE, 8(4), Article e60418. Goetz, J. L., Keltner, D., & Simon-Thomas, E. (2010). Compassion: An evolutionary analysis and empirical review. Psychological Bulletin, 136, 351–375. Gordon, I., Zagoory-Sharon, O., Leckman, J. F., & Feldman, R. (2010). Oxytocin and the development of parenting in humans. Biological Psychiatry, 68(4), 377–382. Gordon, R. M. (2021). Simulation, predictive coding and the shared world. In M. Gilead & K. Ochsner (Eds.), The neural basis of mentalizing (pp. 237–255). Springer Nature. Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133, 55–66. Hamilton, W. D. (1964). The genetical evolution of social behaviour. Journal of Theoretical Biology, 7(1), 17–52. Harbaugh, W. T., Mayr, U., & Burghart, D. R. (2007). Neural responses to taxation and voluntary giving reveal motives for charitable donations. Science, 316(5831), 1622–1625. Higgins, E. T. (2015). What is value? Where does it come from? A psychological perspective. In T. Brosch & D. Sander (Eds.), Handbook of value: Perspectives from economics, neuroscience, philosophy, psychology, and sociology (pp. 43–63). Oxford University Press. Hodges, S. D., & Klein, K. J. K. (2001). Regulating the costs of empathy: The price of being human. The Journal of Socio-Economics, 30(5), 437–452. Hrdy, S. B. (2014). Development + social selection in the emergence of “emotionally” modern humans. In J. Decety & Y. Christen (Eds.), New frontiers in social neuroscience (pp. 57–91). Springer. Huang, S., & Han, S. (2014). Shared beliefs enhance shared feelings: Religious/irreligious identifications modulate empathic neural responses. Social Neuroscience, 9, 639–649.
Jackson, P. L., Brunet, E., Meltzoff, A. N., & Decety, J. (2006). Empathy examined through the neural mechanisms involved in imagining how I feel versus how you feel pain. Neuropsychologia, 44(5), 752–761. Jacob, F. (1977). Evolution and tinkering. Science, 196(4295), 1161–1166. Jin, K. S., & Baillargeon, R. (2017). Infants possess an abstract expectation of ingroup support. Proceedings of the National Academy of Sciences, 114(31), 8199–8204. Johnson, B. D., & King, R. D. (2017). Facial profiling: Race, physical appearance, and punishment. Criminology, 55(3), 520–547. Kenkel, W. M., Perkeybile, A. M., & Carter, C. S. (2017). The neurobiological causes and effects of alloparenting. Developmental Neurobiology, 77(2), 214–232. Kenrick, D. T., & Griskevicius, V. (2013). The rational animal: How evolution made us smarter than we think. Basic Books. Kogut, T., & Ritov, I. (2005). The identified victim effect: An identified group, or just a single individual? Journal of Behavioral Decision Making, 18, 157–167. Krishnan, A., Woo, C. W., Chang, L. J., Ruzic, L., Gu, X., López-Solà, M., Jackson, P. L., Pujol, J., Fan, J., & Wager, T. D. (2016). Somatic and vicarious pain are represented by dissociable multivariate brain patterns. eLife, 5, Article e15166. Lamm, C., Batson, C. D., & Decety, J. (2007). The neural basis of human empathy: Effects of perspective-taking and cognitive appraisal. Journal of Cognitive Neuroscience, 19, 42–58. Lamm, C., Decety, J., & Singer, T. (2011). Meta-analytic evidence for common and distinct neural networks associated with directly experienced pain and empathy for pain. Neuroimage, 54(3), 2492–2502. Langford, D. J., Tuttle, A. H., Brown, K., Deschenes, S., Fischer, D. B., Mutso, A., Root, K. C., Sotocinal, S. G., Stern, M. A., Mogil, J. S., & Sternberg, W. F. (2010). Social approach to pain in laboratory mice. Social Neuroscience, 5(2), 163–170. Lanzetta, J. T., & Englis, B. G. (1989). Expectations of cooperation and competition and their effects on observers’ vicarious emotional responses. Journal of Personality and Social Psychology, 56, 543–554. Lockwood, P. L. (2016). The anatomy of empathy: Vicarious experience and disorders of social cognition. Behavioural Brain Research, 311, 255–266. Loewenstein, G., & Small, D. (2007). The scarecrow and the tin man: The vicissitudes of human sympathy and caring. Review of General Psychology, 11, 112–126. Mar, R. A. (2018). Stories and the promotion of social cognition. Current Directions in Psychological Science, 27(4), 257–262. Marean, C. W. (2015). The most invasive species of all. Scientific American, 313(2), 32–39. McAuliffe, W. H., Carter, E. C., Berhane, J., Snihur, A. C., & McCullough, M. E. (2020). Is empathy the default response to suffering? A meta-analytic evaluation of perspective taking’s effect on empathic concern. Personality and Social Psychology Review, 24(2), 141–162. Mendl, M., Burman, O. H., & Paul, E. S. (2010). An integrative and functional framework for the study of animal emotion and mood. Proceedings of the Royal Society B: Biological Sciences, 277(1696), 2895–2904. Molenberghs, P., Gapp, J., Wang, B., Louis, W. R., & Decety, J. (2016). Increased moral sensitivity for outgroup perpetrators harming ingroup members. Cerebral Cortex, 26(1), 225–233.
Morelli, S. A., Sacchet, M. D., & Zaki, J. (2015). Common and distinct neural correlates of personal and vicarious reward: A quantitative meta-analysis. NeuroImage, 112, 244–253. Mumper, M. L., & Gerrig, R. J. (2017). Leisure reading and social cognition: A metaanalysis. Psychology of Aesthetics, Creativity, and the Arts, 11(1), 109–120. Neyer, F. J., & Lang, F. R. (2003). Blood is thicker than water: Kinship orientation across adulthood. Journal of Personality and Social Psychology, 84(2), 310–321. Oliner, S., & Oliner, P. (1988). The altruistic personality. Free Press. Park, J. H., Schaller, M., & Van Vugt, M. (2008). Psychology of human kin recognition: Heuristic cues, erroneous inferences, and their implications. Review of General Psychology, 12(3), 215–235. Patil, I., Calò, M., Fornasier, F., Cushman, F., & Silani, G. (2017). The behavioral and neural basis of empathic blame. Scientific Reports, 7(1), 1–14. Richerson, P. J., Boyd, R., & Henrich, J. (2010). Gene-culture coevolution in the age of genomics. Proceedings of the National Academy of Sciences, 107, 8985–8992. Rosin, H. (2013, April 29). Why all this maternal sympathy for Dzhokhar? Slate. https:// slate.com/human-interest/2013/04/maternal-sympathy-for-dzhokhar-tsarnaevwhat-s-it-about.html Ruby, P., & Decety, J. (2001). Effect of subjective perspective taking during simulation of action: A PET investigation of agency. Nature Neuroscience, 4(5), 546–550. Ruby, P., & Decety, J. (2004). How would you feel versus how do you think she would feel? A neuroimaging study of perspective taking with social emotions. Journal of Cognitive Neuroscience, 16, 988–999. Sato, N., Tan, L., Tate, K., & Okada, M. (2015). Rats demonstrate helping behavior toward a soaked conspecific. Animal Cognition, 18(5), 1039–1047. Scott-Phillips, T. C., Dickins, T. E., & West, S. A. (2011). Evolutionary theory and the ultimate–proximate distinction in the human behavioral sciences. Perspectives on Psychological Science, 6(1), 38–47. Shdo, S. M., Ranasinghe, K. G., Gola, K. A., Mielke, C. J., Sukhanov, P. V., Miller, B. L., & Rankin, K. P. (2018). Deconstructing empathy: Neuroanatomical dissociations between affect sharing and prosocial motivation using a patient lesion model. Neuropsychologia, 116, 126–135. Simas, E. N., Clifford, S., & Kirkland, J. H. (2020). How empathic concern fuels political polarization. American Political Science Review, 114(1), 258–269. Singer, T., Seymour, B., O’Doherty, J. P., Stephan, K. E., Dolan, R. J., & Frith, C. D. (2006). Empathic neural responses are modulated by the perceived fairness of others. Nature, 439, 466–469. Slovic, P. (2007). If I look at the mass I will never act: Psychic numbing and genocide. Judgment and Decision Making, 2, 79–95. Slovic, P., Västfjäll, D., Erlandsson, A., & Gregory, R. (2017). Iconic photographs and the ebb and flow of empathic response to humanitarian disasters. Proceedings of the National Academy of Sciences, 114(4), 640–644. Small, D. A., Loewenstein, G., & Slovic, P. (2007). Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes, 102(2), 143–153. Steinbeis, N. (2016). The role of self–other distinction in understanding others’ mental and emotional states: Neurocognitive mechanisms in children and adults.
Philosophical Transactions of the Royal Society B: Biological Sciences, 371(1686), Article 20150074. Strathearn, L., Fonagy, P., Amico, J., & Montague, P. R. (2009). Adult attachment predicts maternal brain and oxytocin response to infant cues. Neuropsychopharmacology, 34(13), 2655–2666. Tice, D. M., Bratslavsky, E., & Baumeister, R. F. (2001). Emotional distress regulation takes precedence over impulse control: If you feel bad, do it. Journal of Personality and Social Psychology, 80(1), 53–67. Västfjäll, D., Slovic, P., Mayorga, M., & Peters, E. (2014). Compassion fade: Affect and charity are greatest for a single child in need. PLOS ONE, 9(6), Article e100115. Vaughn, D. A., Savjani, R. R., Cohen, M. S., & Eagleman, D. M. (2018). Empathic neural responses predict group allegiance. Frontiers in Human Neuroscience, 12, Article 302. Vekaria, K. M., O’Connell, K., Rhoads, S. A., Brethel-Haurwitz, K. M., Cardinale, E. M., Robertson, E. L., Walitt, B., VanMeter, J. W., & Marsh, A. A. (2020). Activation in bed nucleus of the stria terminalis (BNST) corresponds to everyday helping. Cortex, 127, 67–77. Xu, X., Zuo, X., Wang, X., & Han, S. (2009). Do you feel my pain? Racial group membership modulates empathic neural responses. Journal of Neuroscience, 29 (26), 8525–8529. Yamada, M., Lamm, C., & Decety, J. (2011). Pleasing frowns, disappointing smiles: An ERP investigation of counterempathy. Emotion, 11(6), 1336–1345. Zahn-Waxler, C., Schoen, A., & Decety, J. (2018). An interdisciplinary perspective on the origins of concern for others: Contributions from psychology, neuroscience philosophy and sociobiology. In N. Roughley & T. Schramme (Eds.), Forms of fellow feeling: Empathy, sympathy, concern and moral agency (pp. 184–215). Cambridge University Press. Zaki, J. (2014). Empathy: A motivated account. Psychological Bulletin, 140(6), 1608–1647.

PART III

Behavior

12 Prosociality
Oriel FeldmanHall and Marc-Lluís Vives

In retrospect, it was a seminal moment in the psychology world: Stanley Milgram surprised even himself when he discovered just how obedient people can be. It was 1963 and Milgram was looking to answer questions about the cruel and immoral side of human nature: Could Nazi Germany repeat itself? Are people capable of following orders that profoundly conflict with their moral principles? Could normal, law-abiding citizens of New Haven, Connecticut be pushed to do gravely immoral acts? Milgram required his subjects to apply electric shocks to a second subject (a confederate) each time a wrong answer was given during a memory test. The result: 67 percent of them were willing to apply a level of shock sufficient to silence a screaming confederate.

Following on the heels of World War II, Milgram's experiments, which highlighted just how antisocial people can be, were particularly germane; however, issues relating to morality, prosociality, honesty, trust, and cooperation have been, and will continue to be, relevant throughout humanity's tenure. This is largely because the issues central to being prosocial revolve around our capacity for successful social living. Prosocial actions enable an individual to put the needs of another before their own and act in ways that increase the well-being of the other, typically at the expense of oneself (Batson & Powell, 2003; Eisenberg, 2014; Penner et al., 2005). From mundane acts, such as giving your taxi to a stranger because she is in a hurry, to more consequential behaviors, such as heroically donating organs to those in need (Marsh et al., 2014), our prosocial calculus enables society to function in a relatively seamless manner. Successful social living, however, is only possible when there is a quorum of individuals behaving in altruistic, cooperative, and trustworthy ways.

This notion was not readily apparent in social psychology's early research tradition, which, in its nascent years, primarily focused on documenting antisocial behavior. There was good reason for this focus. As social psychology was finding its footing as a field, the world had just been rocked by two world wars, in which more than 100 million people died. The realization that humanity's darker side can be so easily uncovered was fresh, and researchers leveraged this insight to explore the depth of this antisocial capacity. Milgram's obedience study (Milgram, 1963) and Zimbardo's Stanford prison study (Haney et al., 1973) are classically evoked examples, and Sherif's summer camp study in the late 1950s illustrates another storied case for how competition and group membership
rapidly unfold in the wild (Sherif et al., 1961). In the last two decades, however, there has been a seismic shift toward understanding the prosocial mind. If we can take Google's software that counts the frequency of words in written text as a barometer of topic interest, we would see that use of the word "prosocial" has grown exponentially since the beginning of the 2000s. Prosociality is trending.

In this chapter, we leverage this newfound interest to provide a brief account of the historical relevance of prosocial behavior and the current research objectives within the fields of social psychology and economics. In the last few decades, the topic of prosociality has been approached from a number of different perspectives and research fields. Even within the domain of psychology, the approach has differed substantially depending on tradition, from investigating which personality traits correlate with prosociality (Graziano & Eisenberg, 1997) to describing the external environmental factors that influence prosocial actions (Latané & Darley, 1970). In the past, major reviews have focused on the many levels of analysis one could take, from micro to macro, which has helped guide the level of abstraction at which a research question is pitched (Penner et al., 2005). Here we adopt an experimentalist framework, covering ground from a number of different subfields. We begin by discussing how early paradigms originating from behavioral economics were co-opted by social psychologists to investigate prosocial behavior in the laboratory. We then explain how more recent efforts have been leveraged to create paradigms that more adequately capture the tensions experienced in the real world. We then address the potential origins of human prosociality by briefly reviewing both primate and developmental work. We also highlight work to date that has taken a more mechanistic approach to the question of what motivates people to be prosocial (e.g., empathy, risk attitudes, temporary affective states). No chapter on prosociality would be complete without a brief nod to the neuroscientific advances used to map the prosocial brain. Finally, we conclude with often overlooked topics and offer a few suggestions for future research avenues.

12.1 Studying Prosocial Behavior in the Lab

Although entirely unintended, economists have a historical lead on other fields when it comes to examining prosocial behaviors in the laboratory. Economic games – which were largely created to explore negotiation tactics, the boundary conditions of "rationality," and the strategies that destabilize the economic system – swiftly revealed that humans have a deeply ingrained prosocial tendency. Economic games typically involve the division of some good (in most cases, money) between different parties. Perhaps the most famous paradigm is the prisoner's dilemma, created by Dresher and Flood in the late 1940s. The prisoner's dilemma captures the nature of cooperation: A relatively good outcome is accomplished only by virtue of two parties deciding to collaborate, even though each individual is better off defecting, whatever the other party chooses. Since its creation, the prisoner's dilemma has been used in hundreds of experiments,
leading to critical insights about the conditions that scaffold prosocial tendencies. For example, communication between parties facilitates cooperation (Sally, 1995; Wichman, 1970); cooperation is harder to maintain as the termination horizon gets closer (e.g., coalition governments in an election year; Embrey et al., 2018); and people routinely exhibit tit-for-tat strategies when cooperating (Axelrod & Hamilton, 1981).

An early game-theoretic analysis of human interaction argued that being presented with the possibility of an economic gain linked to an antisocial option would likely lead to selfish behavior. However, one behavioral paradigm – called the ultimatum game (UG), originally designed by Güth et al. (1982) to test how people negotiate – revealed the exact opposite. In the UG, there are two players, a proposer and a responder. The proposer is endowed with an amount of money and must decide how to divide it between themselves and the other party. The responder can then accept or reject their offer. If the responder accepts, both parties receive the monetary split as proposed. If the responder rejects, neither party receives any money. What should we expect? Fifty years ago, an economist would have argued that the proposer will make the smallest possible offer (say, one cent), and the responder will accept this offer since some money is better than no money. As any social psychologist would have told you, this prediction is not borne out. First, proposers make relatively generous offers (somewhere between 30 and 50 percent of the monetary pie; Camerer, 2003), with an equal 50/50 split being the modal response. These proposers are not acting irrationally; responders who receive less than 20 percent of the pie typically reject the offer, punishing the proposer for behaving in a selfish manner (Cooper & Dutcher, 2011).

That people were making offers that deviated substantially from the smallest possible amount rocked the proverbial boat. Why were people being so generous? One explanation is that the threat of punishment – and fear of not receiving any money – drives this altruistic behavioral pattern. Were proposers unwilling to make low offers out of fear of being rejected and thus not receiving any money? To account for this possibility, Forsythe et al. (1994) created the dictator game (DG), which tests this punishment account under stricter conditions. In the DG, there are also two parties. This time, however, the dictator makes the first – and only – move in the game. Because there is only one mover making an offer, the dictator need not fear rejection or punishment, since the receiver has no say in how the money is distributed. What happens in such an unfettered decision space? Dictators give, on average, 20 percent of the original endowment (Camerer, 2003; Engel, 2011). Referencing this often-cited number, however, misses the fact that offers in the DG are bimodally distributed, which tells a very different story: People either give a great deal (half of the endowment) or do not give at all (Engel, 2011). In essence, a good chunk of humanity appears to be spontaneously generous.

One critical component of behaving prosocially is that the actor typically incurs some cost. This can come in many forms, from the monetary costs we have outlined to psychological vulnerabilities, which means that, at least for some
period of time, the actor is exposed to exploitation from another. This type of moral hazard is encapsulated in the trust game (Berg et al., 1995). In this game, one participant (the trustor) is endowed with money and has to decide whether to give money to the other participant (the trustee). Any amount that is given to the trustee is multiplied by a given factor (e.g., tripled or quadrupled). The trustee then has the option to reciprocate by giving some money back, or they can pocket all the money for themselves, leaving the trustor with nothing. In contrast with prior paradigms, the highest possible payoff is obtained only if the trustor invests all their money in the trustee – and only if the trustee reciprocates. The majority of people entrust the other person with at least a portion of their original endowment (Fehr & Fischbacher, 2003). Critically, the trustee tends to reciprocate, even in one-shot games where there is no chance of repeated play, and the percentage of money returned increases as a function of the amount of money originally trusted (Fehr et al., 1993; Hayashi et al., 1999). The more people trust, the more that trust is reciprocated.

Money allocation tasks are another popular method for exploring altruistic tendencies or other-regarding preferences (Charness & Rabin, 2002). In the classic variant of this task, participants face a list of options that describe two monetary outcomes, one for themselves and one for a peer. Options vary in how good or bad they are for each person in the dyad, from giving all the money to the participant and nothing to the peer, to giving all the money to the peer and nothing to the participant. Results seem to align with stable dispositional tendencies, where altruists mostly give, and egoists mainly care about their own payoff (Murphy & Ackermann, 2014). Moreover, people tend to allocate money in ways that benefit the in-group over the out-group (Balliet et al., 2014; Ng, 1986).

Together, research using these economic games paints a rosier picture of humanity's prosocial tendency than some of the classic early social psychology research (e.g., Milgram and Zimbardo's studies). Although there is elegance in the simplicity of these games (mechanistically narrowing the possible motives that can cause prosocial behavior), such reductionist approaches also come with downsides. First, each economic game measures only concerns related to distributive justice – how goods are divided among parties. However, prosocial behaviors can involve other moral motives that stretch beyond distributive justice, including concerns about the well-being of others. Efforts have been made toward creating richer paradigms that better capture other types of prosocial behaviors across contexts (Darley & Batson, 1973), such as modifications to the ultimatum game (FeldmanHall et al., 2014; Shalvi et al., 2011) or the creation of entirely new paradigms that sidestep economic game theory altogether (FeldmanHall et al., 2012; Lockwood et al., 2021; Lockwood et al., 2017). Second, in the original economic testbeds, researchers have artificially narrowed the option set (typically two options are presented: accept/reject, reciprocate/defect, etc.). For example, in the UG, a responder who receives an unfair offer has only one option to signal her disapproval – punish the perpetrator by rejecting the offer. Otherwise, accepting the offer signals that the responder is okay with receiving an unfair
offer. Offering a participant only two options certainly simplifies the problem; however, this forced-choice paradigm ignores many contextual factors and social tensions – tensions that we know are critical to shaping behavior in real life (FeldmanHall et al., 2012; Galizzi & Navarro-Martinez, 2019; Levitt & List, 2007).

Outside the laboratory, punishment is rarely the only available option for a victim of a transgression. Although punishment is often considered to be antisocial, it can also be conceptualized as prosocial when it prevents immoral actions or enhances moral actions. For example, classic work in behavioral game theory illustrates how the simple threat of punishment in an economic game (i.e., being allowed to punish other players in the game, versus not having the option to punish) enhances cooperation by a factor of two (Boyd et al., 2010; Fehr & Gächter, 2000). In other research, allowing people the option to communicate with other players reduces the number of norm violations, such as treating people unfairly or free riding during a public goods game (Jolly & Chang, 2021). In essence, punishment, whether exercised directly or indirectly through gossiping, serves as an important social oil that helps to maintain or even boost moral actions.

However, research reveals that the literature's singular focus on punishment as a form of rebalancing the scales of justice or preventing immoral behavior might have been misleading. For example, punishment rates decrease substantially when people have other ways to communicate their disapproval, such as writing an emotional message to the perpetrator (Xiao & Houser, 2005). Moreover, when the UG is modified to include a wide variety of options for responding to an unfair offer (i.e., the justice game, FeldmanHall et al., 2014), participants rarely choose to reject (or accept), and there seems to be little desire for punishment altogether (FeldmanHall et al., 2014; Stallen et al., 2018). In fact, no option that incorporates punishment is favored. Instead, compensation of the victim, which increases the victim's payoff without punishing the perpetrator, is consistently chosen 65 percent of the time (Chavez & Bicchieri, 2013; FeldmanHall, Otto, & Phelps, 2018; Heffner & FeldmanHall, 2019; Mattan et al., 2020; Son et al., 2019). This strongly counters the classic notion that humans prefer to punish individuals who behave unfairly and instead illustrates that people will preferentially compensate themselves and others (Chavez & Bicchieri, 2013; FeldmanHall et al., 2014; Hershcovis & Bhatnagar, 2017) – without punishing the transgressor – even when punishment is completely free. In other words, so long as a victim's needs are met, there seems to be little desire to punish a norm violator.

There are other dimensions of prosociality, such as avoiding harm or caring for others' well-being, that stretch beyond distributive and retributive justice concerns. Our lab created a paradigm called the Pain versus Gain paradigm (PvG) that asks participants (Deciders) to choose between benefiting themselves financially or preventing physical harm to another, the Receiver (FeldmanHall et al., 2012). Deciders choose how much, if any, of a £20 endowment they would like to spend to prevent a series of painful electric shocks from reaching the
Receiver – shocks they would observe being administered. The more money Deciders give up, the lower the shock inflicted. Deciders could even give up all their money to ensure no shocks reached the Receiver. They could also do the opposite and keep all the money, ensuring the highest shocks are administered. Deciders make this decision many times, knowing that any money remaining at the end of the experiment is randomly multiplied for a potential maximum payout of £200. Results reveal that, on average, participants forego half their endowment to prevent a stranger from receiving electric shocks (FeldmanHall et al., 2012). Other labs that have subsequently used this paradigm find similar effects (Gallo et al., 2018), and in some cases, individuals are even willing to take the shocks themselves to prevent harm from reaching another (Crockett et al., 2014). Across all these testbeds, the emerging picture is that humans routinely engage in a variety of prosocial acts.

One critical, but understudied, aspect of prosociality is the role of uncertainty. This is largely a consequence of a concerted effort to remove any motivational ambiguity from economic games (i.e., if one is trying to establish a behavioral effect, free of confounds, it is better to be able to restrict the number of possible explanations for the effect). For example, in the DG, there is little doubt that an individual who offers 10 percent of the pie is behaving selfishly. Alas, in the real world, intentions and outcomes are not always so clear-cut (FeldmanHall & Shenhav, 2019). In fact, when there is the opportunity to plausibly deny immoral behavior, people readily do so. Cheating games offer a unique test of plausibly denying any wrongdoing (Fischbacher & Föllmi-Heusi, 2013; Gerlach et al., 2019). In the most paradigmatic example, people are asked to roll a die, the outcome of which will determine their monetary payoff (1 = $1, 2 = $2, and so on). The experimenter is never present, and thus a participant can report any number. At the individual level, it is impossible to determine if a participant is lying. However, at the group level, we can examine whether the average reported outcome deviates from that expected by chance. Results reveal that people cheat but only by a little (Mazar et al., 2008). The degree to which one cheats is influenced by who is around: A corrupt partner can enhance cheating behaviors (Weisel & Shalvi, 2015), while a consistently honest out-group member can attenuate cheating behavior – it is desirable to appear more virtuous than a moral out-group member (Vives et al., 2022).

Paradigms have also been created to try to capture real-world problems in which prosociality is a necessary antecedent for successfully completing the task. Perhaps the most emblematic example of this dynamic is the current, large-scale societal problem of stymieing climate change. A surge of experiments has created novel tasks that capture important dimensions of the climate change problem, including the exploitation of resources and the need for widespread cooperation across generations to combat such exploitation (Barfuss et al., 2020; Hauser et al., 2014; Milinski et al., 2008; Tavoni et al., 2011). By creating paradigms that map to real-world problems, researchers can gain necessary insight into how to build better interventions that can boost prosocial behaviors in the wild.
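
Returning to the die-roll cheating paradigm described above, the group-level logic can be illustrated with a short Python simulation. The cheating rate, the form of cheating, and the sample size below are assumptions chosen for illustration, not figures from the cited studies.

import random

# A fair six-sided die has expected value (1+2+3+4+5+6)/6 = 3.5.
CHANCE_MEAN = 3.5

def report(cheat_prob=0.2):
    """One participant rolls privately; with the assumed probability
    cheat_prob they inflate their report by one point rather than
    claiming the maximum outright ('cheating a little')."""
    roll = random.randint(1, 6)
    if random.random() < cheat_prob and roll < 6:
        return roll + 1
    return roll

# No single report is diagnostic, but the average across many participants is.
reports = [report() for _ in range(10_000)]
group_mean = sum(reports) / len(reports)
print(f"group mean = {group_mean:.2f} vs. chance expectation = {CHANCE_MEAN}")
# A mean reliably above 3.5 indicates cheating at the group level,
# even though no individual participant can be identified as a liar.
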
To summarize, the past few decades of research conducted in the laboratory have revealed just how common and widespread prosociality is, a behavior that occurs even when there are no possible gains – monetary or reputational (see Section 12.4 on motivations). It appears that humans are well equipped to behave prosocially. Where does this tendency come from? A common question is whether prosociality has an innate basis. We now turn to research that investigates the extent to which prosociality is present in other animals and also review the mechanisms posited to explain the evolution of prosociality.
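
Before moving on, the payoff rules of three of the economic games reviewed in this section can be summarized in a short Python sketch. The endowments and the multiplier below are illustrative assumptions; actual studies vary in stakes and framing.

def ultimatum_game(endowment, offer, accepted):
    """Proposer offers a split; if the responder rejects, both get nothing."""
    if not accepted:
        return 0, 0
    return endowment - offer, offer  # (proposer payoff, responder payoff)

def dictator_game(endowment, gift):
    """The dictator makes the only move; the receiver cannot reject or punish."""
    return endowment - gift, gift  # (dictator payoff, receiver payoff)

def trust_game(endowment, invested, returned, multiplier=3):
    """Invested money is multiplied (here tripled, an assumed factor)
    before the trustee decides how much to send back."""
    pot = invested * multiplier
    trustor = endowment - invested + returned
    trustee = pot - returned
    return trustor, trustee

# Example: full trust that is partially reciprocated beats not trusting at all.
print(trust_game(endowment=10, invested=10, returned=15))  # -> (15, 15)
print(trust_game(endowment=10, invested=0, returned=0))    # -> (10, 0)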

12.2 Our Ancient Prosocial Roots: Apes and Other Mammals

Prosocial behaviors and the need to cooperate to solve a collective problem are not uniquely human. There are many other mammals that coordinate with one another to promote their survival (e.g., in hunting). Evolutionary biologists have looked to our closest biological relatives, the great apes, as well as other species, to understand whether human prosociality has an evolutionary origin. An examination of primate behavior and primates' capacity to relate, connect, and cooperate suggests that characteristics of human prosociality are also found in many of our nonhuman relatives.

Traditionally, investigations regarding the capacity for prosocial behaviors are based on observation and field studies (De Waal, 2008). However, one of the most striking examples of altruistic, self-sacrificial behavior occurred in the laboratory, with Masserman's 1964 experiment on rhesus monkeys (Masserman et al., 1964). Fifteen monkeys were trained to pull a cord to receive a piece of food. After training, another monkey was put into an adjacent cage. When the original monkey now pulled the cord to receive a piece of food, a painful electric shock was delivered to the nearby monkey. A number of monkeys consistently preferred to go hungry rather than shock another, with one even refraining from eating for 12 days, presumably to ensure that no shock was administered to the conspecific.

A large and growing body of research indicates that nonhuman primates regularly practice prosocial actions (De Waal & Suchak, 2010). Take, for instance, the remarkable helping nature of chimps. Without apparent thought to personal gain, chimps have shown they are more than willing to help not only kin but even unfamiliar humans (Warneken et al., 2007). When researchers placed a stick beyond the reach of a human but within reach of the chimp, the chimps spontaneously helped the reaching human. Once the person stopped trying to reach for the stick and there was no longer a clear goal requiring a helping action, the chimps stopped assisting. Even costly helping revealed the same results: Chimps were willing to make the effort to climb high to help a person who was reaching for the stick. The difference between these examples and other well-documented prosocial behaviors – like food sharing, grooming, and coalition formation – is the presence of psychological altruism, or caring about another's welfare enough to deliberately benefit the other simply for their sake. Food sharing, grooming, and other similar activities are all behaviors that
fall under biological (reciprocal) altruism (i.e., if you scratch my back, I will scratch yours), behaviors common among both primates and humans (Trivers, 1971). However, the picture is not always rosy: Chimps can also display aggressive behaviors, and most primates, including humans, exhibit both anti- and prosocial tendencies, depending on the context (e.g., aggression toward a potential threat; Wilson & Wrangham, 2003).

A major question centers on the existence of shared prosocial motives between humans and other primates. Evidence of shared prosocial behaviors would suggest that humans acquired preferences for specific actions and motives – such as concerns for equality – throughout their evolutionary history. In humans, concerns for equality lead to cooperative actions, and people are even willing to pay a price to reduce inequality (Fehr & Schmidt, 1999). A similar concern for equality is observed in nonhuman primates, especially for disadvantageous inequality. There is a famous video of a capuchin monkey happily eating a cucumber, which was given as a reward for completing a task. Once a nearby capuchin receives a grape (a much more coveted treat if you are a capuchin) for finishing the same task, the first, formerly happy capuchin takes the cucumber and throws it in the face of the researcher (Brosnan & De Waal, 2003). This strong aversion to inequality has been shown in chimpanzees (Brosnan, 2006) and even nonprimate species such as domestic dogs (Range et al., 2009).

The modern human capacity to behave prosocially may have even deeper evolutionary roots that go beyond relatively sophisticated motives, such as inequality aversion (Meyza & Knapska, 2018; Meyza et al., 2017). Spontaneous altruistic behaviors have been observed in other animals as well, including rats (Bartal et al., 2011; Hernandez-Lallement et al., 2020) and mice (Burkett et al., 2016), who are willing to put off eating a tasty piece of chocolate to figure out how to free an entrapped cage mate. Even more surprising, the rodent then shares the chocolate with the newly freed cage mate, exhibiting a behavior that encapsulates a common refrain heard from many parents – sharing is caring.

Certain prosocial behaviors – especially those that remove the fitness cost associated with the action – are believed to stem from an evolutionary basis (Axelrod & Hamilton, 1981; Nowak & Sigmund, 2005; Trivers, 1971). Kin selection, for example, postulates that helping someone who is genetically related carries little net fitness cost, since the person or animal who is helped will transmit (through reproduction) part of the genetic information of the one who helped (Hamilton, 1964). From an evolutionary account, prosocial behavior among strangers is often explained through the idea of direct reciprocity: There is no cost as long as there is an expectation of repayment (Trivers, 1971) – a theory that has been extended to strangers of strangers, where repayment is accomplished by someone other than the person who helped, a phenomenon known as indirect reciprocity (Nowak & Sigmund, 2005). More recent proposals have suggested other mechanisms, such as the survival of the friendliest. This account suggests that Homo sapiens self-domesticated to be prosocial (Hare, 2017). Even though there is an ongoing discussion regarding the level at which all
these evolutionary adaptations occur (e.g., at the individual or the group level; see Rand & Nowak, 2013 on the mechanisms behind the evolution of cooperation), prosocial tendencies seen in primates appear to be some form of rudimentary "preadaptation" for modern-day human prosociality.
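
The kin selection logic invoked above is often summarized by Hamilton's (1964) rule, where r is the genetic relatedness between helper and recipient, b the fitness benefit to the recipient, and c the fitness cost to the helper; the numbers in the worked illustration below are assumed purely for concreteness.

\[
rb > c \qquad \text{e.g., for a full sibling } (r = 0.5),\ b = 3,\ c = 1:\quad 0.5 \times 3 = 1.5 > 1 .
\]

On this reading, altruism toward kin can evolve whenever the benefit to the relative, discounted by relatedness, outweighs the cost to the helper.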

12.3 Development of Prosocial Behavior

Another useful way to understand how prosociality unfolds is to take a developmental lens, tracking the emergence of prosocial behaviors from infancy to young adulthood. In the past 20 years there has been an explosion of research focused on measuring whether infants express prosocial concerns. The data have converged on the idea that prosociality is present at some of the very earliest stages of life (see Chapter 18, this volume). For example, research reveals that 6- and 10-month-olds prefer prosocial individuals who help others over antisocial individuals who hinder others (Hamlin et al., 2007), and by 10 months, babies can also distinguish between accidental and intentional help (Woo et al., 2017). These preferences extend to learning about the moral character of others as well: 16-month-olds are sensitive to the food preferences of a prosocial puppet that helps others but are insensitive to the food preferences of an antisocial puppet that hurts another (Hamlin & Wynn, 2012) – revealing a selective attunement to moral agents. While these results lend support to a nativist account of humans being born with some semblance of a moral calculus, a growing number of mixed findings has led to a multisite collaboration that aims to replicate and clarify the robustness of these effects (Lucca et al., in press).

At some point in the developmental trajectory, these preferences for prosocial individuals are transformed into prosocial actions enacted by the child itself. One study found that, in their natural settings, toddlers spontaneously comfort others starting around 12–16 months (Zahn-Waxler et al., 1983). In the same vein, 14-month-olds help unrelated adults accomplish a goal, even when the adult does not request help (Warneken & Tomasello, 2006). By 30 months, most toddlers appear to have the capacity (and desire) to spontaneously help others, especially when the altruistic action involves a degree of empathic understanding (Svetlova et al., 2010). Around three years of age, children become more selective in their prosocial behavior. For example, children begin to help friends more than other individuals (Engelmann et al., 2019) and preferentially help those who are poor versus rich (Essler et al., 2020). At about five years, children begin to feel sorry for the misbehavior of other in-group members and will apologize even when the child had nothing to do with the wrongdoing (Over et al., 2016).

What motivates children to help others? There are two widely held views (Hepach et al., 2022), one that stresses an intrinsic desire borne out of other-regarding preferences and another that highlights a more strategic approach

281

282

             -        

(both of which are detailed in Section 12.4). The more popular of the two views investigates how the development of intrinsic motivators such as empathy acts as a necessary precondition for expressing prosocial actions (Hoffman, 2000). For example, even newborns show distress in response to another newborn's cry (Dondi et al., 1999). Other work has sought to understand whether this motivation is preserved across childhood, or whether other factors come into play during certain developmental milestones. Recent research demonstrates that while two-year-olds largely appear to be intrinsically motivated to help, by the age of five, other, less intrinsically derived strategies begin to be incorporated (Hepach et al., 2022). In Sections 12.4 and 12.5, we review in more detail the stable motives and temporary states that have been postulated to explain mechanistically why people behave prosocially.

12.4 Stable and Enduring Motives for Human Prosocial Behavior

While animal research reveals that prosociality can be tracked across species, and developmental work reveals when these tendencies arise, neither can explain the psychological mechanisms that underlie human prosocial behaviors. Why do people give to charity, lend money to a friend, or, in some extreme cases, place themselves in harm's way to prevent harm to another? Philosophers, and more recently social scientists, have tried to identify and explain the broad array of internal psychological mechanisms that govern prosociality. Adam Smith famously argued that cooperative behavior stems from the logic of self-interest, not from an intrinsic motivation to be cooperative (Smith, 1776). This rather simplified account (which Smith himself contradicts in The Theory of Moral Sentiments; Smith, 1853) has long been echoed, mostly by economists. However, in the second half of the twentieth century, a cottage industry of research aimed to demonstrate that people behave prosocially even when there is no chance of personal gain. Once this was established (although there are great disparities across societies; Henrich et al., 2001), researchers turned to explaining why some people behave prosocially while others do not and to identifying the contexts that help to amplify (or attenuate) prosocial behaviors.

12.4.1 A Risk Perspective

Given that humans are not privy to the intentions, motivations, or beliefs of another, almost every social decision we make is marked by uncertainty (FeldmanHall & Shenhav, 2019). For example, a trustworthy or cooperative individual can easily be exploited by another, less trustworthy individual. It is therefore necessary to constantly estimate the possible risks associated with taking a prosocial action. Given this natural mapping between prosociality and uncertainty, some work has explored individual differences in risk attitudes and their effect on altruism, trust, and cooperation (Eckel &
Wilson, 2004; Fehr, 2009). Early research revealed mixed evidence, with some work demonstrating a positive relationship between prosocial actions and risk attitudes and other work showing no effect whatsoever (Fairley et al., 2016). One possible reason for these mixed results was that individual risk attitudes were measured in gambling contexts in which people had access to the known probability associated with a given outcome (e.g., knowing that the probability of winning a lottery is 80 percent). In the social world, we do not have access to the known (or exact) probability of whether another individual will exploit us. The “risk” we perceive in the social domain is more akin to ambiguity – a type of uncertainty in which the probability associated with each outcome is unknown. Once ambiguity, rather than risk, is measured at the individual level, there appears to be a robust relationship between those who can tolerate ambiguity and the tendency to behave prosocially (Vives & FeldmanHall, 2018). This relationship vanishes once the ambiguity associated with other individuals is resolved, for example, by gaining access to a history of their behavior or by the existence of strong social norms.
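The distinction between risk (a known probability of being exploited) and ambiguity (an unknown one) can be illustrated with a small sketch. The ambiguity rule below, which blends best-case and worst-case expectations according to a tolerance parameter, is one common way of formalizing ambiguity attitudes and is used here purely for illustration; it is not the measure employed in the studies cited above, and all values are assumptions.

def value_under_risk(p_reciprocate, gain, loss):
    """Expected value of trusting when the probability of reciprocation is
    known exactly (a 'risky' prospect)."""
    return p_reciprocate * gain + (1 - p_reciprocate) * loss

def value_under_ambiguity(p_low, p_high, gain, loss, tolerance=0.5):
    """An 'ambiguous' prospect: the probability of reciprocation is only known
    to lie somewhere in [p_low, p_high]. The decider blends the best and worst
    cases according to an ambiguity-tolerance parameter (1 = fully optimistic,
    0 = fully pessimistic). Purely illustrative."""
    worst = value_under_risk(p_low, gain, loss)
    best = value_under_risk(p_high, gain, loss)
    return tolerance * best + (1 - tolerance) * worst

if __name__ == "__main__":
    # Trusting pays +10 if reciprocated and -10 if exploited.
    print(value_under_risk(0.8, 10, -10))                           # 6.0
    # A social partner rarely comes with a known probability: here the decider
    # only knows that reciprocation lies somewhere between 20% and 80%.
    print(value_under_ambiguity(0.2, 0.8, 10, -10, tolerance=0.7))  # approx. 2.4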

12.4.2 A Matter of Reputation

Because evolutionary forces are often thought to favor selfishness, reputational concerns have been leveraged in an attempt to reconcile this pressure with the existence of prosocial behaviors (Boyd & Richerson, 1989; Nowak & Sigmund, 2005). According to the reputation account, prosocial behaviors occur to bolster one's reputation and maximize payoffs – assuming repeated interactions among the same people. The price an individual pays for being prosocial is offset by gaining a positive reputation, which likely increases the probability of being treated well by others in the future. In line with this notion, people do behave more prosocially when reputational concerns are at stake (i.e., when they think others are watching; Bradley et al., 2018). By deciding quickly, people can also signal to others that prosocial actions are their default behavior (Jordan et al., 2016). This type of prosocial signaling is useful for encouraging repeated social interactions: During economic games, people use speedy prosocial responses as a cue to figure out who they should interact with, preferring those who are "mindlessly" prosocial (Jordan et al., 2016). Critics have argued, however, that reputation as a motivational concern cannot explain a number of findings, including the high rates of prosociality observed when reputation is not at stake (i.e., when nobody is watching or decisions are anonymous). To address this critique, it has been argued that people internalize the value of prosociality under standard reputational concerns and, as a consequence, automatically endorse prosocial behavior even when nobody is watching (Jordan & Rand, 2020).

12.4.3 Other-Regarding Motives

Perhaps the most accepted explanation regarding motives for prosocial behavior is the existence of what are considered pure, other-regarding concerns
(Cooper & Kagel, 2016). The more an individual is concerned about the well-being of others, the more likely they are to act prosocially (Eisenberg et al., 1989). This dovetails with common wisdom and everyday observations: People are more prosocial with those they care about (colleagues, friends, and family). It has been theorized that our capacity for prosociality first originated to take care of kin and, over time, this motive has generalized and extended to other people outside our kinship circles (Christakis, 2019). Research further reveals that the degree to which people value others (as measured by the Social Value Orientation scale; see Murphy & Ackermann, 2014) predicts their prosocial behavior in the laboratory (Balliet et al., 2009). In contrast, stable traits that reflect egoistic motives, such as narcissism, psychopathy, and Machiavellianism, are inversely related to prosocial behavioral patterns (Thielmann et al., 2020).
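As a rough illustration of how other-regarding preferences can be quantified, the sketch below converts a person's average allocations to self and other into an angle, following the general logic of Social Value Orientation scoring (Murphy & Ackermann, 2014): larger angles reflect greater weight placed on the other person's outcomes. The midpoint of 50 assumes a 0–100 payoff scale, and both it and the sample allocations are assumptions made for this example.

import math

def svo_angle(mean_self, mean_other, midpoint=50.0):
    """Express other-regard as an angle: average allocations to self and other
    are referenced to the midpoint of the payoff scale and converted to an
    angle in degrees. Larger angles indicate greater concern for the other's
    payoff."""
    return math.degrees(math.atan2(mean_other - midpoint, mean_self - midpoint))

if __name__ == "__main__":
    # A purely self-interested allocator: maximizes own payoff, ignores other.
    print(round(svo_angle(mean_self=100, mean_other=50), 1))  # 0.0
    # An allocator who gives up some payoff so the other does equally well.
    print(round(svo_angle(mean_self=85, mean_other=85), 1))   # 45.0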

12.5 Temporary States Shaping Prosocial Behavior: The Role of Emotion

The motivations listed earlier are enduring dispositional factors that tend, on the whole, to shape an individual's relative engagement in prosocial behaviors. However, there are also more micro, temporary factors – such as one's emotions – that are capable of enhancing or attenuating an individual's willingness to behave prosocially (Heffner & FeldmanHall, 2022; Keltner et al., 2006). Ever since Hume argued that emotions help to motivate behavior and social coherence (Hume, 1896), the idea that emotions act as a fundamental precursor of prosocial behavior has endured (Bailey et al., 2020; Coulombe et al., 2019; Malti & Krettenauer, 2013; Penner et al., 2005). A plethora of evidence indicates that emotion plays a critical role in prosocial behaviors. For example, broad affective states – such as positive moods – can amplify charitable giving (Bohner et al., 1992), and specific emotions like contempt, shame, disgust, and guilt play a special role in the establishment of response-dependent valuation, norm compliance, and the motivation to behave in prosocial ways (D'Arms & Jacobson, 1994). Understanding how emotion and choice interact is crucial to understanding the basic motivations governing human prosociality.

12.5.1 Affective States

Broad affective states, such as positive affect, or transient moods that are not explicitly tied to the causes and consequences of a decision itself, can readily modify how individuals respond when faced with a prosocial dilemma. One early example, documented by Isen and Levin (1972), revealed that a positive, happy mood can lead people to help others (see also Curry et al., 2018). To induce a positive mood, half of the subjects discovered a dime in the return slot of a public telephone. Subjects who found the dime then encountered a stranger who dropped a heap of papers in front of them. Subtly inducing a good mood (by way of a found dime – it was the 1970s) led subjects
to help the stranger pick up the papers, more so than those who were not put in a good mood. In essence, the "glow of goodwill" promoted prosocial behavior. Follow-up studies reported similar findings, whereby inducing positive affect through comedy videos led people to make more prosocial decisions (DeSteno et al., 2010; Valdesolo & Desteno, 2006), and reducing negative emotions also increased prosocial behavior (Kemeny et al., 2012). Additionally, recent research from our lab found that emotion prediction errors – violations of emotional expectations – play an outsized role in governing whether an individual will punish a norm violator, and, more broadly, in shaping our willingness to act prosocially (Heffner et al., 2021). The idea that affective states govern our perceptions, including our social interactions, is gaining traction in the field, with more formal models dissecting exactly how affective states shape how we navigate and interact with our world (Eldar et al., 2016; Quoidbach et al., 2019).
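To illustrate the general logic of an emotion prediction error, the sketch below defines the error as the gap between how one expected to feel and how one actually feels, and passes it through a logistic function so that larger negative surprises yield a higher probability of punishing. This is only a toy rendering of the idea, not the computational model reported by Heffner et al. (2021); the valence scale and the sensitivity parameter are assumptions.

import math

def emotion_prediction_error(expected_valence, experienced_valence):
    """Prediction error: how one actually feels about an interaction minus how
    one expected to feel (valence coded from -1 to 1, an assumed convention)."""
    return experienced_valence - expected_valence

def punishment_probability(prediction_error, sensitivity=4.0):
    """Illustrative logistic link: the more negative the emotional surprise,
    the more likely the person is to punish. The sensitivity value is an
    arbitrary illustrative choice, not an empirical estimate."""
    return 1.0 / (1.0 + math.exp(sensitivity * prediction_error))

if __name__ == "__main__":
    # Expecting a fair offer to feel mildly good (+0.5) but actually feeling
    # insulted (-0.8) produces a large negative error and a high punishment
    # probability (roughly 0.99 with these illustrative settings).
    pe = emotion_prediction_error(expected_valence=0.5, experienced_valence=-0.8)
    print(round(pe, 2), round(punishment_probability(pe), 2))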

12.5.2 Distress

Other affective states, such as distress, can be tied to the decision to display prosocial behaviors. Watching another experience physical or emotional pain, such as social rejection, is highly aversive and is known to cause distress in the observer (Decety et al., 2008; Eisenberger & Lieberman, 2004; Singer et al., 2004), which can in turn amplify prosocial actions. For example, in one study, subjects' skin conductance responses (SCRs) were measured while they received painful shocks or observed pain being inflicted on another (Hein et al., 2011). Subjects could prevent pain being administered to another by choosing to endure the painful shocks themselves. Critically, the magnitude of the SCR – the body's physiological response to stressful or arousing events, which was taken here as a proxy for distress – predicted whether a subject would altruistically take the painful shocks in place of another. While the link between distress and altruistic behavior has been replicated in other contexts as well (FeldmanHall et al., 2015), there is early theoretical and empirical work suggesting that feelings of distress are also associated with the opposite behavioral response: avoiding helping altogether (Shaw et al., 1994). In this case, terminating one's own distress is more pressing than helping another in need (Batson et al., 1987; Eisenberg, 2000; Eisenberg et al., 1994). An example is when a person crosses to the other side of the street after seeing a homeless person on the sidewalk, since avoiding the moral situation attenuates the discomfort that comes with seeing another in distress. This led some researchers to posit that there are two ways to experience distress: either sharing the pain of another (Cialdini et al., 1997; Singer et al., 2004) or understanding that another is in distress without necessarily sharing in the pain itself (Batson, 2011; Masten et al., 2011). Evidence that individuals who show little arousal to the distress of others also exhibit little physiological arousal when distressed themselves (Shirtcliff et al., 2009) hints at the possibility that a failure to respond prosocially may result from a failure to
understand what the other is feeling, rather than from a failure to share in their distress per se. However, taking a bird's-eye view of the data, especially with a nod to more recent neuroimaging work (Zaki & Ochsner, 2012), reveals mixed evidence for this claim. Context, for example, appears to have a strong mediating effect on whether prosociality is driven by understanding or by sharing in the pain of others (see Zaki & Ochsner, 2012, for a more in-depth discussion). Populations who exhibit a reduced ability to understand the distress of another help to clarify this relationship between distress and prosocial choice (K. Blair et al., 2006). Alexithymia, a personality construct characterized by the inability to identify and describe emotions in oneself or others (Sifneos, 1973), showcases the intimate relationship between understanding another's distress and altruistic action (Bernhardt & Singer, 2012; Taylor & Bagby, 2000). Our lab tested the link between the ability to identify and understand another's distress and helping behavior. In our Pain versus Gain paradigm, explained above, we found that those high on the alexithymia spectrum reported experiencing less distress when watching others in pain, which translated into keeping more money at the expense of the Receiver's physical well-being (FeldmanHall et al., 2013). This effect was mirrored at the neural level, with diminished activity in key brain regions that encode the experience of distress.

12.5.3 Empathy

Empathy is one avenue through which people can understand the emotional experiences of another (Decety, 2011; and see Chapter 11 in this volume). The term empathy is applied to a large spectrum of phenomena (Zaki & Ochsner, 2012) and includes a range of emotional dimensions, such as feelings of concern, affect sharing, and cognitive perspective taking. In short, empathy is a multicomponent process that includes both affective and cognitive dimensions (Zaki, 2014). Early foundational research suggested that feeling empathic concern for another in need is the linchpin of motivated helping (Batson et al., 1987). Batson and colleagues tested this empathy-altruism hypothesis (Batson, 2011) across a number of studies. They found that increasing the perceived similarity between participants and the person in need of help increases altruistic behavior (Toi & Batson, 1982). Moreover, inducing a person to feel greater empathy in general increases cooperation in the prisoner's dilemma, regardless of whether the economic game is framed as a social exchange or a business transaction (Batson & Moran, 1999). Empathy might even explain a famous "in the wild" experiment, in which Darley and Batson demonstrated that those who were walking across a university green with ample (rather than no) time on their hands before their next meeting were more likely to offer help to another in need along the way (Darley & Batson, 1973) – perhaps because the helper had enough time to experience or conjure up empathic feelings. More recent research has focused on investigating empathy in the context of witnessing another in physical pain. The prevailing narrative is that empathy is
a form of mirroring another's distress with our own distress. When we experience physical pain, certain brain regions, including the anterior insula, are routinely activated (i.e., the brain's prototypical pain circuit). This neural network is also activated when seeing a loved one – or even a stranger – in physical pain (Morelli et al., 2014; Singer et al., 2004). Evidence of shared neural activity suggests that perceiving an emotion in another can activate the same emotion in the observer. This perception–action matching mechanism cannot, however, differentiate whether the observer is aversively aroused and therefore sharing the distressing experience, or whether the observer is simply recognizing and understanding the distress of the other. Work from our own lab suggests that understanding the distress of another, rather than sharing in their distress, is the mechanism that fuels the empathy–altruism relationship. Using the same Pain versus Gain paradigm, we found that a Decider's trait empathy – their ability to express empathic concern toward another – correlated with the amount of money they were willing to forgo to attenuate the number of shocks administered to the Receiver (FeldmanHall et al., 2015). This relationship between trait empathy and costly altruism was not reflected by activity in the classic pain circuit (i.e., anterior insula and anterior cingulate). Instead, we found the empathy–altruism relationship to be indexed by activity in the caudate, ventral tegmental area, and subgenual anterior cingulate – regions critical for processing reward and social attachment.

12.5.4 Guilt

As with empathy, guilt can also motivate people to behave in prosocial ways, especially when trying to repair relationships (Donohue & Tully, 2019; Drummond et al., 2017; Vaish et al., 2016). Guilt is a painful feeling aroused by causing (or even anticipating causing) an aversive event (Ferguson & Stegge, 1998; Vaish, 2018). Guilt proneness consistently correlates with measures of perspective taking and empathic concern and is inversely related to antisocial behavior (Tangney et al., 2007). Neuroimaging studies reveal that when describing moral transgressions, feelings of guilt are associated with neural activation in a network that is also engaged when thinking about another's feelings (Basile et al., 2011; Takahashi et al., 2004). This is taken to indicate that a key function of guilt is to promote perspective taking and increased social cohesion. In fact, someone who expresses feelings of guilt or shame is more likely to be perceived positively than someone who does not express such feelings (Stearns & Parrott, 2012). Guilt has also been shown to promote social cooperation. For example, norm infringement is associated with greater experiences of guilt, especially when confronted by an angry face perceived to be criticizing the norm violation (Giner-Sorolla & Espinosa, 2011). Because guilt is associated with breaches of moral rules and social standards, the existence of guilt (or even the anticipation of guilt) can foster trust and social reciprocity
(Chang et al., 2011; Vaish, 2018). Together, this research illustrates how emotions can powerfully influence prosocial behavior. Simply put, emotions afford a useful and effective control system for stymieing antisocial behavior and promoting prosocial behavior.

12.6 The Brain's Prosocial Network

For the past 20 years, new insights into human prosociality have been gained by directly examining brain activity through a variety of neuroimaging methods (Bellucci et al., 2020). Some of the earliest imaging experiments detailed the neural mechanisms associated with specific types of prosocial behaviors. For example, Rilling and colleagues (2002) recorded brain activity while participants played iterative rounds of the prisoner's dilemma with the same partner. Results revealed that mutual cooperation between partners was associated with higher activation in the nucleus accumbens, the caudate nucleus, the medial prefrontal cortex (mPFC), and the anterior cingulate cortex. In a follow-up study, the authors found that higher striatal activity was uniquely associated with cooperating with other humans, as opposed to computers (Rilling et al., 2004). Since the ventral striatum is typically associated with reward, these findings dovetail with the notion that successful cooperation is rewarding in itself, above and beyond just monetary gain (Fehr & Camerer, 2007). Similarly, altruism is also associated with activation in reward regions, suggesting that behaving altruistically, without the possibility of economic gain, is experienced in a hedonically rewarding manner (Harbaugh et al., 2007).

As more research followed suit, the brain's prosocial network began to be mapped in a more granular way. Two primary factors – emotional involvement and taking the perspective of another – appear to be fairly well localized within the brain when making prosocial decisions (Bellucci et al., 2020). The amygdala emerged as a region sensitive to salient emotional stimuli (Anderson & Phelps, 2001; Gangopadhyay et al., 2021; Inman et al., 2020; Phelps & LeDoux, 2005) and, in particular, to harmful actions toward another (Berthoz et al., 2006; R. J. R. Blair, 2007; Harenski et al., 2010; Kédia et al., 2008). Interestingly, extraordinary altruists (kidney donors) were found to have enhanced volume in the right amygdala (Marsh et al., 2014). Recent work linking neural lesions to antisocial behavior reveals the importance of the amygdala in processing the negative value associated with causing harm (Darby et al., 2018). The amygdala has also been implicated in a number of other, related domains, including indexing another's untrustworthiness (FeldmanHall, Dunsmoor, et al., 2018; Todorov & Engell, 2008) and monetary inequality (Haruno & Frith, 2010).

Prosociality also necessitates taking the perspective of another. Here, the research has resoundingly revealed that the temporoparietal junction seems to uniquely serve the capacity for theory of mind (Bednya et al., 2009; Cikara et al., 2014; Gweon et al., 2012; Richardson et al., 2018; Saxe & Kanwisher,
2003; Telzer et al., 2011; van Hoorn et al., 2016) – a finding that is robust to manipulation and context (Schreuders et al., 2018; van Baar et al., 2021). Furthermore, research using transcranial magnetic stimulation – a technique that temporarily disrupts the neural activity of targeted brain regions – corroborates the link between experiencing pain and prosocial actions: If the region of the brain that processes pain (i.e., somatosensory cortex) is disrupted, then the relationship between feeling the distress of another in need and lending help to that person is reduced (Gallo et al., 2018).

Other factors that can influence the degree to which a person acts prosocially have also been functionally identified within the brain. Factors such as being able to regulate one's emotions (Buhle et al., 2014; Etkin et al., 2015; Moll & de Oliveira-Souza, 2007), identify whether a perpetrator had intent (Berthoz et al., 2002; Koster-Hale et al., 2013; Young & Saxe, 2009; Zapparoli et al., 2018), and experience a negative response to the harm of others (Chakroff et al., 2016; Kédia et al., 2008) have all been neurally linked to the mPFC and amygdala, revealing a neurobiological road map for many of the behavioral manipulations that can turn the dial on the capacity for behaving prosocially. The mPFC in particular has become robustly associated with the process of weighing up the costs and benefits of partaking in a prosocial action (Hu et al., 2021).

One question that framed much of the early imaging work, and which still endures today, is whether there is a distinct neural network specifically dedicated to indexing social interactions, under which prosocial behavior falls (Lockwood et al., 2020). At present, this is a hard question to answer, since most imaging work has focused on linking certain psychological processes to specific brain regions – the question of where cognitive processes are indexed. At first blush, it appears that the same brain regions that encode the value of a cookie also encode the value of another human life. However, these types of analyses can only tell us the story of functional localization (which does seem to occur in a largely domain-general manner); they are silent on the topic of representation (Kriegeskorte et al., 2008). Using recent advances in imaging methods (e.g., multivariate analysis), we can, however, look at how certain brain regions, such as the mPFC, might differentially represent the value associated with a cookie versus a human life. Future work can clarify how the brain might uniquely represent social information, especially when deciding to act prosocially.

A shortcoming of imaging research is that it is entirely correlational. One way to fill this gap is to investigate patients with brain damage and track the relationship between impaired behaviors and specific damage within the brain (Fellows et al., 2005). This type of lesion research has been instrumental in documenting which brain regions play a causal role in prosocial decision making. We now know that lesions to the dorsolateral PFC lead to dishonest behavior (Zhu et al., 2014) and less cooperation (Wills et al., 2018), while lesions to the mPFC increase decisions to punish (Koenigs & Tranel, 2007). Moreover, lesions to the amygdala affect the ability to build interpersonal trust (Koscik & Tranel, 2011) and to assess the uncertainty endemic to the social situation (FeldmanHall & Shenhav, 2019). In tandem with more traditional
imaging approaches, lesion research paints a fuller picture of just how critical the prefrontal-amygdalar network is for governing prosocial behaviors.
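To give a flavor of the multivariate logic mentioned above, the toy sketch below follows the basic recipe of representational similarity analysis (Kriegeskorte et al., 2008): compute the pairwise dissimilarity between activity patterns evoked by different conditions and ask whether that neural geometry matches a hypothesized model. The voxel patterns here are random numbers standing in for real data, and the social/nonsocial model matrix is a hypothetical example.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Simulated activity patterns (4 conditions x 50 voxels); purely illustrative.
conditions = ["cookie", "money", "help stranger", "help friend"]
patterns = rng.normal(size=(4, 50))

# Neural representational dissimilarity matrix: 1 - correlation between the
# patterns of each pair of conditions, in condensed (pairwise) form.
neural_rdm = pdist(patterns, metric="correlation")

# Hypothetical model coding a social/nonsocial distinction: condition pairs on
# the same side of the divide are similar (0), pairs across it dissimilar (1).
# Pair order matches pdist: (cookie, money), (cookie, help stranger), ...
model_rdm = np.array([0, 1, 1, 1, 1, 0])

# Rank correlation between model and neural RDMs: the standard RSA test of
# whether a region's patterns carry the hypothesized distinction.
rho, p_value = spearmanr(neural_rdm, model_rdm)
print(round(rho, 2), round(p_value, 2))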

12.7 A Few Caveats

Successful cooperation, interpersonal trust, and altruism depend on a variety of factors, including the environment, the people we are interacting with, our mood, and our dispositional tendency toward behaving prosocially. The complexity and number of factors that bias this process make studying prosociality in the lab an enduring challenge, especially because there is no agreed-upon method for examining prosocial behaviors. For example, recent research has cast doubt on the ecological validity of the classical economic games used in the literature, since they are less successful at predicting prosocial behavior outside the laboratory (Galizzi & Navarro-Martinez, 2019). In an attempt to move toward more ecologically valid paradigms, there have been a number of recent, creative efforts aimed at investigating behaviors in the real world. For example, one recent study recorded what happens when thousands of wallets stuffed with money are left around the globe. People tend to return the lost wallets and, surprisingly, the rate of returned wallets increases as the amount of money in the wallet increases, spiking to 60 percent when there is more than $90 in the wallet (Cohn et al., 2019).

While findings like this highlight just how widespread and robust the tendency to act prosocially really is, they also reveal a shortcoming. Most research exploring prosocial behaviors involves money. There are good reasons for this: Money is universally rewarding and easily divisible. Yet, many real-world prosocial acts do not involve money per se but instead require an individual to give their time or effort. Recently, researchers have begun to explore the interplay between effort and prosocial behaviors (Lockwood et al., 2021; Lockwood et al., 2017). However, time still manages to be a neglected factor, probably because it is difficult to run experimental longitudinal studies. Individuals do not become prosocial overnight but instead learn to be prosocial over time and through repeated interactions. In just the last few years, however, research has moved toward characterizing how individuals leverage trial-by-trial learning to titrate their level of prosocial behavior so that it is useful to others but not too costly to themselves (FeldmanHall, Dunsmoor, et al., 2018; Lamba et al., 2020).
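The kind of trial-by-trial titration described above can be sketched with a generic delta-rule (Rescorla–Wagner-style) update: an expectation about a partner's reciprocation is nudged toward each new outcome, and the amount shared scales with that expectation. This is a deliberately simple illustration under assumed parameter values, not the learning model used in the cited studies.

def update_expectation(expected, observed, learning_rate=0.2):
    """Generic delta-rule update of how much a partner is expected to
    reciprocate (0 = never, 1 = always)."""
    return expected + learning_rate * (observed - expected)

def amount_to_share(expected_reciprocation, endowment=10.0):
    """Illustrative policy: share in proportion to how cooperative the partner
    currently seems, keeping giving useful but not too costly."""
    return endowment * expected_reciprocation

if __name__ == "__main__":
    expectation = 0.5                    # start agnostic about the partner
    history = [1, 1, 0, 1, 0, 0, 0]      # 1 = partner reciprocated, 0 = defected
    for outcome in history:
        expectation = update_expectation(expectation, outcome)
        print(round(expectation, 2), round(amount_to_share(expectation), 1))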

12.8 Concluding Remarks

This chapter provided a general overview of the study of prosociality in the laboratory, the predecessors of prosocial behaviors observed in the animal kingdom and during the early formative years in humans, the different motives and psychological constructs associated with behaving prosocially, and a
summary of the insights neuroscience has provided into the neurobiological mechanisms subserving this complex process. The human capacity to be kind, generous, and altruistic with one another has historically surpassed our capacity for selfish, harmful behavior. Indeed, as we enter the "golden age of social science" (Buyalskaya et al., 2021), the future success of inquiry into prosociality relies on our capacity for widespread cooperation.

References

Anderson, A. K., & Phelps, E. A. (2001). Lesions of the human amygdala impair enhanced perception of emotionally salient events. Nature, 411(6835), 305–309. Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211(4489), 1390–1396. Bailey, P. E., Brady, B., Ebner, N. C., & Ruffman, T. (2020). Effects of age on emotion regulation, emotional empathy, and prosocial behavior. Journals of Gerontology – Series B Psychological Sciences and Social Sciences, 75(4), 802–810. Balliet, D., Parks, C., & Joireman, J. (2009). Social value orientation and cooperation in social dilemmas: A meta-analysis. Group Processes and Intergroup Relations, 12(4), 533–547. Balliet, D., Wu, J., & De Dreu, C. K. W. (2014). Ingroup favoritism in cooperation: A meta-analysis. Psychological Bulletin, 140(6), 1556–1581. Barfuss, W., Donges, J. F., Vasconcelos, V. V., Kurths, J., & Levin, S. A. (2020). Caring for the future can turn tragedy into comedy for long-term collective action under risk of collapse. Proceedings of the National Academy of Sciences of the United States of America, 117(23), 12915–12922. Bartal, I. B. A., Decety, J., & Mason, P. (2011). Empathy and pro-social behavior in rats. Science, 334(6061), 1427–1430. Basile, B., Mancini, F., Macaluso, E., Caltagirone, C., Frackowiak, R. S. J., & Bozzali, M. (2011). Deontological and altruistic guilt: Evidence for distinct neurobiological substrates. Human Brain Mapping, 32(2), 229–239. Batson, C. D. (2011). Altruism in humans. Oxford University Press. Batson, C. D., Fultz, J., & Schoenrade, P. A. (1987). Distress and empathy: Two qualitatively distinct vicarious emotions with different motivational consequences. Journal of Personality, 55(1), 19–39. Batson, C. D., & Moran, T. (1999). Empathy-induced altruism in a prisoner's dilemma. European Journal of Social Psychology, 29(7), 909–924. Batson, C. D., & Powell, A. A. (2003). Altruism and prosocial behavior. In T. Millon & J. M. Lerner (Eds.), Handbook of psychology: Personality and social psychology (Vol. 5, pp. 463–484). John Wiley & Sons. Bednya, M., Pascual-Leone, A., & Saxe, R. R. (2009). Growing up blind does not change the neural bases of Theory of Mind. Proceedings of the National Academy of Sciences of the United States of America, 106(27), 11312–11317. Bellucci, G., Camilleri, J. A., Eickhoff, S. B., & Krueger, F. (2020). Neural signatures of prosocial behaviors. Neuroscience and Biobehavioral Reviews, 118, 186–195. Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122–142.
Bernhardt, B. C., & Singer, T. (2012). The neural basis of empathy. Annual Review of Neuroscience, 35, 1–23. Berthoz, S., Armony, J. L., Blair, R. J. R., & Dolan, R. J. (2002). An fMRI study of intentional and unintentional (embarrassing) violations of social norms. Brain, 125(8), 1696–1708. Berthoz, S., Grèzes, J., Armony, J. L., Passingham, R. E., & Dolan, R. J. (2006). Affective response to one’s own moral violations. NeuroImage, 31(2), 945–950. Blair, K., Marsh, A. A., Morton, J., Vythilingam, M., Jones, M., Mondillo, K., Pine, D. C., Drevets, W. C., & Blair, J. R. (2006). Choosing the lesser of two evils, the better of two goods: Specifying the roles of ventromedial prefrontal cortex and dorsal anterior cingulate in object choice. Journal of Neuroscience, 26(44), 11379–11386. Blair, R. J. R. (2007). The amygdala and ventromedial prefrontal cortex in morality and psychopathy. Trends in Cognitive Sciences, 11(9), 387–392. Bohner, G., Crow, K., Erb, H. -P, & Schwarz, N. (1992). Affect and persuasion: Mood effects on the processing of message content and context cues and on subsequent behaviour. European Journal of Social Psychology, 22(6), 511–530. Boyd, R., Gintis, H., & Bowles, S. (2010). Coordinated punishment of defectors sustains cooperation and can proliferate when rare. Science, 328(5978), 617–620. Boyd, R., & Richerson, P. J. (1989). The evolution of indirect reciprocity. Social Networks, 11(3), 213–236. Bradley, A., Lawrence, C., & Ferguson, E. (2018). Does observability affect prosociality? Proceedings of the Royal Society B: Biological Sciences, 285(1875), Article 20180116. Brosnan, S. F. (2006). Nonhuman species’ reactions to inequity and their implications for fairness. Social Justice Research, 19(2), 153–185. Brosnan, S. F., & De Waal, F. B. M. (2003). Monkeys reject unequal pay. Nature, 425(6955), 297–299. Buhle, J. T., Silvers, J. A., Wage, T. D., Lopez, R., Onyemekwu, C., Kober, H., Webe, J., & Ochsner, K. N. (2014). Cognitive reappraisal of emotion: A meta-analysis of human neuroimaging studies. Cerebral Cortex, 24(11), 2981–2990. Burkett, J. P., Andari, E., Johnson, Z. V., Curry, D. C., De Waal, F. B. M., & Young, L. J. (2016). Oxytocin-dependent consolation behavior in rodents. Science, 351(6271), 375–378. Buyalskaya, A., Gallo, M., & Camerer, C. F. (2021). The golden age of social science. Proceedings of the National Academy of Sciences, 118(5), Article e2002923118. Camerer, C. F. (2003). Behavioral game theory: Experiments in strategic interaction. Princeton University Press. Chakroff, A., Dungan, J., Koster-Hale, J., Brown, A., Saxe, R., & Young, L. (2016). When minds matter for moral judgment: Intent information is neurally encoded for harmful but not impure acts. Social Cognitive and Affective Neuroscience, 11(3), 476–484. Chang, L. J., Smith, A., Dufwenberg, M., & Sanfey, A. G. (2011). Triangulating the neural, psychological, and economic bases of guilt aversion. Neuron, 70(3), 560–572. Charness, G., & Rabin, M. (2002). Understanding social preferences with simple tests. Quarterly Journal of Economics, 117(3), 817–869.

Chavez, A. K., & Bicchieri, C. (2013). Third-party sanctioning and compensation behavior: Findings from the ultimatum game. Journal of Economic Psychology, 39, 268–277. Christakis, N. A. (2019). Blueprint: The evolutionary origins of a good society. Hachette UK. Cialdini, R. B., Brown, S. L., Lewis, B. P., Luce, C., & Neuberg, S. L. (1997). Reinterpreting the empathy-altruism relationship: When one into one equals oneness. Journal of Personality and Social Psychology, 73(3), 481–494. Cikara, M., Jenkins, A. C., Dufour, N., & Saxe, R. (2014). Reduced self-referential neural response during intergroup competition predicts competitor harm. NeuroImage, 96, 36–43. Cohn, A., Maréchal, M. A., Tannenbaum, D., & Zünd, C. L. (2019). Civic honesty around the globe. Science, 365(6448), 70–73. Cooper, D. J., & Dutcher, E. G. (2011). The dynamics of responder behavior in ultimatum games: A meta-study. Experimental Economics, 14, 519–546. Cooper, D. J., & Kagel, J. H. (2016). Other-regarding preferences. In J. H. Kagel & A. E. Roth (Eds.), The handbook of experimental economics (Vol. 2, pp. 217–275). Princeton University Press. Coulombe, B. R., Rudd, K. L., & Yates, T. M. (2019). Children’s physiological reactivity in emotion contexts and prosocial behavior. Brain and Behavior, 9(10), Article e01380. Crockett, M. J., Kurth-Nelson, Z., Siegel, J. Z., Dayan, P., & Dolan, R. J. (2014). Harm to others outweighs harm to self in moral decision making. Proceedings of the National Academy of Sciences of the United States of America, 111(48), 17320–17325. Curry, O. S., Rowland, L. A., Van Lissa, C. J., Zlotowitz, S., McAlaney, J., & Whitehouse, H. (2018). Happy to help? A systematic review and meta-analysis of the effects of performing acts of kindness on the well-being of the actor. Journal of Experimental Social Psychology, 76, 320–329. Darby, R. R., Horn, A., Cushman, F., & Fox, M. D. (2018). Lesion network localization of criminal behavior. Proceedings of the National Academy of Sciences, 115(3), 601–606. Darley, J. M., & Batson, C. D. (1973). “From Jerusalem to Jericho”: A study of situational and dispositional variables in helping behavior. Journal of Personality and Social Psychology, 27(1), 100–108. D’Arms, J., & Jacobson, D. (1994). Expressivism, morality, and the emotions. Ethics, 104(4), 739–763. De Waal, F. B. M. (2008). Putting the altruism back into altruism: The evolution of empathy. Annual Review of Psychology, 59, 279–300. De Waal, F. B. M., & Suchak, M. (2010). Prosocial primates: Selfish and unselfish motivations. Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1553), 2711–2722. Decety, J. (2011). Dissecting the neural mechanisms mediating empathy. Emotion Review, 3(1), 92–108. Decety, J., Michalska, K. J., & Akitsuki, Y. (2008). Who caused the pain? An fMRI investigation of empathy and intentionality in children. Neuropsychologia, 46(11), 2607–2614. DeSteno, D., Bartlett, M. Y., Baumann, J., Williams, L. A., & Dickens, L. (2010). Gratitude as moral sentiment: Emotion-guided cooperation in economic exchange. Emotion, 10(2), 289–293.

Dondi, M., Simion, F., & Caltran, G. (1999). Can newborns discriminate between their own cry and the cry of another newborn infant? Developmental Psychology, 35(2), 418–426. Donohue, M. R., & Tully, E. C. (2019). Reparative prosocial behaviors alleviate children’s guilt. Developmental Psychology, 55(10), 2102–2113. Drummond, J. D. K., Hammond, S. I., Satlof-Bedrick, E., Waugh, W. E., & Brownell, C. A. (2017). Helping the one you hurt: Toddlers’ rudimentary guilt, shame, and prosocial behavior after harming another. Child Development, 88(4), 1382–1397. Eckel, C. C., & Wilson, R. K. (2004). Is trust a risky decision? Journal of Economic Behavior and Organization, 55(4), 447–465. Eisenberg, N. (2000). Emotion, regulation, and moral development. Annual Review of Psychology, 51(1), 665–697. Eisenberg, N. (2014). Altruistic emotion, cognition, and behavior (PLE: Emotion). Psychology Press. Eisenberg, N., Fabes, R. A., Miller, P. A., Fultz, J., Shell, R., Mathy, R. M., & Reno, R. R. (1989). Relation of sympathy and personal distress to prosocial behavior: A multimethod study. Journal of Personality and Social Psychology, 57(1), 55–66. Eisenberg, N., Fabes, R. A., Murphy, B., Karbon, M., Maszk, P., Smith, M., O’Boyle, C., & Suh, K. (1994). The relations of emotionality and regulation to dispositional and situational empathy-related responding. Journal of Personality and Social Psychology, 66(4), 776–797. Eisenberger, N. I., & Lieberman, M. D. (2004). Why rejection hurts: A common neural alarm system for physical and social pain. Trends in Cognitive Sciences, 8(7), 294–300. Eldar, E., Rutledge, R. B., Dolan, R. J., & Niv, Y. (2016). Mood as representation of momentum. Trends in Cognitive Sciences, 20(1), 15–24. Embrey, M., Fréchette, G. R., & Yuksel, S. (2018). Cooperation in the finitely repeated prisoner’s dilemma. Quarterly Journal of Economics, 133(1), 509–551. Engel, C. (2011). Dictator games: A meta study. Experimental Economics, 14, 583–610. Engelmann, J. M., Haux, L. M., & Herrmann, E. (2019). Helping in young children and chimpanzees shows partiality towards friends. Evolution and Human Behavior, 40(3), 292–300. Essler, S., Lepach, A. C., Petermann, F., & Paulus, M. (2020). Equality, equity, or inequality duplication? How preschoolers distribute necessary and luxury resources between rich and poor others. Social Development, 19(1), 110–125. Etkin, A., Büchel, C., & Gross, J. J. (2015). The neural bases of emotion regulation. Nature Reviews Neuroscience, 16(11), 693–700. Fairley, K., Sanfey, A., Vyrastekova, J., & Weitzel, U. (2016). Trust and risk revisited. Journal of Economic Psychology, 57, 74–85. Fehr, E. (2009). On the economics and biology of trust. Journal of the European Economic Association, 7(2–3), 235–266. Fehr, E., & Camerer, C. F. (2007). Social neuroeconomics: The neural circuitry of social preferences. Trends in Cognitive Sciences, 11(10), 419–427. Fehr, E., & Fischbacher, U. (2003). The nature of human altruism. Nature, 425(6960), 785–791.

Fehr, E., & Gächter, S. (2000). Cooperation and punishment in public goods experiments. American Economic Review, 90(4), 980–994. Fehr, E., Kirchsteiger, G., & Riedl, A. (1993). Does fairness prevent market clearing? An experimental investigation. The Quarterly Journal of Economics, 108(2), 437–459. Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. Quarterly Journal of Economics, 114(3), 817–868. FeldmanHall, O., Dalgleish, T., Evans, D., & Mobbs, D. (2015). Empathic concern drives costly altruism. NeuroImage, 105, 347–356. FeldmanHall, O., Dalgleish, T., & Mobbs, D. (2013). Alexithymia decreases altruism in real social decisions. Cortex, 49(3), 899–904. FeldmanHall, O., Dunsmoor, J. E., Tompary, A., Hunter, L. E., Todorov, A., & Phelps, E. A. (2018). Stimulus generalization as a mechanism for learning to trust. Proceedings of the National Academy of Sciences, 115(7), E1690–E1697. FeldmanHall, O., Mobbs, D., Evans, D., Hiscox, L., Navrady, L., & Dalgleish, T. (2012). What we say and what we do: The relationship between real and hypothetical moral choices. Cognition, 123(3), 434–441. FeldmanHall, O., Otto, A. R., & Phelps, E. A. (2018). Learning moral values: Another’s desire to punish enhances one’s own punitive behavior. Journal of Experimental Psychology: General, 147(8), 1211–1224. FeldmanHall, O., & Shenhav, A. (2019). Resolving uncertainty in a social world. Nature Human Behaviour, 3(5), 426–435. FeldmanHall, O., Sokol-Hessner, P., Van Bavel, J. J., & Phelps, E. A. (2014). Fairness violations elicit greater punishment on behalf of another than for oneself. Nature Communications, 5(1), Article 5306. Fellows, L. K., Heberlein, A. S., Morales, D. A., Shivde, G., Waller, S., & Wu, D. H. (2005). Method matters: An empirical study of impact in cognitive neuroscience. Journal of Cognitive Neuroscience, 17(6), 850–858. Ferguson, T. J., & Stegge, H. (1998). Measuring guilt in children: A rose by any other name still has thorns. In J. Bybee (Ed.), Guilt and children (pp. 19–74). Academic Press. Fischbacher, U., & Föllmi-Heusi, F. (2013). Lies in disguise: An experimental study on cheating. Journal of the European Economic Association, 11(3), 525–547. Forsythe, R., Horowitz, J. L., Savin, N. E., & Sefton, M. (1994). Fairness in simple bargaining experiments. Games and Economic Behavior, 6(3), 347–369. Galizzi, M. M., & Navarro-Martinez, D. (2019). On the external validity of social preference games: A systematic lab-field study. Management Science, 65(3), 976–1002. Gallo, S., Paracampo, R., Müller-Pinzler, L., Severo, M. C., Blömer, L., FernandesHenriques, C., Henschel, A., Lammes, B. K., Maskaljunas, T., Suttrup, J., Avenanti, A., Keysers, C., & Gazzola, V. (2018). The causal role of the somatosensory cortex in prosocial behaviour. eLife, 7, Article e32740. Gangopadhyay, P., Chawla, M., Dal Monte, O., & Chang, S. W. C. (2021). Prefrontal– amygdala circuits in social decision-making. Nature Neuroscience, 24(1), 5–18. Gerlach, P., Teodorescu, K., & Hertwig, R. (2019). The truth about lies: A meta-analysis on dishonest behavior. Psychological Bulletin, 145(1), 1–44. Giner-Sorolla, R., & Espinosa, P. (2011). Social cuing of guilt by anger and of shame by disgust. Psychological Science, 22(1), 49–53.

Graziano, W. G., & Eisenberg, N. H. (1997). Agreeableness: A dimension of personality. In R. Hogan, J. Johnson, & S. Briggs (Eds.), Handbook of personality psychology (pp. 795–824). Academic Press. Güth, W., Schmittberger, R., & Schwarze, B. (1982). An experimental analysis of ultimatum bargaining. Journal of Economic Behavior and Organization, 3(4), 367–388. Gweon, H., Dodell-Feder, D., Bedny, M., & Saxe, R. (2012). Theory of mind performance in children correlates with functional specialization of a brain region for thinking about thoughts. Child Development, 83(6), 1853–1868. Hamilton, W. D. (1964). The genetical evolution of social behaviour. II. Journal of Theoretical Biology, 7(1), 17–52. Hamlin, J. K., & Wynn, K. (2012). Who knows what’s good to eat? Infants fail to match the food preferences of antisocial others. Cognitive Development, 27(3), 227–239. Hamlin, J. K., Wynn, K., & Bloom, P. (2007). Social evaluation by preverbal infants. Nature, 450(7169), 557–559. Haney, C., Banks, C., & Zimbardo, P. (1973). Interpersonal dynamics in a simulated prison. International Journal of Criminology and Penology, 1, 69–97. Harbaugh, W. T., Mayr, U., & Burghart, D. R. (2007). Neural responses to taxation and voluntary giving reveal motives for charitable donations. Science, 316(5831), 1622–1625. Hare, B. (2017). Survival of the friendliest: Homo sapiens evolved via selection for prosociality. Annual Review of Psychology, 68, 155–186. Harenski, C. L., Harenski, K. A., Shane, M. S., & Kiehl, K. A. (2010). Aberrant neural processing of moral violations in criminal psychopaths. Journal of Abnormal Psychology, 119(4), 863–874. Haruno, M., & Frith, C. D. (2010). Activity in the amygdala elicited by unfair divisions predicts social value orientation. Nature Neuroscience, 13(2), 160–161. Hauser, O. P., Rand, D. G., Peysakhovich, A., & Nowak, M. A. (2014). Cooperating with the future. Nature, 511(7508), 220–223. Hayashi, N., Ostrom, E., Walker, J., & Yamagishi, T. (1999). Reciprocity, trust, and the sense of control: A cross-societal study. Rationality and Society, 11(1), 27–46. Heffner, J., & FeldmanHall, O. (2019). Why we don’t always punish: Preferences for non-punitive responses to moral violations. Scientific Reports, 9(1), 1–13. Heffner, J., & FeldmanHall, O. (2022). A probabilistic map of emotional experiences during competitive social interactions. Nature Communications, 13(1), Article 1718. Heffner, J., Son, J. Y., & FeldmanHall, O. (2021). Emotion prediction errors guide socially adaptive behaviour. Nature Human Behaviour, 5(10), 1391–1401. Hein, G., Lamm, C., Brodbeck, C., & Singer, T. (2011). Skin conductance response to the pain of others predicts later costly helping. PLoS ONE, 6(8), Article e22759. Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In search of Homo economicus: Behavioral experiments in 15 smallscale societies. American Economic Review, 91(2), 73–78. Hepach, R., Engelmann, J. M., Herrmann, E., Gerdemann, S. C., & Tomasello, M. (2022). Evidence for a developmental shift in the motivation underlying helping in early childhood. Developmental Science, 26(1), Article e13253. Hernandez-Lallement, J., Attah, A. T., Soyman, E., Pinhal, C. M., Gazzola, V., & Keysers, C. (2020). Harm to others acts as a negative reinforcer in rats. Current Biology, 30(6), 946–961.

Hershcovis, M. S., & Bhatnagar, N. (2017). When fellow customers behave badly: Witness reactions to employee mistreatment by customers. Journal of Applied Psychology, 102(11), 1528–1544. Hoffman, M. L. (2000). Introduction and overview. In Empathy and moral development: Implications for caring and justice (pp. 1–28). Cambridge University Press. Hu, J., Hu, Y., Li, Y., & Zhou, X. (2021). Computational and neurobiological substrates of cost-benefit integration in altruistic helping decision. Journal of Neuroscience, 41(15), 3545–3561. Hume, D. (1896). A treatise of human nature. Clarendon Press. Inman, C. S., Bijanki, K. R., Bass, D. I., Gross, R. E., Hamann, S., & Willie, J. T. (2020). Human amygdala stimulation effects on emotion physiology and emotional experience. Neuropsychologia, 145, Article 106722. Isen, A. M., & Levin, P. F. (1972). Effect of feeling good on helping: Cookies and kindness. Journal of Personality and Social Psychology, 21(3), 384–388. Jolly, E., & Chang, L. J. (2021). Gossip drives vicarious learning and facilitates social connection. Current Biology, 31(12), 2539–2549. Jordan, J. J., Hoffman, M., Nowak, M. A., & Rand, D. G. (2016). Uncalculating cooperation is used to signal trustworthiness. Proceedings of the National Academy of Sciences of the United States of America, 113(31), 8658–8663. Jordan, J. J., & Rand, D. G. (2020). Signaling when no one is watching: A reputation heuristics account of outrage and punishment in one-shot anonymous interactions. Journal of Personality and Social Psychology, 118(1), 57–88. Kédia, G., Berthoz, S., Wessa, M., Hilton, D., & Martinot, J. L. (2008). An agent harms a victim: A functional magnetic resonance imaging study on specific moral emotions. Journal of Cognitive Neuroscience, 20(10), 1788–1798. Keltner, D., Horberg, E. J., & Oveis, C. (2006). Emotions as moral intuitions. In J. P. Forgas (Ed.), Affect in social thinking and behavior (pp. 161–175). Taylor & Francis. Kemeny, M. E., Foltz, C., Cavanagh, J. F., Cullen, M., Giese-Davis, J., Jennings, P., Rosenberg, E. L., Gillath, O., Shaver, P. R., Wallace, B. A., & Ekman, P. (2012). Contemplative/emotion training reduces negative emotional behavior and promotes prosocial responses. Emotion, 12(2), 338–350. Koenigs, M., & Tranel, D. (2007). Irrational economic decision-making after ventromedial prefrontal damage: Evidence from the ultimatum game. Journal of Neuroscience, 27(4), 951–956. Koscik, T. R., & Tranel, D. (2011). The human amygdala is necessary for developing and expressing normal interpersonal trust. Neuropsychologia, 49(4), 602–611. Koster-Hale, J., Saxe, R., Dungan, J., & Young, L. L. (2013). Decoding moral judgments from neural representations of intentions. Proceedings of the National Academy of Sciences of the United States of America, 110(14), 5648–5653. Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational similarity analysis: Connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, Article 4. Lamba, A., Frank, M. J., & FeldmanHall, O. (2020). Anxiety impedes adaptive social learning under uncertainty. Psychological Science, 31(5), 592–603. Latané, B., & Darley, J. M. (1970). The unresponsive bystander: Why doesn’t he help? Appleton-Century-Crofts.

Levitt, S. D., & List, J. A. (2007). What do laboratory experiments measuring social preferences reveal about the real world? Journal of Economic Perspectives, 21(2), 153–174. Lockwood, P. L., Abdurahman, A., Gabay, A. S., Drew, D., Tamm, M., Husain, M., & Apps, M. A. J. (2021). Aging increases prosocial motivation for effort. Psychological Science, 668(681), 32–35. Lockwood, P. L., Apps, M. A. J., & Chang, S. W. C. (2020). Is there a “social” brain? Implementations and algorithms. Trends in Cognitive Sciences, 24(10), 802–813. Lockwood, P. L., Hamonet, M., Zhang, S. H., Ratnavel, A., Salmony, F. U., Husain, M., & Apps, M. A. J. (2017). Prosocial apathy for helping others when effort is required. Nature Human Behaviour, 1(7), Article 0131. Lucca, K., Capelier-Mourguy, A., Byers-Heinlein, K., Cirelli, L., Dal Ben, R., Frank, M. C., Henderson, A. M. E., Kominsky, J. F., Liberman, Z., Margoni, F., Reschke, P. J., Schlingloff, L., Scott, K., Soderstrom, M., Sommerville, J., Su, Y., Tatone, D., Uzefovsky, F., Wang, Y., & Hamlin, K. (in press). Infants’ social evaluation of helpers and hinderers: A large-scale, multi-lab, coordinated replication study. Developmental Science. Malti, T., & Krettenauer, T. (2013). The relation of moral emotion attributions to prosocial and antisocial behavior: A meta-analysis. Child Development, 84(2), 397–412. Marsh, A. A., Stoycos, S. A., Brethel-Haurwitz, K. M., Robinson, P., VanMeter, J. W., & Cardinale, E. M. (2014). Neural and cognitive characteristics of extraordinary altruists. Proceedings of the National Academy of Sciences of the United States of America, 111(42), 15036–15041. Masserman, J. H., Wechkin, S., & Terris, W. (1964). “Altruistic” behavior in rhesus monkeys. The American Journal of Psychiatry, 121(6), 584–585. Masten, C. L., Morelli, S. A., & Eisenberger, N. I. (2011). An fMRI investigation of empathy for “social pain” and subsequent prosocial behavior. NeuroImage, 55(1), 381–388. Mattan, B. D., Barth, D. M., Thompson, A., FeldmanHall, O., Cloutier, J., & Kubota, J. T. (2020). Punishing the privileged: Selfish offers from high-status allocators elicit greater punishment from third-party arbitrators. PLoS ONE, 15(5), Article e0232369. Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45(6), 633–644. Meyza, K. Z., & Knapska, E. (2018). What can rodents teach us about empathy? Current Opinion in Psychology, 24, 15–20. Meyza, K. Z., Bartal, I. B. A., Monfils, M. H., Panksepp, J. B., & Knapska, E. (2017). The roots of empathy: Through the lens of rodent models. Neuroscience and Biobehavioral Reviews, 76, 216–234. Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67(4), 371–378. Milinski, M., Sommerfeld, R. D., Krambeck, H. J., Reed, F. A., & Marotzke, J. (2008). The collective-risk social dilemma and the prevention of simulated dangerous climate change. Proceedings of the National Academy of Sciences of the United States of America, 105(7), 2291–2294. Moll, J., & de Oliveira-Souza, R. (2007). Moral judgments, emotions and the utilitarian brain. Trends in Cognitive Sciences, 11(8), 319–321.

Morelli, S. A., Rameson, L. T., & Lieberman, M. D. (2014). The neural components of empathy: Predicting daily prosocial behavior. Social Cognitive and Affective Neuroscience, 9(1), 39–47. Murphy, R. O., & Ackermann, K. A. (2014). Social value orientation: Theoretical and measurement issues in the study of social preferences. Personality and Social Psychology Review, 18(1), 13–41. Ng, S. H. (1986). Equity, intergroup bias and interpersonal bias in reward allocation. European Journal of Social Psychology, 16(3), 239–255. Nowak, M. A., & Sigmund, K. (2005). Evolution of indirect reciprocity. Nature, 437(7063), 1291–1298. Over, H., Vaish, A., & Tomasello, M. (2016). Do young children accept responsibility for the negative actions of ingroup members? Cognitive Development, 40, 24–32. Penner, L. A., Dovidio, J. F., Piliavin, J. A., & Schroeder, D. A. (2005). Prosocial behavior: Multilevel perspectives. Annual Review of Psychology, 56, 365–392. Phelps, E. A., & LeDoux, J. E. (2005). Contributions of the amygdala to emotion processing: From animal models to human behavior. Neuron, 48(2), 175–187. Quoidbach, J., Taquet, M., Desseilles, M., de Montjoye, Y. A., & Gross, J. J. (2019). Happiness and social behavior. Psychological Science, 30(8), 1111–1122. Rand, D. G., & Nowak, M. A. (2013). Human cooperation. Trends in Cognitive Sciences, 17(8), 413–425. Range, F., Horn, L., Viranyi, Z., & Huber, L. (2009). The absence of reward induces inequity aversion in dogs. Proceedings of the National Academy of Sciences of the United States of America, 106(1), 340–345. Richardson, H., Lisandrelli, G., Riobueno-Naylor, A., & Saxe, R. (2018). Development of the social brain from age three to twelve years. Nature Communications, 9(1), Article 1027. Rilling, J. K., Gutman, D. A., Zeh, T. R., Pagnoni, G., Berns, G. S., & Kilts, C. D. (2002). A neural basis for social cooperation. Neuron, 35(2), 395–405. Rilling, J. K., Sanfey, A. G., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2004). Opposing BOLD responses to reciprocated and unreciprocated altruism in putative reward pathways. NeuroReport, 15(16), 2539–2243. Sally, D. (1995). Conversation and cooperation in social dilemmas: A meta-analysis of experiments from 1958 to 1992. Rationality and Society, 7(1), 58–92. Saxe, R., & Kanwisher, N. (2003). People thinking about thinking people: The role of the temporo-parietal junction in “theory of mind.” NeuroImage, 19(4), 1835–1842. Schreuders, E., Klapwijk, E. T., Will, G. J., & Güroğlu, B. (2018). Friend versus foe: Neural correlates of prosocial decisions for liked and disliked peers. Cognitive, Affective and Behavioral Neuroscience, 18, 127–142. Shalvi, S., Handgraaf, M. J. J., & De Dreu, C. K. W. (2011). People avoid situations that enable them to deceive others. Journal of Experimental Social Psychology, 47(6), 1096–1106. Shaw, L. L., Batson, C. D., & Todd, R. M. (1994). Empathy avoidance: Forestalling feeling for another in order to escape the motivational consequences. Journal of Personality and Social Psychology, 67(5), 879–887. Sherif, M., Harvey, O. J., White, B. J., Hood, W. R., & Sherif, C. W. (1961). Intergroup conflict and cooperation: The Robbers Cave experiment. University of Oklahoma Book Exchange.

299

300

             -       

Shirtcliff, E. A., Vitacco, M. J., Graf, A. R., Gostisha, A. J., Merz, J. L., & ZahnWaxler, C. (2009). Neurobiology of empathy and callousness: Implications for the development of antisocial behavior. Behavioral Sciences and the Law, 27(2), 137–171. Sifneos, P. E. (1973). The prevalence of “alexithymic” characteristics in psychosomatic patients. Psychotherapy and Psychosomatics, 22(2–6), 255–262. Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R. J., & Frith, C. D. (2004). Empathy for pain involves the affective but not sensory components of pain. Science, 303(5661), 1157–1162. Smith, A. (1776). An inquiry into the nature and causes of the wealth of nations. Methuen. Smith, A. (1853). The theory of moral sentiments. HG Bohn. Son, J. Y., Bhandari, A., & FeldmanHall, O. (2019). Crowdsourcing punishment: Individuals reference group preferences to inform their own punitive decisions. Scientific Reports, 9(1), 1–15. Stallen, M., Rossi, F., Heijne, A., Smidts, A., De Dreu, C. K. W., & Sanfey, A. G. (2018). Neurobiological mechanisms of responding to injustice. Journal of Neuroscience, 38(12), 2944–2954. Stearns, D. C., & Parrott, W. G. (2012). When feeling bad makes you look good: Guilt, shame, and person perception. Cognition and Emotion, 26(3), 407–430. Svetlova, M., Nichols, S. R., & Brownell, C. A. (2010). Toddlers’ prosocial behavior: From instrumental to empathic to altruistic helping. Child Development, 81(6), 1814–1827. Takahashi, H., Yahata, N., Koeda, M., Matsuda, T., Asai, K., & Okubo, Y. (2004). Brain activation associated with evaluative processes of guilt and embarrassment: An fMRI study. NeuroImage, 23(3), 967–974. Tangney, J. P., Stuewig, J., & Mashek, D. J. (2007). Moral emotions and moral behavior. Annual Review of Psychology, 58, 345–372. Tavoni, A., Dannenberg, A., Kallis, G., & Löschel, A. (2011). Inequality, communication, and the avoidance of disastrous climate change in a public goods game. Proceedings of the National Academy of Sciences of the United States of America, 108(29), 11825–11829. Taylor, G. J., & Bagby, R. M. (2000). An overview of the alexithymia construct. In R. Bar-On & J. D. A. Parker (Eds.), The handbook of emotional intelligence: Theory, development, assessment, and application at home, school, and in the workplace (pp. 40–67). Jossey-Bass/Wiley. Telzer, E. H., Masten, C. L., Berkman, E. T., Lieberman, M. D., & Fuligni, A. J. (2011). Neural regions associated with self control and mentalizing are recruited during prosocial behaviors towards the family. NeuroImage, 58(1), 242–249. Thielmann, I., Spadaro, G., & Balliet, D. (2020). Personality and prosocial behavior: A theoretical framework and meta-analysis. Psychological Bulletin, 146(1), 30–90. Todorov, A., & Engell, A. D. (2008). The role of the amygdala in implicit evaluation of emotionally neutral faces. Social Cognitive and Affective Neuroscience, 3(4), 303–312. Toi, M., & Batson, C. D. (1982). More evidence that empathy is a source of altruistic motivation. Journal of Personality and Social Psychology, 43(2), 281–292. Trivers, R. L. (1971). The evolution of reciprocal altruism. The Quarterly Review of Biology, 46(1), 35–57.

Prosociality

Vaish, A. (2018). The prosocial functions of early social emotions: The case of guilt. Current Opinion in Psychology, 20, 25–29. Vaish, A., Carpenter, M., & Tomasello, M. (2016). The early emergence of guiltmotivated prosocial behavior. Child Development, 87(6), 1772–1782. Valdesolo, P., & Desteno, D. (2006). Manipulations of emotional context shape moral judgment. Psychological Science, 17(6), 476–477. van Baar, J. M., Halpern, D. J., & FeldmanHall, O. (2021). Intolerance of uncertainty modulates brain-to-brain synchrony during politically polarized perception. Proceedings of the National Academy of Sciences, 118(20), Article e2022491118. van Hoorn, J., Fuligni, A. J., Crone, E. A., & Galván, A. (2016). Peer influence effects on risk-taking and prosocial decision-making in adolescence: Insights from neuroimaging studies. Current Opinion in Behavioral Sciences, 10, 59–64. Vives, M.-L., Cikara, M., & FeldmanHall, O. (2022). Following your group or your morals? The in-group promotes immoral behavior while the out-group buffers against it. Social Psychological and Personality Science, 13(1), 139–149. Vives, M. L., & FeldmanHall, O. (2018). Tolerance to ambiguous uncertainty predicts prosocial behavior. Nature Communications, 9(1), 1–9. Warneken, F., Hare, B., Melis, A. P., Hanus, D., & Tomasello, M. (2007). Spontaneous altruism by chimpanzees and young children. PLoS Biology, 5(7), Article e184. Warneken, F., & Tomasello, M. (2006). Altruistic helping in human infants and young chimpanzees. Science, 311(5765), 1301–1303. Weisel, O., & Shalvi, S. (2015). The collaborative roots of corruption. Proceedings of the National Academy of Sciences of the United States of America, 112(34), 10651–10656. Wichman, H. (1970). Effects of isolation and communication on cooperation in a twoperson game. Journal of Personality and Social Psychology, 16(1), 114–120. Wills, J., FeldmanHall, O., Meager, M. R., Van Bavel, J. J., Blackmon, K., Devinsky, O., Doyle, W. K., Luciano, D. J., Kuzniecky, R. I., Nadkarni, S. S., Vazquez, B., Najjar, S., Geller, E., Golfinos, J. G., Placantonakis, D. G., Friedman, D., Wisoff, J. H., & Samadani, U. (2018). Dissociable contributions of the prefrontal cortex in group-based cooperation. Social Cognitive and Affective Neuroscience, 13(4), 349–356. Wilson, M. L., & Wrangham, R. W. (2003). Intergroup relations in chimpanzees. Annual Review of Anthropology, 32(1), 363–392. Woo, B. M., Steckler, C. M., Le, D. T., & Hamlin, J. K. (2017). Social evaluation of intentional, truly accidental, and negligently accidental helpers and harmers by 10-month-old infants. Cognition, 168, 154–163. Xiao, E., & Houser, D. (2005). Emotion expression in human punishment behavior. Proceedings of the National Academy of Sciences of the United States of America, 102(20), 7398–7401. Young, L., & Saxe, R. (2009). An fMRI investigation of spontaneous mental state inference for moral judgment. Journal of Cognitive Neuroscience, 21(7), 1396–1405. Zahn-Waxler, C., Radke-Yarrow, M., & King, R. (1983). Early altruism and guilt. Academic Psychology Bulletin, 5(2), 247–259. Zaki, J. (2014). Empathy: A motivated account. Psychological Bulletin, 140(6), 1608–1647.

301

302

             -       

Zaki, J., & Ochsner, K. (2012). The neuroscience of empathy: Progress, pitfalls and promise. Nature Neuroscience, 15(5), 675–680. Zapparoli, L., Seghezzi, S., Scifo, P., Zerbi, A., Banfi, G., Tettamanti, M., & Paulesu, E. (2018). Dissecting the neurofunctional bases of intentional action. Proceedings of the National Academy of Sciences of the United States of America, 115(28), 7440–7445. Zhu, L., Jenkins, A. C., Set, E., Scabini, D., Knight, R. T., Chiu, P. H., King-Casas, B., & Hsu, M. (2014). Damage to dorsolateral prefrontal cortex affects tradeoffs between honesty and self-interest. Nature Neuroscience, 17(10), 1319–1321.

13 Antisocial and Moral Behavior: A Review and Synthesis
Kean Poon and Adrian Raine

This chapter critically reviews the existing literature on antisocial behavior and morality from a moral psychology perspective, aiming to integrate disparate strands of prior research into an overarching account. Throughout this chapter, particular attention is given to psychopathy, a mental disorder characterized by the propensity for immoral, antisocial behavior. Following a brief introduction, this chapter poses the initial question of whether individuals on the antisocial spectrum do indeed show an impairment in moral reasoning or, alternatively, whether they have emotional impairments that interfere with their ability to act in a morally appropriate manner. We then turn to the neural foundations of moral decision making and antisocial behavior. A neurobiological model of both antisocial and moral behavior is presented, together with its relevance to various subforms of antisociality. Interventions aimed at enhancing the moral sense of the individual and reducing antisocial behavior are briefly outlined. Finally, directions for future research are provided.

13.1 The Antisocial Spectrum of Disorders Antisocial personality disorder (APD) was initially described by Philippe Pinel (1801) to characterize patients with severely impulsive and destructive behavior who otherwise did not show any evidence of mental disorder or psychosis. He called the condition manie sans délire (insanity without delirium). J. C. Prichard (1835) later described the condition as “moral insanity,” a gross disturbance in social behavior without any impairment in mental functioning (Livesley et al., 1994). This label was intended to convey the notion that although those with this condition are not legally insane, they lack self-control and have a propensity to harm others, a moral equivalent of insanity. As late as the turn of the twentieth century, “psychopathic” or “psychopathological” referred to any mental disorder, and the term “psychopathic inferiority” was introduced to refer to various deviations of personality (Koch, 1891). Kahn (1931), Kraepelin (1913), and Schneider (1923) proposed an alternative classification of personality disorders that included people with destructive personalities who cause problems for themselves and the wider society they live in. The term “psychopathy” was used, albeit inconsistently, to refer to any type of personality disorder or antisocial or aggressive personality.


The American Psychiatric Association (APA, 1952) added the term “sociopathic personality disturbance” to the first edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-I) in the 1950s and the term “antisocial personality” was introduced in later editions. According to the fifth edition of the DSM (DSM-V), all 10 personality disorders are grouped into three clusters (A, B, and C). Antisocial personality disorder (APD) falls into cluster B, along with borderline, narcissistic, and histrionic. All these disorders normally present with dramatic, emotional, and unpredictable interactions with others (Fisher & Hany, 2021). The unique essential feature of APD is the persistent violation or neglect of the rights of others. The seven subfeatures of APD are law-breaking, lying and deceiving, impulsivity, irritability and aggression, disregard for the safety of others, irresponsibility, and lack of remorse for actions. A diagnosis of APD requires that the person be at least 18 years old. A distinctive feature of APD that separates it from all other personality disorders is its developmental progression. Of all the personality disorders in the DSM-V, APD is the only one that stipulates a developmental course among its diagnostic criteria, including a history of conduct disorder (CD) before age 15 (APA, 2013). In turn, CD is invariably preceded by oppositional defiant disorder (ODD), which can have symptoms appearing as early as the preschool years (APA, 2013). Therefore, one can expect development over time from hostility, uncooperativeness, and symptoms of ODD in early childhood to more significant aggression, impulsivity, deceitfulness, and rule violations of CD in later childhood and adolescence, progressing from there to more serious APD in early adulthood (Caspi et al., 1996). Psychopathy is commonly viewed as a disorder lying on the extreme of the antisocial spectrum (Hare & Neumann, 2008). Individuals with psychopathy are notorious for their amoral behavior, characterized by lack of empathy and guilt, shallow affect, manipulation of other people, and severe, premeditated, and violent antisocial behavior (Hare & Neumann, 2008). Although individuals with criminal psychopathy share features such as impulsivity with other antisocial offenders, the factor which psychopathy researchers often highlight to set them apart from others is a relative absence of guilt for their harmful behavior (Hare, 1999). They show little concern for the suffering of their fellow human beings, form shallow affiliative bonds, and rarely show loyalty unless it is in their own interest (Kiehl, 2015). Psychopathy is one of the best predictors of criminal offending and reoffending, with psychopathic individuals being three times as likely as other offenders to recidivate. Males are disproportionately affected, with an estimated four males with psychopathy to one female (e.g., Vitale et al., 2002). The most extensively used measure of psychopathy is the Psychopathy Checklist–Revised (PCL-R; Hare, 1991). This “gold standard” assessment tool consists of a semistructured interview combined with a review of correctional institution charts. The PCL-R contains 20 items assessed on a three-point scale, with item scores summed to obtain a total dimensional score ranging from 0 to 40, where higher scores indicate more psychopathic features. It provides a broad


assessment of several dimensions of psychopathy: 1) the affective and interpersonal traits key to classic definitions of psychopathy (e.g., shallow affect, superficial charm, manipulativeness, lack of empathy), and 2) the lifestyle and antisocial dimension (e.g., criminal versatility, impulsiveness, irresponsibility, poor behavior controls, juvenile delinquency). Extensive research has confirmed that the PCL-R is a reliable and valid measure of psychopathy (Hare, 2003). In explaining the moral abnormality of psychopathy, some authors suggest that the antisocial behavior characteristics of the psychopathic individual derive from a failure of moral rationalization (e.g., Blair, 1995), which may result from a lack of the cognitive capacity to tell right from wrong (e.g., Fiedler & Glöckner, 2015). In contrast, others see affective processes as the main driver of moral decisions and suggest that while psychopathic individuals demonstrate fairly normal moral reasoning abilities (e.g., Aharoni et al., 2012; Cima et al., 2010), aberrant emotional processing plays a crucial role in their moral incompetence (e.g., Harenski et al., 2010; Newman et al., 2005). Which of these accounts is correct constitutes a key question in the literature, which this review aims to resolve. (For a related discussion of psychopathy and moral reasoning, see Chapter 3, this volume.)
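To make the dimensional scoring of the PCL-R described above concrete, the total score is simply the sum of the 20 item ratings; this is a schematic restatement of the published scoring range, not a reproduction of the instrument or its items:

$$\text{PCL-R total} = \sum_{i=1}^{20} s_i, \qquad s_i \in \{0, 1, 2\},$$

so scores can range from a minimum of 0 to a maximum of $20 \times 2 = 40$, with higher totals indicating more pronounced psychopathic features.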

13.2 Can Psychopathic Individuals Tell Right from Wrong? The Rational Deficit Hypothesis A prominent explanation for psychopathic antisocial behavior is that such individuals cannot distinguish cognitively between right and wrong (Blair, 1995, 1997; Garrigan et al., 2018). This proposition bears on the legal and philosophical debate about the psychopathic individual’s moral responsibility and whether an assessment of psychopathy should qualify as a mitigating circumstance in criminal cases (see Aharoni et al., 2008; Blair, 2008; Fine & Kennett, 2004; Levy, 2007; Litton, 2013; Morse, 2008; Pillsbury, 2013). This issue is critical because “rational capacity” – knowing what you are doing at the time of the act and that the act in question is wrong – forms the legal basis for culpability. The moral judgment of the psychopathic individual has been examined using a variety of measurement tools. The four most commonly used approaches are: 1) measures of Kohlbergian moral reasoning (Kohlberg, 1963), which typically ask people to rank the reasons they believe are most relevant to deciding how to respond to moral dilemmas; 2) the moral-conventional transgressions distinction (Nucci & Nucci, 1982; Nucci & Turiel, 1978; Turiel, 1979, 1983), which challenges respondents to draw a line between acts that are considered wrong merely by social convention versus those that are morally wrong; 3) sacrificial moral dilemmas (Greene et al., 2001), in which participants decide how one would act in a situation that entails harming one or more people to avoid harming a larger group of individuals, such as authorizing the death of one person to save several others; and 4) moral foundations questions (Haidt & Graham, 2007), in which participants indicate which moral concerns, such as purity and harm, are most relevant to their moral judgment.


These four methods of measuring moral judgment, while conceptually overlapping, also provide distinctive insights into how the psychopathic individual may differ in their moral judgment. The Kohlbergian measures focus on the reasons why one chooses a certain course of action and are thus characterized as indices of different stages of moral reasoning. Moral-conventional distinctions determine whether moral judgment relies on conventional rules that are contingent, local, and enable social coordination through communal understandings, or on moral rules that could be applied universally and hold independently of the expectations and commands of political or social authorities. Sacrificial moral dilemmas are a type of ethical problem used to study how individuals make tough moral decisions, particularly when they must choose between competing ethical principles. Moral foundation measures are based on the factors people consider when deciding whether an action is morally permissible. Next we examine each perspective in turn to clarify the mixed state of the literature on psychopathy and moral cognition.

13.2.1 Evidence from Kohlberg’s Test of Moral Development Initial research in this area focused exclusively on assessing the psychopathic individual’s developmental level of moral reasoning, as per Kohlberg’s (1963) theory of moral development. This theory outlines three overarching moral stages: 1) preconventional, 2) conventional, and 3) postconventional, with each stage subdivided further into two substages. In each overarching stage, Kohlberg proposed that people invoke various considerations when making moral decisions. Individuals at the preconventional stage decide on moral issues based on the immediate consequences for themselves, while those at the postconventional stage make decisions based on relatively abstract moral principles independent of existing social rules, laws, and authority. Those in the intermediate, conventional stage make moral decisions based on the expectations of social groups and society. One well-validated and widely used questionnaire measure within Kohlberg’s moral development framework is the Defining Issues Test (DIT; Rest et al., 1974), which presents five moral dilemmas. One well-known example, the Heinz drug dilemma, asks participants whether a man should steal a drug to save his dying wife’s life when there are no other options, although doing so is against the law. The DIT yields an overall moral development score, with higher scores representing a higher level of postconventional reasoning. Researchers have used Kohlbergian measures of moral reasoning to understand which elements of a psychopathic individual’s moral cognition may be compromised. Given that psychopathy is marked by self-centeredness, some authors (e.g., Campbell et al., 2009) have hypothesized that psychopathic individuals tend to prioritize self-interest over abstract moral principles when making moral decisions and, therefore, do not progress beyond the lower levels of moral development (Campbell et al., 2009; O’Kane et al., 1996).


Surprisingly, the research evaluating this argument is dated and has yielded inconsistent results. Some studies have supported the hypothesis of lower levels of moral reasoning in psychopathic individuals (Fodor, 1973; Jurkovic & Prentice, 1977), some argue for higher levels of moral reasoning (Link et al., 1977), while others reveal no significant group differences on moral development (Lose, 1977; O’Kane et al., 1996). Supporting the hypothesis that psychopathic individuals reason differently from others about moral issues, Campbell and colleagues (2009) found that such individuals tend to both prioritize lower moral considerations (e.g., potential threats to personal interest) and deemphasize higher moral reasons (e.g., attempts to protect human rights). Adding to the confusion, Link and colleagues (1977) found that psychopathic individuals received significantly higher moral reasoning scores than did nonpsychopathic individuals on the Kohlberg (1958) test. However, later researchers raised the criticism that Kohlberg’s test relies heavily on good analytical skills and completely neglects the evidence for the powerful emotional and nonverbal determinants of morality (Garrigan et al., 2018). On balance, these findings provide only a cloudy and inconclusive picture of the moral-developmental level of the psychopath.

13.2.2 Evidence from the Moral-Conventional Distinction Another form of the moral reasoning hypothesis relies on the findings of Blair et al. (1995, 1997) based on the moral/conventional task (MCT). In these studies, investigators assessed the ability of adults and children who were low or high in psychopathic tendencies to correctly classify eight hypothetical actions based on their moral content. Using the MCT, Blair et al. (1995, 1997) asked participants to judge whether 1) the hypothetical action was acceptable, and 2) whether it would still be acceptable even if an authority figure permitted it under free response conditions. Conventional transgressions presuppose that the perceived wrongfulness is reliant on what the authority says (e.g., wearing pajamas at work). In contrast, moral transgressions are defined as acts that society considers wrong even if there were no rules, customs, or laws against them (e.g., hurting others). Participants low in psychopathy correctly distinguished moral scenarios that were authority-independent from conventional scenarios. However, participants with high psychopathy made no such distinction. It is notable that psychopathic participants who failed to make moral-conventional distinctions in Blair’s (1995) studies did not rate both types of scenario as acceptable. Rather, they tended to rate both types as unacceptable, regardless of the conventions laid down by authority, suggesting that these participants believed all the scenarios to involve moral violations. Blair (1995) concluded that individuals with psychopathy exhibited this counterintuitive effect as the result of attempting to appear socially desirable or “faking good” (see Blair, 1995, p. 23; Blair et al., 1995, p. 749) and that in reality they may have difficulty detecting or rating merely conventional transgressions.


Recent studies on psychopathic individuals and the moral-conventional distinction have not reproduced Blair’s original results. Aharoni et al. (2012, 2014) tested psychopathic individuals’ ability to distinguish between moral and conventional transgressions in a forced-choice setting, telling participants that exactly half the scenarios were considered morally wrong by typical members of society. Respondents were expected to make a forced choice between predefined moral and conventional transgressions, meaning overclassification would not be an effective strategy and allowing a purer assessment of moral reasoning. Under this methodology, if psychopathic participants truly lack moral understanding, they should show less accuracy in making moral-conventional classifications than nonpsychopathic controls. Conversely, if psychopathic individuals are no less accurate in these forced-choice classifications, we could conclude that they do indeed understand moral wrongfulness. In both studies, no association was established between psychopathy scores and the percentage of acts correctly classified as moral or conventional. In sum, when factors relating to social desirability are removed, psychopathic participants in moral-conventional distinction studies perform as well as controls, providing additional evidence that individuals with psychopathy do indeed understand wrongfulness (Aharoni et al., 2012, 2014).

13.2.3 Evidence from Sacrificial Moral Dilemmas Another commonly used method for measuring moral decision making draws upon the seminal work of Greene and colleagues (2001). They used hypothetical moral dilemmas in which participants must decide whether to sacrifice the life of one person in order to save the lives of a greater number (for a broader discussion of moral dilemmas, see Chapter 5, this volume). In a version of the classic trolley problem (Thomson, 1976), the switch dilemma, participants must choose whether or not to pull a lever that would divert a train and save five others who are lying on the track ahead but also kill one person at the end of the diversion track. In another sacrificial dilemma, the footbridge dilemma, participants must choose whether or not to push a heavyset man off a bridge in order to stop the train, which would kill him but thereby save five others on the track ahead. Although the participants in both cases have to decide whether to cause the death of one person to save several, endorsement of the action in either situation indicates a willingness to engage in utilitarian decision making (i.e., saving five people instead of one), while avoiding action in either case indicates a deontological decision (i.e., considering it immoral to inflict harm). However, while avoiding action in the footbridge dilemma clearly aligns with deontological principles (i.e., considering it immoral to directly inflict harm), the deontological decision in the switch case is more complex. Pulling the lever can be seen as consistent with the maxim of universalizability and a duty to save lives, making the deontological judgment in this scenario less clear-cut. Interestingly, most individuals deem pulling the lever – the switch dilemma – to be more morally acceptable than pushing a person onto the tracks, despite both choices having identical outcomes in terms of numbers of lives lost (Cushman et al., 2006).


One explanation for this discordance in moral decision making across these dilemmas postulates that most people think pulling the lever does not cause direct harm to someone else whereas pushing the man off the footbridge causes direct physical harm to another individual (Cushman et al., 2012). This distinction is crucial because causing direct harm typically elicits stronger emotional responses, such as empathy and guilt. People are more likely to experience and be influenced by these emotions when contemplating direct harm. Psychopathic individuals, who have deep-rooted deficits in empathy and guilt (Berg et al., 2013), may not be as affected by the emotional weight of causing direct harm. As a result, they might adopt more utilitarian reasoning, focusing on the outcomes (i.e., saving more lives) rather than the means of achieving those outcomes. The reduced influence of emotions in individuals with psychopathy may thus serve to enhance the more “rational” aspects of their moral decision making over those of nonpsychopathic individuals (Pletti et al., 2017). Supporting this prediction, Bartels and Pizarro (2011) found that self-reported psychopathy was positively associated with utilitarian decision making about sacrificial dilemmas among undergraduates. Koenigs et al. (2012) partially replicated this correlation in a sample of prisoners, albeit specifically in the case of the classical switch dilemma where only indirect harm is inflicted. Tassy et al. (2013) replicated the study with French university students and showed that a high level of psychopathic traits predicted a greater likelihood of a utilitarian response to hypothetical choices (i.e., “Would you . . . in order to . . .?”) but not for the judgment question (i.e., “Is it acceptable to . . . in order to . . .?”). This finding confirmed the link between psychopathic traits and utilitarianism in choice of action but not in moral judgment.

13.2.4 Moral Foundations Theory A final way in which researchers have examined the moral abnormality of psychopathy emerges from work on moral foundations. Six moral domains are argued to represent the various ethical considerations of decision making (Fernandes et al., 2020; Haidt, 2012): 1) preventing harm, which involves concern about the suffering of others; 2) preserving fairness, which involves following the norms of reciprocity, equality, and justice; 3) respecting authority, which involves moral obligations to hierarchies; 4) exhibiting in-group loyalty, which involves moral obligations to members of an identified group without betrayal and giving preferential treatment to in-group over out-group members; 5) practicing purity, which involves living in a noble manner with purity of body, mind, and soul; and 6) protecting liberty, which involves the feelings of reactance and resentment people feel toward those who dominate others and restrict their freedom.


In the context of moral judgments, moral foundations research is focused primarily on preferences in moral values as opposed to identifying deficits in moral judgment patterns. A study conducted by Blair (2007) showed that while individuals scoring high in psychopathy demonstrated a lack of concern about harming others, they had similar levels of concern within the other domains of the moral foundations questionnaire (MFQ; Graham et al., 2011), such as purity and loyalty. This finding was replicated in a sample of 222 incarcerated males, where it was found that psychopathic traits were negatively associated with concerns about harm and fairness (Aharoni et al., 2011). Others, in contrast, have found that psychopathy is related to diminished moral concern in all five moral domains (Jonason et al., 2015). Using a large online sample, Glenn et al. (2009) found that individuals who scored higher on psychopathic traits reported less concern about preventing harm and being fair than those who scored lower on psychopathy. The psychopathic individuals also showed slightly increased concerns about in-group loyalty, which may be due to their reduced concern about nongroup members and their desire for their in-group to socially dominate other groups. Although the body of work relating psychopathy to moral foundations theory (MFT) is relatively small, overall these studies suggest that psychopathy is associated with moral abnormality.

13.2.5 Meta-analysis on Moral Judgment and Psychopathy Taking different perspectives and using various measures of moral judgment, the previous sections have tried to address the question of whether individuals with psychopathy can distinguish cognitively between right and wrong. Although these findings are mixed, they generally point to the conclusion that psychopathic individuals have the capacity to make moral judgments (Borg & Sinnott-Armstrong, 2013). Recently, Marshall et al. (2018) conducted a meta-analysis of 27 studies, systematically analyzing the results of previous research to derive conclusions about the relationship between psychopathy and moral reasoning. Drawing from these studies (N = 4,376), which used measures including self-report psychopathy scales and common moral tasks (i.e., sacrificial moral dilemmas, Kohlbergian moral reasoning, and the moral foundations questionnaire [MFQ]), they detected a small but statistically significant relationship between psychopathy scores and the commonly used measures of moral decision making (rW = 0.16) and moral reasoning (rW = 0.10). However, very little evidence of “pronounced and overarching” moral deficits in conjunction with psychopathic traits emerged. It is possible that reports of marked deficits are overstated due to publication bias (Marshall et al., 2018). The authors suggested that their results raise the distinct possibility that the mental health profession and laypeople have underestimated the capacity of psychopathic individuals to appreciate moral responsibility. Six studies within the meta-analysis used the MFQ created by Graham et al. (2011). The results suggest that, like nonpsychopathic individuals, psychopathic individuals emphasize nearly all moral foundations to a similar degree.


At the same time, while the slightly (but nonsignificantly) stronger scores on the harm subscale of the MFQ do not provide strong statistical support for the hypothesis (Glenn et al., 2009), they hint at a trend that warrants further investigation. Overall, the meta-analysis from Marshall et al. (2018) provides evidence against the idea that psychopathic individuals exhibit a pronounced or predominant deficit of moral reasoning (Furnham et al., 2009), suggesting instead that psychopathic individuals exhibit subtle differences in moral judgment and emphasize slightly different moral foundations for making decisions than nonpsychopathic individuals. In summary, as most studies fail to identify consistent or systematic differences in judgment capabilities between individuals with and without psychopathy, there is good reason to believe that psychopathic individuals possess the capacity for making normative moral judgments. Though psychopathic individuals appear to have the ability to form a rational judgment as to whether an act is right or wrong, at a behavioral level they lack the capacity to follow through with what they reason to be right or wrong (or are prone to acting against moral norms out of self-interest). In other words, they are incapable of holding genuine moral concepts as a guide to behavior and thus fail to translate moral judgments into action. In Section 13.3, we move on from the role of reasoning and deliberation in moral judgment to the role of emotion processing and discuss how emotions contribute to moral decision making.

13.3 Can Individuals with Psychopathy Act on What Is Right? The Role of Emotion Processing Some researchers believe that there is a strong connection between emotion and judgment of moral behavior (see Horne & Powell, 2016). Such processing involves not only the ability to experience negative emotions around the understanding of right and wrong but also the ability to recognize others’ affect (e.g., empathy; Nichols, 2004; Tuvblad et al., 2013). In the following subsections, we explore how emotion processing differs in individuals with psychopathy and how such differences likely affect moral behavior.

13.3.1 Shallow Emotional Experience A fundamental reason why most individuals refrain from committing crimes relates to moral emotions. Feelings of anticipated shame, guilt, and remorse that are aversive and unpleasant serve to buffer individuals from offending (Rebellon et al., 2010; Tangney et al., 2011). But what if an individual is unable to experience these emotions? Psychopathic personality is characterized by shallow emotional experiences and an inability to experience the full range and depth of normal emotions (Hare & Neumann, 2008; Ribeiro da Silva et al., 2012). Such emotional incapacity in psychopathic individuals may prevent them from relating empathically to others.


In an explication of early theories of psychopathy, Blackburn (2006) put forward the idea that psychopathic individuals have a deficit in participating in role-taking within social groups and are insensitive to the needs of others. He argued that they are prone to ignore the rights of others and show a general inability to form lasting emotional bonds, all relevant to this key failure to assume the normal range of role-taking responsibilities that most people do quite easily. Without concern for others, individuals with psychopathy lack the capacity to make authentic moral judgments based on empathy and can only rationalize such judgments in the abstract. This basic ability to empathize with another can be viewed as foundational to all other emotions that one experiences in reaction to what people do to others. Essentially, the reason why we think it is wrong to harm others is because when we contemplate such harm, we feel part of the pain of the victim through an empathic affective reaction. This basic propensity therefore gives rise to other, more recognizable moral emotions, such as anger at injustice. A deficit in this ability to experience empathy among psychopathic individuals is thought to disrupt their emotional connectedness to others and interfere with the development of a moral conscience (DeLisi et al., 2013; Poon & Ho, 2015) and the internalization of moral standards of behavior (Blair, 1995).

13.3.2 Facial Affect Recognition Deficits Studies of emotion processing have shown that individuals with psychopathy not only display shallow emotional experience but also exhibit poor accuracy in their recognition of others’ emotions. Processing facial affect is crucial for socialization and normal social interaction (Corden et al., 2006; Fridlund, 1991). Antisocial and aggressive behaviors may stem from an inability to correctly interpret the social and emotional cues of others (Blair, 2003; Montagne et al., 2005; Poon, 2016; Walker & Leister, 1994). Some studies have suggested that this deficit appears to be specific to the emotions of fear and sadness, as demonstrated in a “violence inhibition” paradigm proposed by Blair and colleagues (Blair et al., 2004; Marsh & Blair, 2008). In this model, the absence of fear and sadness when faced with others’ distress promotes violent behavior (Blair et al., 2005). The core idea of Blair’s model (Blair, 1995, 2006) lies in classical conditioning theory. People typically find the fear and sadness cues of others to be innately aversive, so that when an antisocial behavior is followed by fear or sadness, the antisocial action itself becomes aversive and is inhibited. It follows that if an individual is impaired in recognizing fear and sadness, this allows them to behave in a more self-gratifying manner without the aversive effect of “feeling bad.” This explanation is supported by a meta-analysis conducted by Marsh and Blair (2008), which suggests that psychopathy is associated with a specific deficit in recognizing fear and sadness, rather than a universal deficit in recognizing all emotions. Some studies have found that psychopathic individuals showed impairments in the recognition of fear (Fairchild et al., 2009) and sadness (e.g., Fairchild et al., 2010; Fairchild et al., 2009; Hastings et al., 2008) based on facial expressions.


Others have also found deficits in recognizing fear in relation to callous-unemotional (CU) traits (Leist & Dadds, 2009; Muñoz, 2009). There are, however, also numerous studies that did not find psychopathic traits to be associated with specific deficits in recognizing facial expressions of fear (e.g., Eisenbarth et al., 2008; Fairchild et al., 2010; Hastings et al., 2008) or sadness (Del Gaizo & Falkenbach, 2008; Hansen et al., 2008; Leist & Dadds, 2009; Muñoz, 2009). Moreover, Del Gaizo and Falkenbach (2008) found that psychopathy was associated with better recognition of fear, and a trend in this direction was observed by Woodworth and Waschbusch (2008). Other studies have found evidence that psychopathic traits are also associated with deficits in recognizing other emotions such as disgust (Hansen et al., 2008), anger (in relation to CU traits; Muñoz, 2009), happiness (Hastings et al., 2008), and surprise (Fairchild et al., 2009), calling into question whether psychopathic deficits really are specific to fear and sadness. These mixed findings raise the possibility that deficits in facial expression recognition associated with psychopathic traits may be more pervasive than previously believed. A meta-analysis of facial expression recognition by Wilson et al. (2011) discovered that psychopathy was significantly associated with deficits in multiple emotional domains, although the effects were numerically small (N-weighted r = 0.06–0.12) and trended toward being largest for fear and sadness. These findings suggest that emotion recognition impairments in psychopathy are more pervasive than the more specific deficits outlined in Marsh and Blair’s (2008) model.
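The “N-weighted” effect sizes reported here, like the rW values from the Marshall et al. (2018) meta-analysis discussed in Section 13.2.5, are sample-size-weighted averages of study-level correlations. As a minimal sketch of one common weighting scheme (the cited meta-analyses may have used more elaborate procedures, such as pooling Fisher r-to-z transformed correlations):

$$\bar{r}_W = \frac{\sum_{k=1}^{K} N_k\, r_k}{\sum_{k=1}^{K} N_k},$$

where $r_k$ is the correlation between psychopathy and the outcome in study $k$, $N_k$ is that study’s sample size, and $K$ is the number of studies, so larger studies contribute proportionally more to the pooled estimate.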

13.3.3 Physiological Evidence of Emotion Deficits in Psychopathic Individuals Psychophysiological research has convincingly supported the hypothesis that psychopathy involves a fundamental deficit in emotion processing. Studies in this area relate primarily to automatic affective responses consisting of a peripheral physiological response mediated by the autonomic nervous system, such as the skin conductance response, the eyeblink reflex, and the approach-avoidance response (Levenson, 2014; Patrick, 2018). Early studies examined changes in participants’ skin conductance level in response to stimuli associated with the distress of others. In one paradigm, participants observed confederates whom they thought were receiving electric shocks while their own skin conductance responses were recorded. When witnessing the distress of these confederates, offenders with a high level of psychopathy were found to show weaker autonomic arousal (i.e., lower skin conductance responses) than those without psychopathic traits (Aniskiewicz, 1979; House & Milligan, 1976). Since then, numerous other studies have been conducted using the startle eyeblink response to test the hypothesis that psychopathic individuals are deficient in fear reactivity. The affective startle paradigm involves recording startle eyeblink responses to incidental noise bursts presented with emotional or neutral foreground stimuli.


For the normal population, the startle blink response is an unconscious defensive reaction to aversive stimuli (Lang et al., 1990), which is more potentiated during exposure to unpleasant images (e.g., violence, mutilation, murder) compared to neutral images (Vaidyanathan et al., 2009). A considerable body of research using this paradigm provides consistent and compelling evidence that, compared to normal individuals, psychopathic individuals are less sensitive (i.e., exhibit reduced or absent startle eyeblink responses) to aversive cues, indicating blunted emotions (e.g., Patrick, 2018) and heightened thresholds for reactivity to an aversive affective state (Levenston et al., 2000). As one example, Patrick et al. (1993) reported that high-psychopathic male offenders have a heightened threshold for the startle blink during aversive picture viewing, indicating that it takes a stronger or more imminent threat to activate their defensive motivational system. Similar results were reported by Anderson et al. (2011) in a sample of 76 undergraduate women assessed for psychopathic traits. The authors measured both affective startle blink responses and the P3 wave (i.e., an event-related potential component elicited in the process of decision making) using an affective picture-viewing task. Results showed that those scoring high on psychopathic traits lacked startle blink potentiation and demonstrated larger P3 amplitudes when faced with aversive stimuli. The P3 wave is associated with attention and cognitive processing, with larger amplitudes indicating more cognitive resources devoted to the stimulus. In aversive contexts, larger P3 amplitudes suggest heightened arousal or emotional response, indicating greater engagement. For those with high psychopathic traits, larger P3 amplitudes in response to aversive stimuli suggest atypical processing, reflecting enhanced focus or heightened arousal that differs from typical responses. The study supports the generalizability of deficient startle potentiation to nonincarcerated females with psychopathic traits and adds to a growing body of research suggesting that psychopathic traits are associated with distinctive information-processing characteristics, as indexed by P3 amplitude. In addition to the above paradigms, the approach-avoidance task (AAT) is another physiological reaction task used in psychopathy studies. In a study conducted by von Borries et al. (2012), the AAT was used to assess avoidance reactions to stimuli of potential threat among psychopathic individuals. During the task, stimulus photographs display actors with either angry, happy, or neutral facial expressions. Participants move a joystick either away from or toward themselves as the faces increase or decrease in size. In this task, healthy individuals show a general tendency to move away from angry expressions and approach happy faces, in line with previous findings (e.g., Volman et al., 2011). Psychopathic individuals, in contrast, show a total absence of avoidance tendencies toward angry faces (von Borries et al., 2012), suggesting a lack of defensive responding consistent with findings from the startle blink paradigm. Moreover, their study found that avoidance rates were negatively correlated with aggression rates, suggesting that psychopathic individuals who show the


lowest avoidance tendencies have the highest level of aggression. In summary, an absence of emotional reactivity seems to be a central deficit in psychopathy, leading to increased antisocial behavior due to diminished aversive arousal from punishment. To conclude, intact emotional processes are theorized to be essential to moral behavior, providing immediate and salient feedback on behavior. Psychopathic individuals have difficulty responding empathetically, showing impaired processing of distress in others. Hence, we suggest that, while individuals with psychopathy understand in a rational sense what is right or wrong, they fail to experience this cognitive knowledge at an affective level because they lack the necessary emotional reactions to witnessing the harm caused to others. Nonetheless, how such deficits account for their deficient or absent moral sense still requires further investigation. We next take this physiological evidence further by addressing the neural foundation of the morality impairments found in antisocial and psychopathic individuals.

13.4 The Neuromoral Theory of Antisocial, Violent, and Psychopathic Behavior Assuming that antisocial individuals exhibit some form of moral impairment, what accounts for such a deficit? In this section we argue that there are fundamental differences in the brains of psychopathic, violent, and antisocial individuals that can account for their moral aberrations. We first consider structural and functional brain deficits that have been observed in these antisocial populations. We then consider the neural foundations of moral decision making in normal populations. Finally, we review these two very different literatures in order to draw some broad conclusions on reasons why criminal offenders behave as they do. Our key proposition is that the brain impairments found in offenders correspond to those parts of the brain that govern moral decision making and that, lacking that neural moral compass, some individuals can step into a zone of moral transgressions (Crockett et al., 2017). Regarding brain impairments in offenders, the past 25 years have established beyond reasonable doubt that antisocial individuals have brains that are structurally and functionally different from the rest of us (Raine, 2013). Early research documented that murderers compared to normal controls have reduced functioning in the prefrontal cortex, as well as impairments to other brain regions that include the angular gyrus and the amygdala (Raine et al., 1997; Raine et al., 1994). More recent research has confirmed and extended this proposition, documenting also that murderers have reduced gray matter in the prefrontal cortex, as well as in other regions that include the anterior cingulate and insula (Sajous-Turner et al., 2020). Such brain impairments are not specific to extreme groups, however. Structural and functional brain differences have been documented in a wide range of antisocial populations (those with conduct


disorder or antisocial personality disorder, violent and nonviolent criminals, psychopathic individuals, life-course persistent offenders) from adolescence to adulthood (Carlisi et al., 2020; Raine & Yang, 2006), with prefrontal deficits being the most replicated finding. Additionally, impairments to the temporal lobe have been highlighted in both past and current research on antisocial populations (Carlisi et al., 2020; Raine, 1993; Yang et al., 2009; Yang et al., 2015). Despite structural and functional brain differences, it is important to acknowledge the difficulty in demonstrating whether aberrant cognitive task performance by individuals with psychopathy represents a lack of capacity to understand what is right and wrong or an actual difference in values. An entirely different line of enquiry concerns the neural basis of moral decision making in normal individuals. At the turn of the century, a pioneering brain imaging study of moral decision making employing both personal and impersonal moral dilemmas documented greater activation of the prefrontal cortex, posterior cingulate, and angular gyrus in response to personal moral dilemmas compared to impersonal dilemmas (Greene et al., 2001). Since then, a considerable body of knowledge has been built on the neural network subserving moral decision making in healthy individuals. One meta-analysis of studies employing either moral decisions (deciding whether to accept a proposed solution) or moral evaluations (judging the appropriateness of another’s actions) has highlighted frontal and temporal regions in addition to the cingulate cortex (Garrigan et al., 2016). A second ALE (activation likelihood estimation) meta-analysis in the same year documented common neural denominators to both right–wrong moral judgments and reasoning about moral dilemmas in the medial frontal gyrus, the middle frontal gyrus, and the left middle and superior temporal gyri (Bryant et al., 2016). A third meta-analysis highlighted the ventromedial prefrontal cortex, orbitofrontal cortex, cingulate cortex, temporoparietal junction including the angular gyrus, amygdala, and the temporal pole as being activated during moral-related tasks compared to control conditions (Han, 2017). As such, while different reviews draw somewhat different conclusions, overall there is a consensus that the prefrontal cortex (particularly ventral, polar, and medial prefrontal sectors), posterior cingulate, temporal cortex, and angular gyrus are particularly involved in moral decision making in healthy controls. A broad question concerns whether or not these two disparate fields of enquiry – one on the neural basis of antisocial behavior and the other on the neural basis of moral judgments – are related in any way. This issue was addressed more than 15 years ago in a review that for the first time generated the hypothesis that there exists a common neural denominator to both antisocial behavior and moral decision making (Raine & Yang, 2006). With respect to antisocial behavior, it was argued that key areas found to be functionally or structurally impaired in antisocial populations included dorsal and ventral regions of the prefrontal cortex, the amygdala, hippocampus, angular gyrus, posterior cingulate, and subregions of the temporal cortex including anterior and superior gyri. For moral decision making, it was suggested that extant research implicated a neural circuit consisting of polar, medial, and ventral regions of the prefrontal cortex, the superior temporal sulcus, and the angular gyrus, with initial findings additionally implicating the temporal pole and the amygdala.


It was concluded that there exists a substantial overlap between the brain mechanisms implicated in antisocial/psychopathic behavior on the one hand and those involved in moral decision making on the other (Raine & Yang, 2006). Recently this neuromoral theory of antisocial behavior has been updated to include more recent findings in these two neuroimaging fields (Raine, 2019). This revised model is illustrated in Figure 13.1. Key revisions in this update consist of the insula and anterior cingulate being added as areas common to both antisociality and morality. The striatum (caudate, putamen, globus pallidus, nucleus accumbens – only the caudate and putamen are illustrated) was added as an area specific to antisocial behavior, although, as we have noted, there is increasing support for its role in both antisocial and moral processing. The angular gyrus remains a common area but should be taken more widely to represent the associated temporoparietal junction. The original neuromoral theory (Raine & Yang, 2006) and its update (Raine, 2019) are based on comparing and contrasting different patterns of brain imaging findings from two different research fields, essentially a correlational approach. However, recent neurological research places this model on a more causal footing. Researchers identified 17 cases in which lesions to the brain were followed by criminal behavior (Darby et al., 2018). Lesions in these 17 cases were scattered throughout the brain – there was not one single focus, although in more than half the cases lesions occurred in the frontal cortex. Using lesion network mapping, it was established that all of these locations were connected to one neural network. It was then documented that this specific network overlapped considerably with the functional network underlying moral decision making, consisting of the ventromedial prefrontal cortex, the inferior frontal gyrus, and the temporal cortex. From different scientific perspectives, therefore, there is converging evidence that damage to brain areas that results in criminal offending has its causal effect by impairing the neural basis of moral decision making. The neuromoral theory of antisocial behaviors constitutes a functional neuroanatomical model positing that a foundational cause of disparate antisocial behaviors resides in dysfunction to a network of brain areas that constitutes the infrastructure of moral behavior. As a causal model, its core proposition is that dysfunction to one or more areas of the neuromoral circuit results in impairment to feeling, thinking, and behaving in a moral way, which in turn lays the foundation for antisocial, violent, and psychopathic behavior. There are, nevertheless, unresolved issues. For example, which moral component is more impaired in offenders – the cognitive or the emotional? The neuromoral theory has argued that it is the emotional feeling of what is morally wrong that constitutes the primary deficit, with cognitive components of morality being secondary (Raine & Yang, 2006). In contrast, research on brain lesion cases suggests that components such as empathy and cognitive control are not involved, whereas cognitive components of morality such as theory of mind and reward-based decision making are involved (Darby et al., 2018).


Figure 13.1 The neuromoral model of antisocial behavior. Brain regions impaired only in antisocial groups include the striatum, hippocampus, temporal lobe, and dorsolateral prefrontal cortex. Regions activated only in moral decision making include the posterior cingulate. Regions common to both antisocial behavior and moral decision making include the anterior cingulate, fronto-polar/medial prefrontal cortex, ventral prefrontal cortex, amygdala, insula, superior temporal gyrus, medial prefrontal cortex, and angular gyrus.


The distinction between cognition and emotion may be a false dichotomy, as both could be implicated in moral decision making. These two competing perspectives on the primacy of affective versus cognitive attributes of morality in relation to the neuromoral theory require further resolution.

13.5 Interventions for Enhancing Moral Sensitivity and Reducing Antisocial Behavior Despite many years of study, the efficacy of traditional psychosocial treatments for psychopathy such as cognitive behavioral therapy (Carroll & Kiluk, 2017), psychodynamic interventions (Juni, 2010), behavioral modification (Pearson et al., 2011), and the therapeutic community approach (Hecht et al., 2018) is still controversial. Reviews tend to yield somewhat inconsistent conclusions, ranging from optimistic (Salekin, 2002; Salekin et al., 2010) to pessimistic (Harris & Rice, 2006). Moreover, the main objective of most of these interventions is reducing recidivism, while changes in moral cognition and affect are usually neglected (Caldwell et al., 2006; Salekin et al., 2010; Skeem et al., 2011). Recent studies have attempted to address this research gap by studying treatment efficacy using empathic improvement as an outcome measure (Romero-Martínez et al., 2019; Romero-Martínez et al., 2016). In one study, Romero-Martínez and colleagues (2019) examined whether violent perpetrators experienced changes in emotion-decoding abilities and showed improvement in empathic responses after completing a standardized court-ordered intervention program incorporating cognitive restructuring, emotion management skills, and problem-solving training with motivational strategies. Results revealed that perpetrators receiving this program were more accurate at decoding emotional facial signals and presented with better cognitive empathy (perspective taking) than those who did not receive this intervention. Nevertheless, the authors were unable to identify which part of the intervention accounted for improvement in these abilities. In contrast to psychosocial interventions, a completely different approach has been explored recently using brain stimulation techniques with the goal of enhancing moral decision making in order to reduce criminal behavior. In a randomized controlled trial, Choy et al. (2018) randomly assigned human adults to receive either transcranial direct current stimulation to the prefrontal cortex or a sham condition in which participants believed they were getting stimulation but ultimately did not. Following the stimulation, participants were given vignettes which placed them in social situations in which they were provoked and given the opportunity to commit a violent act. Stimulation to the prefrontal cortex reduced the intention to commit a violent act by 47.8 percent relative to the sham condition. In addition, prefrontal stimulation increased the participants’ sense of the moral wrongfulness of perpetrating a retaliatory violent act.


An increased sense of moral wrongfulness was, in turn, associated with a reduced likelihood of acting violently. Mediation analysis documented that the moral enhancement (i.e., perceptions of greater moral wrongfulness) produced by prefrontal stimulation accounted for 31 percent of the treatment effect. We caution that not all studies have shown positive treatment effects using transcranial direct current stimulation (Ling et al., 2020), but the randomized design of the Choy et al. (2018) trial does further our understanding of causation, and these recent results provide a new vista on how moral behavior could potentially be enhanced in ways that reduce violence.

13.6 Future Directions and Summary

This review has aimed to clarify a mixed literature on the relationship between antisociality and morality and to provide insight into the roles of cognition and emotion in the moral behavior of individuals on the antisocial spectrum. One broad conclusion that we can draw is that psychopathic individuals are probably more capable of forming reasoned moral judgments than has been traditionally assumed. A second conclusion is that, at a physiological and neurocognitive level, psychopathic individuals have difficulty responding empathically and show an impairment in the processing of distress in others, affective deficits that may predispose them to immoral, uncaring behavior. A third conclusion is that the immoral behavior of antisocial individuals is in part predicated on disruption to neural processes that subserve moral decision making.

Notwithstanding these broad conclusions, there are several issues that are unresolved and require further investigation. First, we have suggested that psychopathic individuals have emotional impairments that interfere with their ability to act in a morally appropriate manner, specifically highlighting deficits in empathizing with the negative emotional states of others. Recent evidence, however, suggests that psychopathic individuals are significantly less impaired in empathizing with others’ positive emotional states (Raine et al., 2022). A future avenue for treatment could capitalize on this islet of positive emotional processing in psychopathic individuals to enhance their moral sense.

Second, it is important to note that antisociality and aggressivity are broad and multifaceted constructs, and there are different subgroups within this population that may have different moral capacities. For example, while children who are proactively aggressive (using aggression to achieve desired goals) show less differentiation between moral and conventional concepts, children who are reactively aggressive (aggressive in response to provocation) show greater differentiation between these concepts (Jambon & Smetana, 2018). Future research could explore whether this provocative finding applies to adult reactive and proactive aggression. Relatedly, while it is argued that there is substantial evidence for the involvement of dysfunctional neural structure in producing both impaired moral decision making and antisocial behavior, the heterogeneity of antisocial behavior suggests that this perspective may not apply to all criminal offenders.


For example, while the moral cognitions, emotions, and behavior of “primary” psychopathic individuals, proactively aggressive individuals, and life-course-persistent offenders may in large part be explained by brain impairment to the neural moral circuit, this may be less true of more “emotionally charged” reactively aggressive individuals, “secondary” psychopathic individuals, and drug offenders, for whom other causal factors may better account for antisocial and aggressive behavior (Raine, 2019).

Furthermore, there is increasing evidence that different antisocial subgroups within this antisocial spectrum have distinct cognitive-affective deficits (Baskin-Sommers et al., 2015). For instance, in the area of emotion regulation, individuals within the antisocial spectrum often display significantly different patterns of stress tolerance. Specifically, individuals with psychopathy who have a callous, fearless, and irresponsible disposition have been found to have an emotionally “cold” style and to demonstrate superior distress tolerance, while antisocial individuals with externalizing behavior usually display an emotionally “hot” style and demonstrate poor distress tolerance (Baskin-Sommers & Newman, 2013; Sargeant et al., 2011). Similarly, in the area of cognitive control, individuals with psychopathy demonstrate better cognitive control and working memory than an antisocial subgroup with externalizing traits (Endres et al., 2011). Baskin-Sommers et al. (2015) demonstrated that training designed to treat the distinct deficits of different antisocial subtypes resulted in differential improvements on both behavioral and psychophysiological measures. Given the heterogeneity of antisocial subtypes, future research needs to identify better methods of characterizing subgroups with relatively distinct and homogeneous cognitive-affective traits, especially in the area of moral capacity, to develop more effective mechanism-based interventions for antisocial subpopulations.

In addition, a recent review suggested that there are four classes of moral judgment: blame judgments, wrongness judgments, norm judgments, and evaluations (Malle, 2021). Not only is the concept of psychopathy heterogeneous, but moral judgments are also varied. Therefore, successful examination of the moral capacity of psychopaths will require new empirical research studies that take these differences into account.

In closing, it is clear that research into the moral values of antisocial and psychopathic populations has a long and venerable past. What must be recognized, however, is that this body of knowledge has struggled to develop any practical utility in enhancing the moral sense in antisocial individuals to reduce their offending behavior. A major future challenge lies in translating our theoretical knowledge of deficits in moral judgment into practice.

References

Aharoni, E., Antonenko, O., & Kiehl, K. A. (2011). Disparities in the moral intuitions of criminal offenders: The role of psychopathy. Journal of Research in Personality, 45(3), 322–327.


Aharoni, E., Funk, C., Sinnott-Armstrong, W., & Gazzaniga, M. (2008). Can neurological evidence help courts assess criminal responsibility? Lessons from law and neuroscience. Annals of the New York Academy of Sciences, 1124(1), 145–160. Aharoni, E., Sinnott-Armstrong, W., & Kiehl, K. A. (2012). Can psychopathic offenders discern moral wrongs? A new look at the moral/conventional distinction. Journal of Abnormal Psychology, 121(2), 484–497. Aharoni, E., Sinnott-Armstrong, W., & Kiehl, K. A. (2014). What’s wrong? Moral understanding in psychopathic offenders. Journal of Research in Personality, 53, 175–181. American Psychiatric Association. (1952). Diagnostic and statistical manual of mental disorders (1st ed.). American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Anderson, N. E., Stanford, M. S., Wan, L., & Young, K. A. (2011). High psychopathic trait females exhibit reduced startle potentiation and increased P3 amplitude. Behavioral Sciences and the Law, 29(5), 649–666. Aniskiewicz, A. S. (1979). Autonomic components of vicarious conditioning and psychopathy. Journal of Clinical Psychology, 35(1), 60–67. Bartels, D. M., & Pizarro, D. A. (2011). The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition, 121(1), 154–161. Baskin-Sommers, A. R., Curtin, J. J., & Newman, J. P. (2015). Altering the cognitive-affective dysfunctions of psychopathic and externalizing offender subtypes with cognitive remediation. Clinical Psychological Science: A Journal of the Association for Psychological Science, 3(1), 45–57. Baskin-Sommers, A. R., & Newman, J. P. (2013). Differentiating the cognition-emotion interactions that characterize psychopathy versus externalizing. In M. D. Robinson, E. Watkins, & E. Harmon-Jones (Eds.), Handbook of cognition and emotion (pp. 501–520). Guilford Press. Berg, J. M., Lilienfeld, S. O., & Waldman, I. D. (2013). Bargaining with the devil: Using economic decision-making tasks to examine the heterogeneity of psychopathic traits. Journal of Research in Personality, 47(5), 472–482. Blackburn, R. (2006). Other theoretical models of psychopathy. In C. J. Patrick (Ed.), Handbook of psychopathy (pp. 35–57). Guilford Press. Blair, R. J. R. (1995). A cognitive developmental approach to morality: Investigating the psychopath. Cognition, 57(1), 1–29. Blair, R. J. R. (1997). Moral reasoning and the child with psychopathic tendencies. Personality and Individual Differences, 22(5), 731–739. Blair, R. J. R. (2003). Neurobiological basis of psychopathy. British Journal of Psychiatry, 182(1), 5–7. Blair, R. J. R. (2006). The emergence of psychopathy: Implications for the neuropsychological approach to developmental disorders. Cognition, 101(2), 414–442. Blair, R. J. R. (2007). The amygdala and ventromedial prefrontal cortex in morality and psychopathy. Trends in Cognitive Sciences, 11(9), 387–392. Blair, R. J. R. (2008). Fine cuts of empathy and the amygdala: Dissociable deficits in psychopathy and autism. Quarterly Journal of Experimental Psychology, 61(1), 157–170.


Blair, R. J. R., Budhani, S., Colledge, E., & Scott, S. (2005). Deafness to fear in boys with psychopathic tendencies. Journal of Child Psychology and Psychiatry and Allied Disciplines, 46(3), 327–336. Blair, R. J. R., Jones, L., Clark, F., & Smith, M. (1995). Is the psychopath “morally insane”? Personality and Individual Differences, 19(5), 741–752. Blair, R. J. R., Jones, L., Clark, F., & Smith, M. (1997). The psychopathic individual: A lack of responsiveness to distress cues? Psychophysiology, 34(2), 192–198. Blair, R. J. R., Mitchell, D. G. V., Peschardt, K. S., Colledge, E., Leonard, R. A., Shine, J. H., Murray, L. K., & Perrett, D. I. (2004). Reduced sensitivity to others’ fearful expressions in psychopathic individuals. Personality and Individual Differences, 37(6), 1111–1122. Borg, J. S., & Sinnott-Armstrong, W. P. (2013). Do psychopaths make moral judgments? In K. A. Kiehl & W. P. Sinnott-Armstrong (Eds.), Handbook of psychopathy and law (pp. 107–128). Oxford University Press. Bryant, D. J., Wang, F., Deardeuff, K., Zoccoli, E., & Nam, C. (2016). The neural correlates of moral thinking: A meta-analysis. International Journal of Computational & Neural Engineering, 3(2), 28–39. Caldwell, M., Skeem, J., Salekin, R., & Van Rybroek, G. (2006). Treatment response of adolescent offenders with psychopathy features: A 2-year follow-up. Criminal Justice and Behavior, 33(5), 571–596. Campbell, M. A., Doucette, N. L., & French, S. (2009). Validity and stability of the youth psychopathic traits inventory in a nonforensic sample of young adults. Journal of Personality Assessment, 91(6), 584–592. Carlisi, C. O., Moffitt, T. E., Knodt, A. R., Harrington, H., Ireland, D., Melzer, T. R., Poulton, R., Ramrakha, S., Caspi, A., Hariri, A. R., & Viding, E. (2020). Associations between life-course-persistent antisocial behaviour and brain structure in a population-representative longitudinal birth cohort. The Lancet Psychiatry, 7(3), 245–253. Carroll, K. M., & Kiluk, B. D. (2017). Cognitive behavioral interventions for alcohol and drug use disorders: Through the stage model and back again. Psychology of Addictive Behaviors: Journal of the Society of Psychologists in Addictive Behaviors, 31(8), 847–861. Caspi, A., Moffitt, T. E., Newman, D. L., & Silva, P. A. (1996). Behavioral observations at age 3 years predict adult psychiatric disorders. Archives of General Psychiatry, 53(11), 1033–1039. Choy, O., Raine, A., & Hamilton, R. H. (2018). Stimulation of the prefrontal cortex reduces intentions to commit aggression: A randomized, double-blind, placebo-controlled, stratified, parallel-group trial. Journal of Neuroscience, 38(29), 6505–6512. Cima, M., Tonnaer, F., & Hauser, M. D. (2010). Psychopaths know right from wrong but don’t care. Social Cognitive and Affective Neuroscience, 5(1), 59–67. Corden, B., Critchley, H. D., Skuse, D., & Dolan, R. J. (2006). Fear recognition ability predicts differences in social cognitive and neural functioning in men. Journal of Cognitive Neuroscience, 18(6), 889–897. Crockett, M. J., Siegel, J. Z., Kurth-Nelson, Z., Dayan, P., & Dolan, R. J. (2017). Moral transgressions corrupt neural representations of value. Nature Neuroscience, 20(6), 879–885.


Cushman, F., Gray, K., Gaffey, A., & Mendes, W. B. (2012). Simulating murder: The aversion to harmful action. Emotion, 12(1), 2–7. Cushman, F., Young, L., & Hauser, M. (2006). The role of conscious reasoning and intuition in moral judgment: Testing three principles of harm. Psychological Science, 17(12), 1082–1089. Darby, R. R., Burke, M., & Fox, M. D. (2018). Network localization of disordered free will perception. Journal of Neuropsychiatry and Clinical Neurosciences, 30(3), E6–E6. Del Gaizo, A. L., & Falkenbach, D. M. (2008). Primary and secondary psychopathic traits and their relationship to perception and experience of emotion. Personality and Individual Differences, 45(3), 206–212. DeLisi, M., Vaughn, M. G., Gentile, D. A., Anderson, C. A., & Shook, J. J. (2013). Violent video games, delinquency, and youth violence: New evidence. Youth Violence and Juvenile Justice, 11(2), 132–142. Eisenbarth, H., Alpers, G. W., Segrè, D., Calogero, A., & Angrilli, A. (2008). Categorization and evaluation of emotional faces in psychopathic women. Psychiatry Research, 159(1–2), 189–195. Endres, M. J., Rickert, M. E., Bogg, T., Lucas, J., & Finn, P. R. (2011). Externalizing psychopathology and behavioral disinhibition: Working memory mediates signal discriminability and reinforcement moderates response bias in approach–avoidance learning. Journal of Abnormal Psychology, 120(2), 336–351. Fairchild, G., Stobbe, Y., Van Goozen, S. H. M., Calder, A. J., & Goodyer, I. M. (2010). Facial expression recognition, fear conditioning, and startle modulation in female subjects with conduct disorder. Biological Psychiatry, 68(3), 272–279. Fairchild, G., Van Goozen, S. H. M., Calder, A. J., Stollery, S. J., & Goodyer, I. M. (2009). Deficits in facial expression recognition in male adolescents with early-onset or adolescence-onset conduct disorder. Journal of Child Psychology and Psychiatry and Allied Disciplines, 50(5), 627–636. Fernandes, S., Aharoni, E., Harenski, C. L., Caldwell, M., & Kiehl, K. A. (2020). Anomalous moral intuitions in juvenile offenders with psychopathic traits. Journal of Research in Personality, 86, Article 103962. Fiedler, S., & Glöckner, A. (2015). Attention and moral behavior. Current Opinion in Psychology, 6, 139–144. Fine, C., & Kennett, J. (2004). Mental impairment, moral understanding and criminal responsibility: Psychopathy and the purposes of punishment. International Journal of Law and Psychiatry, 27(5), 425–443. Fisher, K. A., & Hany, M. (2021). Antisocial personality disorder. In StatPearls. StatPearls Publishing. Fodor, E. M. (1973). Moral development and parent behavior antecedents in adolescent psychopaths. Journal of Genetic Psychology, 122(1), 37–43. Fridlund, A. J. (1991). Evolution and facial action in reflex, social motive, and paralanguage. Biological Psychology, 32(1), 3–100. Furnham, A., Daoud, Y., & Swami, V. (2009). “How to spot a psychopath”: Lay theories of psychopathy. Social Psychiatry and Psychiatric Epidemiology, 44(6), 464–472. Garrigan, B., Adlam, A. L. R., & Langdon, P. E. (2016). The neural correlates of moral decision-making: A systematic review and meta-analysis of moral evaluations and response decision judgements. Brain and Cognition, 108, 88–97.


Garrigan, B., Adlam, A. L. R., & Langdon, P. E. (2018). Moral decision-making and moral development: Toward an integrative framework. Developmental Review, 49, 80–100. Glenn, A. L., Iyer, R., Graham, J., Koleva, S., & Haidt, J. (2009). Are all types of morality compromised in psychopathy? Journal of Personality Disorders, 23(4), 384–398. Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the moral domain. Journal of Personality and Social Psychology, 101(2), 366–385. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–2108. Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Pantheon. Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20(1), 98–116. Han, H. (2017). Neural correlates of moral sensitivity and moral judgment associated with brain circuitries of selfhood: A meta-analysis. Journal of Moral Education, 46(2), 97–113. Hansen, A. L., Johnsen, H., Hart, S., Waage, L., & Thayer, J. F. (2008). Brief communication: Psychopathy and recognition of facial expressions of emotion. Journal of Personality Disorders, 22(6), 639–645. Hare, R. D. (1991). The Hare Psychopathy Checklist–Revised. Multi-Health Systems. Hare, R. D. (1999). Without conscience: The disturbing world of the psychopaths among us. Guilford Press. Hare, R. D. (2003). The Hare Psychopathy Checklist–Revised (2nd ed.). Multi-Health Systems. Hare, R. D., & Neumann, C. S. (2008). Psychopathy as a clinical and empirical construct. Annual Review of Clinical Psychology, 4, 217–246. Harenski, C. L., Harenski, K. A., Shane, M. S., & Kiehl, K. A. (2010). Aberrant neural processing of moral violations in criminal psychopaths. Journal of Abnormal Psychology, 119(4), 863–874. Harris, G. T., & Rice, M. E. (2006). Treatment of psychopathy: A review of empirical findings. In C. Patrick (Ed.), Handbook of psychopathy (pp. 555–572). Guilford Press. Hastings, M. E., Tangney, J. P., & Stuewig, J. (2008). Psychopathy and identification of facial expressions of emotion. Personality and Individual Differences, 44(7), 1474–1483. Hecht, L. K., Latzman, R. D., & Lilienfeld, S. O. (2018). The psychological treatment of psychopathy: Theory and research. In D. David, S. J. Lynn, & G. H. Montgomery (Eds.), Evidence-based psychotherapy: The state of the science and practice (pp. 271–298). Wiley-Blackwell. Horne, Z., & Powell, D. (2016). How large is the role of emotion in judgments of moral dilemmas? PLoS ONE, 11(7), Article e0154780. House, T. H., & Milligan, W. L. (1976). Autonomic responses to modeled distress in prison psychopaths. Journal of Personality and Social Psychology, 34(4), 556–560.


Jambon, M., & Smetana, J. G. (2018). Individual differences in prototypical moral and conventional judgments and children’s proactive and reactive aggression. Child Development, 89(4), 1343–1359. Jonason, P. K., Strosser, G. L., Kroll, C. H., Duineveld, J. J., & Baruffi, S. A. (2015). Valuing myself over others: The dark triad traits and moral and social values. Personality and Individual Differences, 81, 102–106. Juni, S. (2010). Conceptualizing psychopathy: A psychodynamic approach. Journal of Aggression, Maltreatment & Trauma, 19(7), 777–800. Jurkovic, G. J., & Prentice, N. M. (1977). Relation of moral and cognitive development to dimensions of juvenile delinquency. Journal of Abnormal Psychology, 86(4), 414–420. Kahn, E. (1931). Psychopathic personalities. Yale University Press. Kiehl, K. A. (2015). The psychopath whisperer: The science of those without conscience. Crown. Koch, J. L. A. (1891). Die psychopathischen Minderwertigkeiten (3 vols). Maier. Koenigs, M., Kruepke, M., Zeier, J., & Newman, J. P. (2012). Utilitarian moral judgment in psychopathy. Social Cognitive and Affective Neuroscience, 7(6), 708–714. Kohlberg, L. (1958). The development of modes of moral thinking and choice in the years to sixteen. University of Chicago Press. Kohlberg, L. (1963). The development of children’s orientations toward a moral order: I. Sequence in the development of moral thought. Vita Humana, 6(1– 2), 11–33. Kraepelin, E. (1913). Clinical psychiatry: A textbook for physicians. Macmillan. Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1990). Emotion, attention, and the startle reflex. Psychological Review, 97(3), 377–395. Leist, T., & Dadds, M. R. (2009). Adolescents’ ability to read different emotional faces relates to their history of maltreatment and type of psychopathology. Clinical Child Psychology and Psychiatry, 14(2), 237–250. Levenson, R. W. (2014). The autonomic nervous system and emotion. Emotion Review, 6(2), 100–112. Levenston, G. K., Patrick, C. J., Bradley, M. M., & Lang, P. J. (2000). The psychopath as observer: Emotion and attention in picture processing. Journal of Abnormal Psychology, 109(3), 373–385. Levy, N. (2007). The responsibility of the psychopath revisited. Philosophy, Psychiatry, & Psychology, 14(2), 129–138. Ling, S., Raine, A., Choy, O., & Hamilton, R. (2020). Effects of prefrontal cortical stimulation on aggressive and antisocial behavior: A double-blind, stratified, randomized, sham-controlled, parallel-group trial. Journal of Experimental Criminology, 16(3), 367–387. Link, N. F., Scherer, S. E., & Byrne, P. N. (1977). Moral judgment and moral conduct in the psychopath. Canadian Psychiatric Association Journal, 22(7), 341–346. Litton, P. (2013). Criminal responsibility and psychopathy: Do psychopaths have a right to excuse? In K. A. Kiehl & W. P. Sinnott-Armstrong (Eds.), Handbook of psychopathy and law (pp. 275–296). Oxford University Press. Livesley, W. J., Schroeder, M. L., Jackson, D. N., & Jang, K. L. (1994). Categorical distinctions in the study of personality disorder: Implications for classification. Journal of Abnormal Psychology, 103(1), 6–17.


Lose, C. A. (1977). Level of moral reasoning and psychopathy within a group of federal inmates. Dissertation Abstracts International: Section B: The Sciences and Engineering, 57(7-B), Article 4716. Malle, B. F. (2021). Moral judgments. Annual Review of Psychology, 72, 293–318. Marsh, A. A., & Blair, R. J. R. (2008). Deficits in facial affect recognition among antisocial populations: A meta-analysis. Neuroscience and Biobehavioral Reviews, 32(3), 454–465. Marshall, J., Watts, A. L., & Lilienfeld, S. O. (2018). Do psychopathic individuals possess a misaligned moral compass? A meta-analytic examination of psychopathy’s relations with moral judgment. Personality Disorders: Theory, Research, and Treatment, 9(1), 40–50. Montagne, B., van Honk, J., Kessels, R. P. C., Frigerio, E., Burt, M., van Zandvoort, M. J. E., Perrett, D. I., & de Haan, E. H. F. (2005). Reduced efficiency in recognising fear in subjects scoring high on psychopathic personality characteristics. Personality and Individual Differences, 38(1), 5–11. Morse, S. J. (2008). Psychopathy and criminal responsibility. Neuroethics, 1(3), 205–212. Muñoz, L. C. (2009). Callous-unemotional traits are related to combined deficits in recognizing afraid faces and body poses. Journal of the American Academy of Child and Adolescent Psychiatry, 48(5), 554–562. Newman, J. P., MacCoon, D. G., Vaughn, L. J., & Sadeh, N. (2005). Validating a distinction between primary and secondary psychopathy with measures of Gray’s BIS and BAS constructs. Journal of Abnormal Psychology, 114(2), 319–323. Nichols, S. (2004). Sentimental rules: On the natural foundations of moral judgment. Oxford University Press. Nucci, L. P., & Nucci, M. S. (1982). Children’s responses to moral and social conventional transgressions in free-play settings. Child Development, 53(5), 1337–1342. Nucci, L. P., & Turiel, E. (1978). Social interactions and the development of social concepts in preschool children. Child Development, 49(2), 400–407. O’Kane, A., Fawcett, D., & Blackburn, R. (1996). Psychopathy and moral reasoning: Comparison of two classifications. Personality and Individual Differences, 20(4), 505–514. Patrick, C. J. (2018). Handbook of psychopathy (2nd ed.). Guilford Press. Patrick, C. J., Bradley, M. M., & Lang, P. J. (1993). Emotion in the criminal psychopath: Startle reflex modulation. Journal of Abnormal Psychology, 102(1), 82–92. Pearson, J. M., Heilbronner, S. R., Barack, D. L., Hayden, B. Y., & Platt, M. L. (2011). Posterior cingulate cortex: Adapting behavior to a changing world. Trends in Cognitive Sciences, 15(4), 143–151. Pillsbury, S. H. (2013). Why psychopaths are responsible. In K. A. Kiehl & W. Sinnott-Armstrong (Eds.), Handbook of psychopathy and law (pp. 297–318). Oxford University Press. Pinel, P. (1801). Traité médico-philosophique sur l’aliénation mentale, ou la manie. Richard, Caille & Ravier. Pletti, C., Lotto, L., Buodo, G., & Sarlo, M. (2017). It’s immoral, but I’d do it! Psychopathy traits affect decision-making in sacrificial dilemmas and in everyday moral situations. British Journal of Psychology, 108(2), 351–368.


Poon, K. (2016). Understanding risk-taking behavior in bullies, victims, and bully victims using cognitive- and emotion-focused approaches. Frontiers in Psychology, 7, Article 1838. Poon, K., & Ho, C. S-H. (2015). Contrasting psychosocial outcomes in Chinese delinquent adolescents with attention deficit and hyperactivity disorder symptoms and/or reading disability. The Journal of Forensic Psychiatry & Psychology, 26(1), 38–59. Prichard, J. C. (1835). A treatise on insanity and other disorders affecting the mind. Sherwood, Gilbert, and Piper. Raine, A. (1993). The psychopathology of crime: Criminal behavior as a clinical disorder. Academic Press. Raine, A. (2013). The anatomy of violence: The biological roots of crime. Pantheon Books. Raine, A. (2019). The neuromoral theory of antisocial, violent, and psychopathic behavior. Psychiatry Research, 277, 64–69. Raine, A., Buchsbaum, M., & LaCasse, L. (1997). Brain abnormalities in murderers indicated by positron emission tomography. Biological Psychiatry, 42(6), 495–508. Raine, A., Buchsbaum, M. S., Stanley, J., Lottenberg, S., Abel, L., & Stoddard, J. (1994). Selective reductions in prefrontal glucose metabolism in murderers. Biological Psychiatry, 36(6), 365–373. Raine, A., Chen, F. R., & Waller, R. (2022). The cognitive, affective and somatic empathy scales for adults. Personality and Individual Differences, 185, Article 111238. Raine, A., & Yang, Y. (2006). Neural foundations to moral reasoning and antisocial behavior. Social Cognitive and Affective Neuroscience, 1, 203–213. Rebellon, C. J., Piquero, N. L., Piquero, A. R., & Tibbetts, S. G. (2010). Anticipated shaming and criminal offending. Journal of Criminal Justice, 38(5), 988–997. Rest, J., Cooper, D., Coder, R., Masanz, J., & Anderson, D. (1974). Judging the important issues in moral dilemmas: An objective test of development. Developmental Psychology, 10(4), 491–501. Ribeiro da Silva, D., Rijo, D., & Salekin, R. T. (2012). Child and adolescent psychopathy: A state-of-the-art reflection on the construct and etiological theories. Journal of Criminal Justice, 40(4), 269–277. Romero-Martínez, Á., Lila, M., Gracia, E., & Moya-Albiol, L. (2019). Improving empathy with motivational strategies in batterer intervention programmes: Results of a randomized controlled trial. British Journal of Clinical Psychology, 58(2), 125–139. Romero-Martínez, Á., Lila, M., Martínez, M., Pedrón-Rico, V., & Moya-Albiol, L. (2016). Improvements in empathy and cognitive flexibility after court-mandated intervention program in intimate partner violence perpetrators: The role of alcohol abuse. International Journal of Environmental Research and Public Health, 13(4), Article 394. Sajous-Turner, A., Anderson, N. E., Widdows, M., Nyalakanti, P., Harenski, K., Harenski, C., Koenigs, M., Decety, J., & Kiehl, K. A. (2020). Aberrant brain gray matter in murderers. Brain Imaging and Behavior, 14(5), 2060–2061. Salekin, R. T. (2002). Psychopathy and therapeutic pessimism: Clinical lore or clinical reality? Clinical Psychology Review, 22(1), 79–112.


Salekin, R. T., Worley, C., & Grimes, R. D. (2010). Treatment of psychopathy: A review and brief introduction to the mental model approach for psychopathy. Behavioral Sciences and the Law, 28(2), 235–266. Sargeant, M. N., Daughters, S. B., Curtin, J. J., Schuster, R. M., & Lejuez, C. W. (2011). Unique roles of antisocial personality disorder and psychopathic traits in distress tolerance. Journal of Abnormal Psychology, 120(4), 987–992. Schneider, F. (1923). Die Psychopathischen Persönlichkeiten [The psychopathic personalities]. Springer. Skeem, J. L., Polaschek, D. L. L., Patrick, C. J., & Lilienfeld, S. O. (2011). Psychopathic personality: Bridging the gap between scientific evidence and public policy. Psychological Science in the Public Interest, 12(3), 95–162. Tangney, J. P., Stuewig, J., & Hafez, L. (2011). Shame, guilt, and remorse: Implications for offender populations. Journal of Forensic Psychiatry and Psychology, 22(5), 706–723. Tassy, S., Deruelle, C., Mancini, J., Leistedt, S., & Wicker, B. (2013). High levels of psychopathic traits alters moral choice but not moral judgment. Frontiers in Human Neuroscience, 7, Article 229. Thomson, J. J. (1976). Killing, letting die, and the trolley problem. The Monist, 59(2), 204–217. Turiel, E. (1979). Distinct conceptual and developmental domains: Social-convention and morality. Nebraska symposium on motivation. University of Nebraska Press. Turiel, E. (1983). The development of social knowledge: Morality and convention. Cambridge University Press. Tuvblad, C., Bezdjian, S., Raine, A., & Baker, L. A. (2013). Psychopathic personality and negative parent-to-child affect: A longitudinal cross-lag twin study. Journal of Criminal Justice, 41(5), 331–341. Vaidyanathan, U., Patrick, C. J., & Bernat, E. M. (2009). Startle reflex potentiation during aversive picture viewing as an indicator of trait fear. Psychophysiology, 46(1), 75–85. Vitale, J. E., Smith, S. S., Brinkley, C. A., & Newman, J. P. (2002). The reliability and validity of the Psychopathy Checklist–Revised in a sample of female offenders. Criminal Justice and Behavior, 29(2), 202–231. Volman, I., Toni, I., Verhagen, L., & Roelofs, K. (2011). Endogenous testosterone modulates prefrontal-amygdala connectivity during social emotional behavior. Cerebral Cortex, 21(10), 2282–2290. von Borries, A. K. L., Volman, I., de Bruijn, E. R. A., Bulten, B. H., Verkes, R. J., & Roelofs, K. (2012). Psychopaths lack the automatic avoidance of social threat: Relation to instrumental aggression. Psychiatry Research, 200(2–3), 761–766. Walker, D. W., & Leister, C. (1994). Recognition of facial affect cues by adolescents with emotional and behavioral disorders. Behavioral Disorders, 19(4), 269–276. Wilson, K., Juodis, M., & Porter, S. (2011). Fear and loathing in psychopaths: A meta-analytic investigation of the facial affect recognition deficit. Criminal Justice and Behavior, 38(7), 659–668. Woodworth, M., & Waschbusch, D. (2008). Emotional processing in children with conduct problems and callous/unemotional traits. Child: Care, Health and Development, 34(2), 234–244.


Yang, Y., Raine, A., Colletti, P., Toga, A. W., & Narr, K. L. (2009). Abnormal temporal and prefrontal cortical gray matter thinning in psychopaths. Molecular Psychiatry, 14(6), 561–562. Yang, Y., Wang, P., Baker, L. A., Narr, K. L., Joshi, S. H., Hafzalla, G., Raine, A., & Thompson, P. M. (2015). Thicker temporal cortex associates with a developmental trajectory for psychopathic traits in adolescents. PLoS ONE, 10(5), Article e0127025.

14 Intergroup Conflict and Dehumanization
Nick Haslam

It goes without saying that parties in intergroup conflict tend to hold negative views of one another: jaundiced at best, vilifying at worst. It has been said that we always hurt the ones we love, but those we hurt often seem to be the ones we hate. In theory, conflicts could arise between groups that are mutually appreciative and respectful but that just happen to have different interests. In fact, of course, lasting conflicts tend to be marked by shared beliefs that our opponents are wicked and untrustworthy. This commonsense view that mutual derogation is fundamental to conflict is backed up by generations of social psychological theory and research. Studies of intergroup relations present as axiomatic the view that people evaluate their own group (in-group) more positively than other groups (out-groups), and even if the discrepancy primarily represents in-group love rather than out-group hate (Brewer, 1999), negative perceptions of out-groups that picture them as morally suspect and us as morally elevated rapidly emerge from it when conflict arises. Studies of stereotyping commonly draw the same conclusion: We have the virtues of warmth and competence, but they have a collection of vices. Studies of prejudice examine how negative attitudes toward out-groups drive antagonistic behavior and who tends to hold these attitudes more than others.

If the idea that intergroup conflict is grounded in negative group perceptions is common sense, it has become close to common sense in some circles that the most atrocious conflicts also tend to involve dehumanizing group perceptions. There is widespread historical awareness of genocides and wars in which human groups have been likened to subhuman animals or have had their personhood denied in egregious ways. These examples seem to suggest that in intergroup conflicts, groups of others are not only perceived as more negative but also as less human. Although this view has entered the zeitgeist as an almost self-evident fact, it has become a good deal more controversial than many people suspect.

Just as social psychology has clarified the cognitive, emotional, and motivational processes that underpin negative group perceptions within intergroup conflict, it has also examined the processes and dynamics that underpin dehumanizing perceptions in these contexts. Especially in the past two decades, psychologists have developed theoretical accounts of dehumanization and methodological tools that allow it to be studied in accordance with the field’s scientific norms. Hundreds of empirical studies have documented dehumanizing perceptions of an assortment of groups, from Roma in Hungary to cyclists in Australia


to American Republicans and Democrats, and explored the correlates, consequences, and cures of these perceptions. Dehumanization has come to be seen as another way in which out-groups and enemies can be viewed, perhaps secondary to out-group derogation and prejudice but important in its own right. In this chapter I will present an overview of the burgeoning literature on dehumanization with the aim of clarifying its roles in intergroup conflict. The chapter will present an overview of the main psychological accounts of dehumanization, an analysis of the many forms that dehumanization has been theorized to take, and a discussion of its possible functions in intergroup conflict in general and genocide in particular. It will conclude with some remarks on recent challenges to the role of dehumanization in intergroup conflict and some speculations on its boundary conditions. The aim of the chapter is to provide a panoramic view of the field that will allow moral psychology researchers to orient themselves within it and plan new lines of investigation. Two prefatory points must be made before proceeding further. First, although most dehumanization scholarship has addressed its role in intergroup contexts, it is important to note that some more recent work has examined dehumanization processes in perceptions of self and in interpersonal relationships. This work explores how people may come to view themselves as losing or lacking humanness and how people in close relationships may – fleetingly or lastingly – perceive their loved ones in ways that fail to humanize them. This work demonstrates that dehumanization may not belong exclusively in the intergroup domain. Readers interested in an example of this genre or a review of dehumanization research in a more expansive frame might consult Pizzirani et al. (2021) and Haslam and Loughnan (2014), respectively. Second, this chapter will focus primarily on social psychological work on dehumanization. It does so in full awareness that there is a lively and important body of philosophical research on the topic (e.g., Kronfeldner, 2021a). The reason for attending primarily to the psychological literature, with the exception of an analysis of the work of David Livingstone Smith, is twofold. For one thing, social psychology is easily the most substantial disciplinary contributor to the reemergence of dehumanization scholarship over the past three decades (Haslam, 2021). For another, philosophical work on the subject has mainly addressed fundamental conceptual issues in the field, generally with a critical stance toward psychological formulations and without contributing to or motivating empirical work. As a result, it has largely developed in parallel to the much larger corpus of psychological work, rather than exerting a strong influence on it. Readers wishing for a more multi-disciplinary overview of dehumanization studies, with philosophical voices well represented, should consult Kronfeldner (2021b).

14.1 Theoretical Accounts of Dehumanization

Before we explore how social psychologists have fashioned their theoretical accounts of dehumanization, it is instructive to consider how “dehumanization” tends to be employed in everyday discourse.


First, it tends to be seen as an extreme phenomenon, most often invoked in the contexts of the Holocaust, genocides, or armed conflicts. Second, in these extreme settings dehumanization is usually understood to promote and enable violence and atrocity. Third, it is typically exemplified by the blatant use of offensive animal metaphors, such as references to Jews as rodents in Nazi propaganda, to Tutsis as cockroaches during the Rwandan conflict, or to people of the African diaspora as apes on racist websites or in European soccer stadiums. Commonly it is assumed that people using these demeaning animal metaphors believe that the people slurred in this way are literally less than human.

Recent psychological accounts of dehumanization depart significantly from every one of these typical understandings. Most of them present dehumanization as a phenomenon that falls on a spectrum from the extreme to the everyday. Much of the recent body of dehumanization research examines banal forms of dehumanization that can occur in the absence of intergroup conflict and that constitute entirely normal processes of group perception. Although it is usually assumed and sometimes corroborated that dehumanization fosters aggression, most accounts do not present dehumanization primarily as an enabler of violence. In addition, contemporary psychological theories of dehumanization do not conceptualize it as the explicit use of animal metaphors. Instead, dehumanization is understood and measured as a subtler phenomenon that may not be symbolized verbally or visually and may have nothing to do with nonhuman animals even indirectly. Rather than treating dehumanization as an ontological claim that members of a group are nonhuman or less than human, psychological accounts typically identify it whenever a group is ascribed human attributes to a lesser degree than others. It is essential to remember that dehumanization has a substantially broader meaning in psychological accounts than it does in the prototypical cases from Nazi rhetoric and other extreme examples that define the common understanding among laypeople.

14.1.1 Moral Disengagement and Moral Exclusion

The first psychological writers on dehumanization began to publish on the subject in the 1970s. Their work made important contributions to the theory of dehumanization but relatively modest contributions to its empirical base. Perhaps as a result of this emphasis, the topic did not catch on in social psychology with the vigor that characterized the second wave of dehumanization scholarship, which arose around the millennium. Nevertheless, their work often foregrounded the moral dimensions of dehumanization in a way that later work did not. Among the key contributors of this period are Kelman (1976) and Staub (1989), who wrote conceptually nuanced accounts of the role of dehumanization in war and in mass killing. However, the work of Bandura, Opotow, and Bar-Tal, and their colleagues, is of particular relevance to a chapter on moral psychology.


Bandura, one of the most influential psychologists of the twentieth century, is noteworthy for bringing dehumanization into the laboratory, for developing a theoretical formulation that emphasized how dehumanization overcomes inhibitions against aggressive behavior, and for examining individual differences in the tendency to dehumanize others. According to Bandura’s model, people are ordinarily restrained from behaving aggressively toward others by moral self-sanctions. When the potential targets of aggression are perceived as less than human, these self-sanctions are disengaged and those inhibitions are weakened. This “moral disengagement” account therefore presents an intrapsychic mechanism for enabling aggression, viewing it essentially as releasing the brakes of moral compunction and guilt. Bandura and colleagues carried out a series of studies that demonstrated greater punitiveness toward people described in a dehumanizing manner (e.g., Bandura et al., 1975; see Bandura, 1999, for an overview).

Whereas Bandura formulated dehumanization’s role in aggression in terms of loosened inhibitions on hostile behavior, Opotow’s somewhat related work on “moral exclusion” offered a more cognitive account that was not tied specifically to aggressive behavior. By her account (Opotow, 1990), individuals or groups can be shifted outside the moral sphere, where normal considerations of justice, fairness, and compassion apply. This active process of excluding persons from the moral domain effectively disqualifies them from moral standing. The result of that exclusion might be passive neglect or forms of active harming that extend beyond aggression in the narrow sense. Opotow did not privilege dehumanization as a phenomenon in immoral action or inaction but presented it as merely one among many forms of moral exclusion.

In a similar vein, Bar-Tal (1989) presented dehumanization as one of several forms of “delegitimization,” by which people withdraw legitimacy from groups they oppose, rendering aggressive behavior normatively acceptable. Legitimacy is commonly denied to opponents in violent conflicts, who are frequently denigrated as animals, demons, or monsters who are undeserving of concern and unfit for anything but the harshest punishments. Delegitimization and moral exclusion are therefore alternative ways of expressing the claim that people treat one another morally by default, but immoral treatment is enabled when their status as humans is denied or diminished.

14.1.2 Infrahumanization

Theories of moral disengagement, moral exclusion, and delegitimization understand dehumanization as one of a collection of related processes or mechanisms that serve to release moral restraints on aggression and other forms of ill-treatment. These theories typically understand dehumanization to be overt (e.g., revealed in language), its effects to be harmful, and its settings to be conflictual. The theories of moral exclusion and disengagement in particular also did not tie dehumanization primarily to intergroup contexts: One might exclude another from consideration on grounds other than their out-group status, and Bandura investigated whether delinquent schoolchildren’s aggressiveness might spring from greater tendencies to disengage morally.


Research on “infrahumanization” (Leyens et al., 2007; Leyens et al., 2001) differed from these earlier theories in all of these respects. The infrahumanization phenomenon was subtle and covert, was not necessarily associated with harmful behavior, occurred in the absence of major conflict, and was fundamentally an intergroup process.

Leyens and colleagues initiated the concept of infrahumanization not as part of a theory of dehumanization – they were at pains to differentiate the two concepts – but as a way to understand ethnocentrism, the tendency for ethnic groups to see their own cultural kind as the true humans. Pilot work suggested that humanness, conceptualized as contrastive with nonhuman animals, was understood by laypeople as constituted primarily by the capacities for language, intelligence, and uniquely human emotions (sentiments in French). Narrowing their focus to emotions, the researchers reasoned that if people see their in-group as more human than out-groups, they should reserve uniquely human emotions such as nostalgia and guilt for the in-group but not differentially attribute nonuniquely human emotions such as happiness and fear. This predicted pattern of differential attribution of uniquely human emotions, independent of emotion valence (positive versus negative), was confirmed and replicated in many intergroup comparisons, including comparisons between groups with little or no contemporary conflict (e.g., Peninsular and Canary Island Spaniards). The ease with which infrahumanization could be studied in laboratory and field settings, using familiar survey and implicit social cognition tasks, ensured that it generated a substantial research literature.

Leyens and colleagues presented infrahumanization as a subtle but fundamental phenomenon that had more to do with how people view their in-group than how they view out-groups. Whereas dehumanization might normally be understood as a denial of humanity to others, generally accompanied by a negative view of them, infrahumanization was conceptualized as reserving humanity for the in-group, with the out-group thereby perceived as less human by this standard of reference. Infrahumanization does not entail dehumanization in a strong sense, but it might plausibly prepare the ground for it when conflict is stoked. If the out-group is already judged as less human than the in-group – further down the continuum from humans to animals than the in-group – then exacerbation of tensions might tip implicit deficiency in humanness into explicit denial. Leyens and colleagues showed that infrahumanization was not without consequences – it reduced out-group helping, for example, and deepened intergroup divides – but they gave less attention than earlier writers to the moral psychological dimensions of the phenomenon. Nevertheless, the fact that uniquely human emotions tend to be seen as more diagnostic of people’s “moral nature” (Demoulin et al., 2004) than other emotions implies that infrahumanization is not a morally neutral bias in group perception.

14.1.3 The Dual Model

Infrahumanization theory offers a novel account of intergroup dehumanization that presents it as subtle, everyday, and falling on an implied continuum with


more severe variants. Covertly perceiving out-groups as deficient in some of the emotional capacities that distinguish humans from nonhuman animals is a few incremental steps removed from overtly judging them to be animals. The theory also departs from prior accounts by understanding dehumanization processes in terms of human attributes rather than categories. Instead of understanding “human” as a sphere from which people could be categorically removed, as in moral exclusion, or “nonhuman” as a derogatory label such as “monster,” as in delegitimization research, Leyens and colleagues characterized infrahumanization as a lack of specified human characteristics relative to an in-group. Humanness was formulated as a dimension of psychological content along which groups can be located, such that dehumanization is a matter of people being judged less human or lesser humans, rather than as categorically nonhuman. Haslam’s (2006) dual model represents an extension of the approach pioneered by Leyens and colleagues. It retained their crucial idea that a satisfactory theory of dehumanization needs a theory of humanness, the quantity that is denied to people when they are dehumanized. It also retained their basic assumption that dehumanization can be subtle and should be conceptualized as a matter of degree. The dual model differed from infrahumanization theory in two key respects. First, it proposed that humanness could not be adequately defined only in contrast to nonhuman animals. Second, it expressly argued that dehumanization occurs on a continuum of severity or blatancy and that the contrast categories for defining humanness are implicated in that continuum. On the first point, Haslam (2006) argued that although humanness can be defined vis-à-vis animals as a set of uniquely human attributes, it can also be defined in a second salient way vis-à-vis inanimate objects. What makes us human is not only the uniquely human qualities of intellect, rationality, civility, self-control and the like that distinguish us from beasts but also the qualities of emotionality, warmth, and mental flexibility that people believe distinguish us from robots or automatons. Haslam referred to these latter qualities collectively as “human nature” and demonstrated empirically that the two senses of humanness were independent, were negatively associated as theorized with animals and objects, respectively, and had different correlates (e.g., human nature attributes were believed to emerge early in development and be widely prevalent in the population and were essentialized, whereas uniquely human attributes were believed to emerge later in development and to be uncommon, and were not essentialized). Haslam presented the two senses of humanness as encompassing dimensions that could be examined empirically by examining a range of psychological attributes, including emotions and personality traits. On the second point, Haslam (2006) argued that a theory of dehumanization must not only refer to these distinct senses of humanness but also remember that they are anchored by different kinds of nonhuman entity. Milder forms of dehumanization are likely to involve subtle denials of humanness to out-groups, but harsher forms will tend to involve explicit likening of those groups to the corresponding nonhuman. The standard infrahumanization effect is a subtle judgment that the out-group has a deficient capacity for experiencing refined,


uniquely human emotions, but it lies at the mild end of a spectrum of animalistic dehumanization that extends to the use of vilifying animal metaphors at the other end. Haslam’s dual model aimed to provide an integrative framework for dehumanization research in psychology by describing two such spectra. “Animalistic” dehumanization captures forms of dehumanization in which people are denied uniquely human attributes and/or likened to nonhuman animals and tends to occur when groups are seen as primitive, bestial, crude, uncivilized, barbaric, or stupid. “Mechanistic” dehumanization occurs when groups are denied human nature attributes and/or likened to inanimate objects. It tends to occur when people are judged to be cold, unfeeling, fungible, robotic, shallow, or literally objectified. An assortment of studies has found support for elements of this two-spectrum model, including the content and cross-cultural consistency of the humanness dimensions (Bain et al., 2012), and applied it to understanding dehumanizing perceptions of a range of groups (e.g., Loughnan & Haslam, 2007).

14.1.4 Dehumanization and Mind Perception

The dual model conceptualizes dehumanization as the denial of humanness to people along two dimensions. Although these dimensions would appear to have moral significance and should, in principle, capture the moral standing of humans relative to nonhumans, most research has assessed them using personality traits. Traits, and the emotions studied in infrahumanization research, are clearly germane to moral psychologists (e.g., Bastian et al., 2011). However, the moral psychological implications of dehumanization became clearer in another recent tradition of research on dehumanization that grounded it in mind perception. In this work (e.g., Gray et al., 2007; Waytz et al., 2010), the account of dehumanization largely falls out of a primary focus on understanding the nature, causes, and consequences of perceiving mind in other entities, human and nonhuman. Dehumanization is understood as dementalization.

A key insight of this work is that perceptions of mind may not be unitary and different entities may be ascribed different kinds of mind. Work by Gray and colleagues (2007), for example, found evidence of two dimensions of mind, which they labelled Agency and Experience. Agency represents the degree to which entities are judged to have the capacity for thinking, planning, and self-control, with adult humans ranking high and nonhuman animals ranking low. Experience, in contrast, represents the degree to which entities are judged capable of feeling, motivation, and subjectivity, with humans and other mammals ranked high and inanimate objects, robots, dead persons, and persons in a persistent vegetative state ranking low. The mapping of these dimensions onto the dual model’s human/animal and human/object contrasts is striking, but Gray and colleagues (2007) drew out their moral implications more powerfully by linking their mind dimensions to the dimensions of moral agency and patiency. The degree to which people are seen as moral agents,


capable of responsibility and blameworthiness, is driven by their perceived capacity for Agency, and the degree to which they are seen as moral patients, capable of having moral and immoral acts perpetrated on them, tracks their perceived capacity for Experience. On this understanding, dehumanization involves mind denial, and the form of that denial may have distinct implications for the moral credentials of the target. Understanding dehumanization in terms of mind perception has benefits beyond providing clarity to moral psychologists. Research on the cognitive and motivational influences on mind perception (Waytz et al., 2010) has direct relevance for understanding the causes and effects of dehumanization, connects it to important related literatures on empathy and theory of mind, and also helps to make dehumanization processes tractable for cognitive neuroscientists. Harris and Fiske (2006), for example, evaluated dehumanization of an assortment of social groups by examining the absence of activation of social cognition centers in the brain, in the process supporting their proposal that groups stereotyped as low in warmth and competence, such as drug addicts and homeless people, are especially likely to be cognized as objects rather than social beings. This work has inspired several later studies of the neuroscience of dehumanization (e.g., Bruneau et al., 2018; Jack et al., 2013). Although the specific validity of Gray et al.’s (2007) dimensions of mind perception has been called into question (Malle, 2019; Weisman et al., 2017), and alternative structures might therefore imply different accounts of dehumanization, the idea of grounding an account of plural forms of dehumanization in an account of plural components of perceived mind has been a highly generative one. (See Chapter 9 in this volume for further discussion of the relationship between mind perception and dehumanization.)

14.1.5 David Livingstone Smith’s Account

We have examined several prominent psychological accounts of dehumanization, but it is also essential to review the most influential philosophical account, which has been set out by David Livingstone Smith in a powerful series of books (Smith, 2011, 2020, 2021). Smith’s work places a high premium on applying his theoretical account of dehumanization to historical examples of racism, genocide, and enslavement, and it aims to explain the apparently paradoxical conceptual aspects of the phenomenon rather than to drive forward an empirical research program. In these respects, it tends to take a critical perspective toward psychological accounts of dehumanization, which in their efforts to bring the concept into the laboratory and make it tractable for social psychology researchers have often lacked historical depth, theoretical ambition, and philosophical sophistication.

At the core of Smith’s account is the recognition of an ambiguity or contradiction at the heart of well-known examples of dehumanization. He argues that perceiving others as humans is automatic (Smith, 2021, p. 359).

with epistemic authority – whether they be scientists, ideologues, or others, and regardless of whether that authority is legitimate – that this human appearance is deceptive and the others are in reality subhuman. As a result, Smith (2021, p. 359) writes:

[W]hen people dehumanize others, they are saddled with two contradictory mental representations of them. And because these are starkly contradictory, they cannot both be salient simultaneously. The mind of the dehumanizer foregrounds the humanity of the other and backgrounds their subhumanity at some moments, and foregrounds their subhumanity and backgrounds their humanity at others.

Smith uses the concept of “psychological essentialism” – laypeople’s tendencies to believe that certain categories are grounded in causal essences that define their members’ true nature – to explain this contradictory tendency for people to perceive others as simultaneously human and less than human. At the level of appearance, people are invariably perceived as human-like, but when they are dehumanized these “human-seeming beings” are ascribed a subhuman essence. Dehumanization therefore represents a belief that although someone might look human, they fundamentally are not: The target of dehumanization is both human and subhuman. Smith takes issue with how some psychological writers employ the concept of psychological essentialism in their work. He argues, for example, that Leyens and colleagues’ work on infrahumanization, which formulates the phenomenon as a denial to out-groups of the “human essence” (represented by uniquely human emotions inter alia), is problematic because infrahumanization is conceptualized as a matter of degree whereas essences are either present or absent. To be infrahumanized should therefore constitute a categorical loss of humanness, not merely a partial or relative attenuation of humanness. Similarly, Smith criticizes Haslam’s account, which argues (and finds; Haslam et al., 2005) that human nature attributes are essentialized and that mechanistic dehumanization therefore involves a denial of fundamentally human qualities to people. To Smith, phenotypical attributes such as traits are not the relevant locus of essentialist thinking in dehumanization and such attributes cannot qualify as causal essences regardless. Ultimately, these critiques hinge on unresolved theoretical disagreements about the nature of psychological essentialism. At root, Smith’s account holds that essentialism operates at the level of the human category, that people construe essences in a categorical fashion, and that dehumanization is therefore a categorical perception that the target is nonhuman by virtue of having a subhuman essence. Leyens’ account also holds that essentialism operates at the level of the human category but conceptualizes the essence as a matter of degree: People are perceived to have the human essence to the extent they are ascribed uniquely human attributes. Haslam’s account places essentialism at the level of human attributes, not the category as a whole, and understands essentialist beliefs about human attributes to vary by degrees: The extent to which particular human qualities are seen as deep-seated, fixed, either/or, and defining falls on a continuum (Haslam et al., 2004).

The upshot of these disagreements is that Leyens and Haslam construe dehumanization as implicating essentialist thinking but occurring on a spectrum, a default assumption in quantitative social science if not in philosophy. By their accounts, dehumanization amounts to seeing others as less human, or as lesser humans, rather than as nonhuman. In contrast, Smith construes it as a categorical phenomenon: Either someone is dehumanized or they are not. This view raises challenging questions: If dehumanization is either/or, why does it appear to vary in severity or blatancy, and how can a person be dehumanized and yet still be perceived as human in some respects? Smith’s account resolves these questions by proposing that two simultaneous representations exist, one human (appearance) and one subhuman (essence). This account therefore has to posit dual representations of fluctuating relative salience to accommodate varying levels or severities of dehumanization, or it has to use “dehumanization” to refer only to the starkest examples and exclude subtler but otherwise very similar phenomena. In contrast, the psychological accounts can accommodate these variations more parsimoniously by invoking single representations that vary by degrees on a continuum from subtle to severe. Whether that parsimony is gained at the cost of a misunderstanding of psychological essentialism and an overly expansive definition of dehumanization remains open to debate.

14.2 The Diversity of Dehumanization: Clarifying Forms and Definitions

This extended review of psychological theories of dehumanization emphasizes the diversity of their descriptive and explanatory approaches. It sets aside the large body of research that applies these theories to such questions as which people are most likely to dehumanize others, which people are most likely to be dehumanized, which situational and psychological factors promote dehumanization, what its effects are, and how it might be reduced. This now very extensive literature is out of scope for the present chapter but is reviewed in Kteily and Landry (2022) and Haslam and Loughnan (2014). For present purposes it is more important to clarify the range of meanings that dehumanization has acquired in recent psychological work. Definitional issues are sometimes ignored, but the concept of dehumanization is intrinsically slippery, the term is used in disparate ways by different writers, and discrepant conceptualizations may contribute to misunderstandings between scholars who approach the topic from different disciplinary backgrounds. In this section of the chapter, I unpack several dimensions along which understandings of dehumanization vary within psychological theory and research.

14.2.1 Blatant versus Subtle

The most commonly discussed examples of dehumanization from history fall at one extreme on a dimension of blatancy or explicitness. When a human group is
directly likened to a reviled animal and that likening carries an obvious implication for how the group should be treated, there can be no mistaking the severity of the dehumanization. As we have seen, however, some accounts of dehumanization refer to much subtler and milder phenomena. When a group is infrahumanized, there is no explicit likening to a nonhuman entity, and both perceivers and targets may be unaware that targets are being perceived as lesser humans. The phenomenon itself may simply be an unconscious and unmotivated tendency not to ascribe some emotions, both negative and positive, to the out-group. Other research on dehumanization deliberately examines unconscious or automatic processes by assessing implicit associations between groups and either humanness-related traits or nonhuman words, using the Implicit Association Test or similar procedures (e.g., Loughnan & Haslam, 2007; Vaes et al., 2011). Research of this kind may reveal that people mentally associate out-groups with animals or do not associate them with uniquely human traits, but those associations are likely to be entirely unknown to the research participant and sincerely deniable by them. In cases such as these, dehumanization has occurred according to some existing accounts, even if they fall well short of prototypical cases from world history. Although researchers have tended to emphasize the more subtle forms of dehumanization over the past two decades, blatant forms have also received some attention. The important work of Kteily, Bruneau, and colleagues is especially relevant (e.g., Kteily & Bruneau, 2017; Kteily et al., 2015). They have shown repeatedly that many people are willing to adjudge some groups as less than fully human when asked to rate them on the well-known "Ascent of man" scale, which depicts an evolutionary sequence from ape (0) to modern human (100). This blatant form of dehumanization, which appears to be strongest toward groups with which study participants are experiencing conflict, such as Muslim terrorists in the US and Roma in Hungary, is only moderately correlated with prejudice toward those groups, demonstrating that dehumanization and humanness are not reducible to hatred and negativity. Equally importantly, blatant dehumanization measured by the scale predicts a range of aggressive, punitive, and callous responses to its targets independently of prejudice. Blatant forms of dehumanization have also been examined in studies of animal metaphors (e.g., Haslam et al., 2011), which make the important point that not all animal metaphors are judged to be dehumanizing when applied to people, and that animals whose metaphorical use is judged to be offensive tend either to be seen as disgusting or to be seen as hierarchically beneath humans (e.g., apes; Brandt & Reyna, 2011) in a way that implies a dehumanizing comparison.

14.2.2 Absolute versus Relative

Just as contemporary research on dehumanization encompasses subtle as well as blatant expressions, it also encompasses cases in which an out-group is seen as lacking humanness in itself and cases in which that lack is simply relative to the
in-group. The prototypical cases of historical dehumanization are absolute in this sense, as they viewed ethnic or racial groups not merely as less human than other groups but as categorically and intrinsically subhuman. However, some forms of dehumanization recognized by contemporary psychological researchers are entirely relative in this sense. The infrahumanization effect involves the lesser attribution of uniquely human emotions to the out-group relative to the in-group and therefore need not involve any absolute or wholesale denial of the out-group’s capacity to experience refined emotions. Misunderstanding this point has implications for some challenges to the role of dehumanization in violence, to which I will return later in this chapter.

14.2.3 Animal versus Object

Prototypical examples of dehumanization invariably involve animal metaphors. Although animals may constitute the most salient contrast to the human category, and hence a primary basis for dehumanizing comparisons, Haslam's (2006) dual model insists that dehumanization can occur on a second register. The human/object contrast may not provide as many ready-to-hand dehumanizing metaphors as the human/animal distinction. Nevertheless, people spontaneously define humanness relative to unfeeling machines, have cross-culturally consistent understandings of human nature that are orthogonal to their understandings of human uniqueness, and differentiate humans from objects on a dimension of perceived mind that is more powerful than the one that differentiates humans from animals, according to Gray et al. (2007). Failing to recognize how humans can be denied human nature and likened to objects would overlook many proposed manifestations of dehumanization that do not fit the animalistic mold. These examples include the literal objectification of women (Loughnan et al., 2010), the reduction of patients to unfeeling bodies in some medical practices (Haque & Waytz, 2012), and the treatment of employees as fungible instruments in organizations (Brison et al., 2022). The concept of dehumanization is necessarily broader than a single framing of humanness in opposition to animals allows.

14.2.4 Simple versus Complex

A final contrast is between simple models of dehumanization and more complex ones. The simple case that dominates standard conceptualizations of dehumanization is that dominant group A holds and expresses a dehumanizing view of subordinate group B. Recent psychological work on the topic complicates this picture in several ways: 1) A may dehumanize itself; 2) A and B may dehumanize one another reciprocally; and 3) A and B may both believe that the other dehumanizes them. The first of these complexities may seem implausible, but researchers have documented several cases in which people hold transient dehumanizing self-perceptions, sometimes in response to externally inflicted events such as social
exclusion (Bastian & Haslam, 2010), sometimes as a side effect of engaging in violent behavior (Bastian et al., 2012), and sometimes as an enabler of immoral action (Kouchaki et al., 2018). A perceived need to restore humanity to the self or group may motivate moral conduct.

The second complexity, reciprocal dehumanization, has been studied primarily by Kteily and colleagues in recognition that dehumanizing perceptions and the actions they drive may engender a matching response. Kteily and Bruneau (2017), for example, demonstrated that Latinos and Muslims in the US held dehumanizing perceptions (assessed on the Ascent scale) of Republicans and Republican candidates during the 2016 Republican primaries, in response to accurately perceived dehumanization by them. These reciprocal perceptions in turn drove reduced willingness to assist with counterterrorism efforts and increased desire to engage in violent protest. Their work demonstrated that dehumanization can be cyclical, mutual, and self-reinforcing and that it may be symmetrical even in asymmetrical conflicts.

The third complexity is linked to the second: A major trigger of reciprocated dehumanization is the perception that one group has been dehumanized by the other. This "meta-dehumanization" – feeling or learning that one's group is not seen as fully human by an out-group – has become a major focus of research, starting with the work of Kteily et al. (2016). Their studies found, for example, that feeling dehumanized was distinct from feeling disliked or hated (meta-prejudice) and that meta-dehumanization was associated with support for aggressive actions and policies toward out-groups independently of meta-prejudice. Recent work underscores the vital importance of meta-dehumanization, showing that it predicts hostility more strongly than meta-prejudice (Landry et al., 2021) and that meta-humanization – feeling or learning that one's group is seen as fully human – may effectively counteract prejudice (Pavetich & Stathi, 2021).

To summarize this section of the chapter, the recent psychology of dehumanization has extended its meanings well beyond the standard understanding embodied in well-known historical examples from the Holocaust and the Rwandan genocide. The prototype of dehumanization is blatant, categorical, and one-directional in applying an animal metaphor to a target group. However, dehumanization extends to cases that are subtle and unconscious, that are matters of degree defined relative to an in-group rather than absolute denials of humanity, that assimilate humans to inanimate objects rather than animals, and that occur in reciprocal cycles driven by meta-perceptions.

14.3 The Roles of Dehumanization in Intergroup Conflict

Researchers have identified an extensive range of adverse correlates or consequences of dehumanization across a wide array of social groups (see Haslam & Loughnan, 2014, for a review). Studies have shown that people who dehumanize a group are especially likely to respond to that group
with dislike, disgust, callous neglect, racism, and social rejection. Dehumanizers demonstrate greater willingness to harm and greater unwillingness to help those they dehumanize. They are more likely to support policies and practices of torture, harsh punishment, retaliatory violence, military intervention, and forced population displacement. They show greater opposition to immigration and the integration of minority groups and support active discrimination against them. Men who dehumanize women demonstrate a greater propensity to sexually harass them, to endorse sexist attitudes, and to score high on measures of proclivity to rape. People who dehumanize a group report less guilt and moral concern over injustices that befall it. The span of dehumanization’s potential consequences appears to be broad indeed. Even so, precisely how those consequences occur is not always evident. Much of the research evidence is correlational, unable to determine to what degree dehumanization plays a causal role in the harms with which it is associated or to clarify its mechanisms or functions. Although a great deal of further research is required to elucidate these mechanisms and functions, it is helpful to distinguish them temporally, according to whether they precede, accompany, or follow a harmful action or inaction.

14.3.1 Before Harm

The role of dehumanization in conflict has often been thought of as preparatory. Dehumanizing a group is seen as creating the conditions under which violence flourishes. Several lines of theory and research help explain how that preparation occurs. The most relevant approaches to dehumanization for this purpose are those that conceptualize it as an enduring set of beliefs or perceptions rather than as an episodic phenomenon or one that motivates specific actions. The infrahumanization effect, for example, implies that ethnocentrism is a bedrock phenomenon in which groups perceive themselves to be more human than others. At the most basic level, Leyens and colleagues would claim, groups are predisposed to judge others as less good and less human than themselves. Intergroup relations start from a position in which a subtle degree of dehumanization is present even before conflict arises, and infrahumanization may make conflict more likely to develop than if that imbalance in perceived humanness did not exist. Approaches to dehumanization grounded in trait-like dimensions of humanness or mind or in models of stereotype content are similar in this regard. In the dual model, out-groups can be located on dimensions of human uniqueness and human nature according to their members' perceived attributes, and those locations may influence how more severe forms of dehumanization arise under conditions of conflict. For example, out-groups perceived as lacking in uniquely human qualities (e.g., perceived as unintelligent and lacking in self-control) are at risk of being seen as bestial and barbaric in times of conflict, and the more they are seen as lacking those qualities the greater that
risk may be. The same may be inferred from the mind perception account of dehumanization. Groups that are judged to lack one dimension of mind may be especially likely to be mistreated on account of their associated (perceived) deficiencies. A group believed to lack Experiential mental capacities should be especially likely to be treated badly when conflict arises, as it might be seen as lacking the capacity to suffer. This link between group stereotypes and vulnerability to mistreatment is clearest in Harris and Fiske’s (2006) model of dehumanization, which identifies groups stereotyped as low in warmth and competence as at particular risk.

14.3.2 During Harm

Dehumanization in these varied theoretical models can help to explain how and why intergroup conflict may erupt in harmful behavior. Other models emphasize how dehumanization enables aggression in the present by releasing restraints on harming others. This element is most prominent in Bandura's early work on moral disengagement (e.g., Bandura et al., 1975). In effect, Bandura proposes that perceiving the target of potential harm as less than fully human overcomes normal inhibitions that block it, such as anticipatory guilt and moral compunction. In the heat of conflict, dehumanizing the out-group may enable harmful behavior and dissolve objections to it when it is carried out by members of one's own group. Delegitimization and inflammatory metaphors can also be understood to play a part in enabling harm or violence in the present. Bar-Tal's (1989) work on intergroup conflict emphasized how withdrawing the legitimacy of an enemy group can take place in the emotional heat of conflict and is commonly expressed in vilifying labels. In this line of work, dehumanizing language expresses and reinforces a view of the other group as undeserving of concern in the here and now.

14.3.3 After Harm

Although dehumanization is usually examined as a process that prepares the ground for future harm and enables the commission of present harm, researchers have also examined how people dehumanize victims after harm has been perpetrated against them. In this work, people have several motives for dehumanizing harmed out-groups, including rationalizing the harm, minimizing its magnitude, and reducing guilt and collective responsibility for committing the harm. For example, Castano and Giner-Sorolla (2006) found that people whose nations had killed indigenous inhabitants of their colonies denied those inhabitants the capacity for uniquely human emotions more when reminded of past atrocities. Similarly, Cehajic et al. (2009) showed that Serbians reminded of atrocities committed against Bosnian Muslims during the Bosnian War infrahumanized them. Other research has suggested that dehumanization may serve not only to whitewash the past but also to obstruct future
reconciliation. Tam and colleagues (2007) showed that Northern Irish participants on the Protestant and Catholic sides who denied uniquely human emotions to their adversaries were less inclined to forgive, to abandon historical grievances, and to seek rapprochement.

14.3.4 The Case of Genocide

These discussions indicate that dehumanization may be thoroughly enmeshed in several aspects of intergroup conflict rather than merely setting the stage for it. The multiple involvements of dehumanization across the life history of conflict can be combined to offer an expanded framing of the role of dehumanization in genocide in particular (see Haslam, 2019, for a more fully developed analysis). According to one prominent model of the genocidal process (Stanton, 1998), dehumanization is a factor that is specific to the third of eight stages. Us–them classification of in-group and out-group (stage 1) is succeeded by the symbolizing of group identities by labeling or visible signs such as distinctive dress (stage 2). Dehumanizing expressions, taking the form of hateful animal metaphors, are then disseminated by perpetrator groups (stage 3) and promote the formation of organized militias (stage 4). Extremist voices polarize intergroup relations (stage 5) and extreme actions such as segregation of the victim group (stage 6) prepare the way for mass killings (stage 7), which are covered up after the fact by concealment of evidence and denials (stage 8). This stage model may be accurate as far as it goes, but its delimited role for dehumanization rests on a narrow understanding of the phenomenon as blatant, absolute, animal metaphor-based, and one-directional. As I have shown, contemporary dehumanization theory and research provide a much more expansive account of dehumanization's varied forms. Taking up that account allows for fuller understanding of the role that dehumanization might play in multiple stages of the genocide process. For example, infrahumanization research indicates that even prior to conflict, in-groups tend to perceive out-groups as less human than they are, ensuring that even at stage 1 the us–them distinction incorporates a tacit assumption that the out-group is composed of lesser humans. Similarly, research indicates that the derogatory group labels and stereotypes of Stanton's stage 2, which in theory occur prior to the overt adoption of dehumanizing animal metaphors, are often accompanied by covert and perhaps unconscious dehumanizing associations. The work of Goff et al. (2008), for example, showed that White Americans implicitly associate Black Americans with apes and that this unconscious metaphorical association is implicated in hostile and punitive responses to Black targets. Implicit forms of dehumanization may therefore be consequential aspects of the symbolization of out-groups, before dehumanizing metaphors are spread by organized propaganda. The processes of planning, polarization, segregation, and killing (stages 4–7) may also be propelled by the dynamics of dehumanization rather than merely set up by them, as Stanton's stage model asserts. This view is shared by genocide scholars Haagensen and Croes (2012), who write: "Dehumanization . . . does
not only play a role in preparing the ground for genocide; it also drives genocidal campaigns forward, sustaining killing operations over lengths of time” (p. 225). For example, Kteily et al.’s (2016) work demonstrates that the blatant dehumanization of out-groups is associated with endorsement of extreme or polarized policy positions, a finding also obtained by Maoz and McCauley (2008) in a study of the Israel-Palestine conflict. Moral exclusion surely plays a role in driving popular support for segregation of a targeted minority, and moral disengagement has been shown to release inhibitions on violent action, potentially including acts of mass killing. Stage 8 denial may also implicate dehumanization, which has been shown to help reduce collective guilt, rationalize past in-group violence, diminish support for providing reparations to victims, and minimize perceptions of their suffering in contexts including indigenous–settler relations, civil war, and religious strife. In sum, it is possible that out-group dehumanization plays a variety of roles through the genocide process rather than being confined to a single stage. This speculation may even be too timid, as other forms of dehumanization may also be implicated. Self-dehumanization (Kouchaki et al., 2018) may play a causal role in enabling the commission of violence or other immoral actions. Some descents into genocide may also be accelerated by reciprocal dehumanization between conflicting groups. When dehumanization is understood in an expansive way that is grounded in recent research and scholarship, the processes through which it might contribute to particular cases of violence are many and varied, extending well beyond the traditional emphasis on hateful propaganda.

14.4 Critiques

Over the past decade, several scholars have challenged the role of dehumanization in intergroup conflict in general or genocide in particular. Their interconnected critiques constitute a backlash of sorts against the growing dehumanization literature, one that has the potential to refine, qualify, and enhance that literature. Three main challenges can be identified: that the role of dehumanization in conflict has been overstated; that some violence attributed to dehumanization only makes sense if the victim is perceived as human; and that some supposed cases of dehumanization are better understood in terms of valence rather than humanness. These challenges are interlocking, but they will be examined separately.

The argument that dehumanization's role in collective violence has been exaggerated has been articulated cogently by Lang (2010, 2020). Lang maintains that genocidal violence is grounded much more in hatred than in dehumanization and that the phenomenology of group-based cruelty often makes sense only in the context of a desire to punish and hurt despised fellow humans. Lang and colleagues set up their critique in opposition to a putative "dehumanization hypothesis" that claims that "dehumanization of the victims is a necessary precondition for killing them" (Brudholm & Lang, 2021, p. 346).

Lang's challenge is a crucial one. Dehumanization scholars should resist the urge to inflate the role of dehumanization in collective violence and should evaluate carefully how or when dehumanization, hatred, their toxic combination, or something else entirely accounts for it. Nevertheless, the challenge attacks a straw man, as contemporary dehumanization researchers – in psychology at least – make no strong claims about the necessity of dehumanization as a precondition for killing. Instead, they have proposed a variety of ways in which dehumanization might contribute to violence. Recent scholarship offers an account of dehumanization that is broader and more textured than the narrow-sense account held by some early genocide scholars, and it offers an expanded range of dehumanization processes that might be implicated in violence. Determining whether or to what degree these processes operate in actual genocides is a task for further investigation. That task will be difficult because so many of the proposed processes occur in the conscious and unconscious minds of the protagonists during the conflict rather than in the public record. A narrow understanding of dehumanization that only sees evidence of it when hateful animal metaphors are used in public fora will overlook the many subtle ways that dehumanizing perceptions operate in private minds and hearts and in interpersonal and intergroup dynamics. Ideally, future studies will avoid maximalist claims that treat the centrality of dehumanization to genocide as self-evident and minimalist accounts that, based on constricted understandings of dehumanization, see it as marginal.

A second challenge to dehumanization scholarship is the claim that many acts of violence ascribed to dehumanization only make sense if the perpetrator perceives the victim as human. Efforts to inflict pain and humiliation depend on the conviction that victims are not insensate objects and that they have the uniquely human capacity to feel shame and the degrading loss of human dignity. Arguments of this kind about the "dehumanization paradox" have been proposed by Appiah (2006) and Manne (2016), among others. If dehumanization were the total, categorical denial of a person's humanity, such that they were perceived as entirely equivalent to objects or beasts, this argument might have merit. As we have seen, however, dehumanization is not conceptualized in this fashion in contemporary theory and research. In most psychological accounts it is understood to vary in degrees of severity, including cases where the person's humanness is perceived as attenuated rather than annihilated. In another recent psychological account, Kteily and Landry (2022) defend the related view that multiple aspects of humanness may be denied in variegated ways, rather than all being denied at once. In Smith's account, the categorical denial of humanness is accompanied by a simultaneous representation of the dehumanized person's humanity. These conceptualizations all allow that a person can be perceived as unequivocally human – albeit a lesser human – and therefore capable of pain, mortification, and other moral emotions, though perhaps to a lesser degree than perpetrators arrogate to themselves. As Kronfeldner (2021b, p. 15) argues, the paradox of dehumanization "can easily be dissolved since the core of it rests on an equivocation of
‘being human': many cases of dehumanization seem to involve a recognition of the bare humanity of the targets, while (parts of) their humanness and/or their moral standing is ignored or destroyed."

The third main challenge to recent dehumanization research queries whether dehumanization is in fact redundant with negative valence – that is, whether it is genuinely distinct from hatred, dislike, or prejudice. This challenge has recently been posed by Over and colleagues (e.g., Enock et al., 2021; Over, 2021), who argue that some findings claimed to reflect infrahumanization or dehumanization in fact reflect intergroup preference – the subject of a vast social psychological literature – rather than denials of humanness. Their work, the empirical details of which are beyond the scope of this brief review, is significant in drawing critical attention to the conceptual claims and evidence base of social psychological research on dehumanization. Although some writers have stressed the importance of differentiating dehumanization from negative evaluation both conceptually and empirically (e.g., Haslam, 2013), researchers have not consistently done so in their studies, and it is therefore probable that some findings ascribed to the dehumanization of a group may partially or wholly reflect prejudice or dislike toward it. Nevertheless, the argument that dehumanization as studied in social psychological research is nothing but negative evaluation is highly implausible. Emotion- and trait-based measures of humanness have deliberately controlled for the valence of their emotions and traits and demonstrated effects of humanness in group perception tasks that are separable from valence effects, including studies that statistically control for valence (e.g., Haslam et al., 2005). Studies using the "Ascent of man" measure of dehumanization (e.g., Kteily et al., 2015) show, unsurprisingly, that the extent to which groups are dehumanized correlates moderately with the extent to which they are evaluated negatively, as assessed by a feeling thermometer. However, they also show that dehumanization predicts responses to groups independently of prejudice. In some cases, dehumanization-based measures predict the harshness of responses more strongly than prejudice-based measures (Landry et al., 2021). Dehumanization and negative evaluation even appear to have distinct neural signatures (Bruneau et al., 2018). Findings such as these indicate that dehumanization and negative valence are often entwined – the groups we see as less than fully human are often the ones we dislike – but they are incompatible with the view that one can be reduced to the other.

14.5 Conclusions

Scholarship on dehumanization has burgeoned in the past two decades, generating an assortment of new theoretical accounts and a voluminous empirical literature. The rapid growth of this work has produced substantial evidence for the malign implications of dehumanization in the context of intergroup conflict. It has also generated an array of alternative conceptualizations and
often poorly specified definitions that has triggered substantial debates and disagreements and motivated a range of conceptual and empirical critiques. Humanness is surely a central concept in moral psychology, and the stage is set for a new generation of scholars to clarify it and to determine how it can best be deployed to understand the continuing fact of human inhumanity.

References

Appiah, A. (2006). Cosmopolitanism: Ethics in a world of strangers. Norton.
Bain, P., Vaes, J., Haslam, N., Kashima, Y., & Guan, Y. (2012). Folk psychologies of humanness: Beliefs about distinctive and core human characteristics in Australia, Italy, and China. Journal of Cross-Cultural Psychology, 43(1), 53–58.
Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and Social Psychology Review, 3(3), 193–209.
Bandura, A., Underwood, B., & Fromson, M. E. (1975). Disinhibition of aggression through diffusion of responsibility and dehumanization of victims. Journal of Research in Personality, 9(4), 253–269.
Bar-Tal, D. (1989). Delegitimization: The extreme case of stereotyping. In D. Bar-Tal, C. Grauman, A. Kruglanski, & W. Stroebe (Eds.), Stereotyping and prejudice: Changing conceptions (pp. 169–188). Springer.
Bastian, B., & Haslam, N. (2010). Excluded from humanity: Ostracism and dehumanization. Journal of Experimental Social Psychology, 46(1), 107–113.
Bastian, B., Jetten, J., & Radke, H. R. (2012). Cyber-dehumanization: Violent video game play diminishes our humanity. Journal of Experimental Social Psychology, 48(2), 486–491.
Bastian, B., Laham, S. M., Wilson, S., Haslam, N., & Koval, P. (2011). Blaming, praising, and protecting our humanity: The implications of everyday dehumanization for judgments of moral status. British Journal of Social Psychology, 50(3), 469–483.
Brandt, M. J., & Reyna, C. (2011). The chain of being: A hierarchy of morality. Perspectives on Psychological Science, 6(5), 428–446.
Brewer, M. B. (1999). The psychology of prejudice: Ingroup love and outgroup hate? Journal of Social Issues, 55(3), 429–444.
Brison, N., Stinglhamber, F., & Caesens, G. (2022). Organizational dehumanization. In Oxford research encyclopedia of psychology. https://doi.org/10.1093/acrefore/9780190236557.013.902
Brudholm, T., & Lang, J. (2021). On hatred and dehumanization. In M. Kronfeldner (Ed.), The Routledge handbook of dehumanization (pp. 341–354). Routledge.
Bruneau, E., Jacoby, N., Kteily, N., & Saxe, R. (2018). Denying humanity: The distinct neural correlates of blatant dehumanization. Journal of Experimental Psychology: General, 147(7), 1078–1093.
Castano, E., & Giner-Sorolla, R. (2006). Not quite human: Infrahumanization in response to collective responsibility for intergroup killing. Journal of Personality and Social Psychology, 90(5), 804–818.
Cehajic, S., Brown, R., & Gonzalez, R. (2009). What do I care? Perceived ingroup responsibility and dehumanization as predictors of empathy felt for the victim group. Group Processes & Intergroup Relations, 12(6), 715–729.
Demoulin, S., Leyens, J-P., Paladino, M., Rodriguez-Torres, R., Rodriguez-Perez, A., & Dovidio, J. (2004). Dimensions of "uniquely" and "non-uniquely" human emotions. Cognition and Emotion, 18(1), 71–96.
Enock, F. E., Flavell, J. C., Tipper, S. P., & Over, H. (2021). No convincing evidence outgroups are denied uniquely human characteristics: Distinguishing intergroup preference from trait-based dehumanization. Cognition, 212, Article 104682.
Goff, P., Eberhardt, J., Williams, M., & Jackson, M. (2008). Not yet human: Implicit knowledge, historical dehumanization, and contemporary consequences. Journal of Personality and Social Psychology, 94(2), 292–306.
Gray, H., Gray, K., & Wegner, D. (2007). Dimensions of mind perception. Science, 315(5812), Article 619.
Haagensen, L., & Croes, M. (2012). Thy brother's keeper? The relationship between social distance and intensity of dehumanization during genocide. Genocide Studies and Prevention: An International Journal, 7(2), Article 7.
Haque, O., & Waytz, A. (2012). Dehumanization in medicine: Causes, solutions, and functions. Perspectives on Psychological Science, 7(2), 176–186.
Harris, L., & Fiske, S. (2006). Dehumanizing the lowest of the low: Neuroimaging responses to extreme out-groups. Psychological Science, 17(10), 847–853.
Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10(3), 252–264.
Haslam, N. (2013). What is dehumanization? In P. Bain, J. Vaes, & J. P. Leyens (Eds.), Humanness and dehumanization (pp. 34–48). Psychology Press.
Haslam, N. (2019). The many roles of dehumanization in genocide. In L. S. Newman (Ed.), Confronting humanity at its worst: The social psychology of genocide and extreme intergroup violence (pp. 119–138). Oxford University Press.
Haslam, N. (2021). The social psychology of dehumanization. In M. Kronfeldner (Ed.), The Routledge handbook of dehumanization (pp. 127–144). Routledge.
Haslam, N., Bain, P., Douge, L., Lee, M., & Bastian, B. (2005). More human than you: Attributing humanness to self and others. Journal of Personality and Social Psychology, 89(6), 937–950.
Haslam, N., Bastian, B., & Bissett, M. (2004). Essentialist beliefs about personality and their implications. Personality and Social Psychology Bulletin, 30(12), 1661–1673.
Haslam, N., & Loughnan, S. (2014). Dehumanization and infrahumanization. Annual Review of Psychology, 65, 399–423.
Haslam, N., Loughnan, S., & Sun, P. (2011). Beastly: What makes animal metaphors offensive? Journal of Language and Social Psychology, 30(3), 311–325.
Jack, A. I., Dawson, A. J., & Norr, M. E. (2013). Seeing human: Distinct and overlapping neural signatures associated with two forms of dehumanization. Neuroimage, 79, 313–328.
Kelman, H. (1976). Violence without restraint: Reflections on the dehumanization of victims and victimizers. In G. Kren & L. Rappoport (Eds.), Varieties of psychohistory (pp. 282–314). Springer.
Kouchaki, M., Dobson, K. S. H., Waytz, A., & Kteily, N. S. (2018). The link between self-dehumanization and immoral behavior. Psychological Science, 29(8), 1234–1246.
Kronfeldner, M. (2021a). Psychological essentialism and dehumanization. In M. Kronfeldner (Ed.), The Routledge handbook of dehumanization (pp. 362–377). Routledge.
Kronfeldner, M. (Ed.). (2021b). The Routledge handbook of dehumanization. Routledge.
Kteily, N., & Bruneau, E. (2017). Backlash: The politics and real-world consequences of minority group dehumanization. Personality and Social Psychology Bulletin, 43(1), 87–104.
Kteily, N., Bruneau, E., Waytz, A., & Cotterill, S. (2015). The ascent of man: Theoretical and empirical evidence for blatant dehumanization. Journal of Personality and Social Psychology, 109(5), 901–931.
Kteily, N., Hodson, G., & Bruneau, E. (2016). They see us as less than human: Metadehumanization predicts intergroup conflict via reciprocal dehumanization. Journal of Personality and Social Psychology, 110(3), 343–370.
Kteily, N. S., & Landry, A. P. (2022). Dehumanization: Trends, insights, and challenges. Trends in Cognitive Sciences, 26(3), 222–240.
Landry, A. P., Ihm, E., & Schooler, J. W. (2021). Hated but still human: Metadehumanization leads to greater hostility than metaprejudice. Group Processes & Intergroup Relations, 25(2), 315–334.
Lang, J. (2010). Questioning dehumanization: Intersubjective dimensions of violence in the Nazi concentration and death camps. Holocaust and Genocide Studies, 24(2), 225–246.
Lang, J. (2020). The limited importance of dehumanization in collective violence. Current Opinion in Psychology, 35, 17–20.
Leyens, J-P., Demoulin, S., Vaes, J., Gaunt, R., & Paladino, M. (2007). Infra-humanization: The wall of group differences. Social Issues and Policy Review, 1(1), 139–172.
Leyens, J-P., Rodriguez-Torres, R., Rodríguez-Pérez, A., Gaunt, R., Paladino, M., Vaes, J., & Demoulin, S. (2001). Psychological essentialism and the attribution of uniquely human emotions to ingroups and outgroups. European Journal of Social Psychology, 31(4), 395–411.
Loughnan, S., & Haslam, N. (2007). Animals and androids: Implicit associations between social categories and nonhumans. Psychological Science, 18(2), 116–121.
Loughnan, S., Haslam, N., Murnane, T., Vaes, J., Reynolds, C., & Suitner, C. (2010). Objectification leads to depersonalization: The denial of mind and moral concern to objectified others. European Journal of Social Psychology, 40(5), 709–717.
Malle, B. F. (2019). How many dimensions of mind perception really are there? In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (pp. 2268–2274). Cognitive Science Society.
Manne, K. (2016). Humanism: A critique. Social Theory and Practice, 42(2), 389–415.
Maoz, I., & McCauley, C. (2008). Threat, dehumanization, and support for retaliatory aggressive policies in asymmetric conflict. Journal of Conflict Resolution, 52(1), 93–116.
Opotow, S. (1990). Moral exclusion and injustice: An introduction. Journal of Social Issues, 46(1), 173–182.
Over, H. (2021). Seven challenges for the dehumanization hypothesis. Perspectives on Psychological Science, 16(1), 3–13.
Pavetich, M., & Stathi, S. (2021). Meta-humanization reduces prejudice, even under high intergroup threat. Journal of Personality and Social Psychology, 120(3), 651–671.
Pizzirani, B., Karantzas, G. C., Roisman, G. I., & Simpson, J. A. (2021). Early childhood antecedents of dehumanization perpetration in adult romantic relationships. Social Psychological and Personality Science, 12(7), 1175–1183.
Smith, D. L. (2011). Less than human: Why we demean, enslave, and exterminate others. St Martin's Press.
Smith, D. L. (2020). On inhumanity: Dehumanization and how to resist it. Oxford University Press.
Smith, D. L. (2021). Making monsters: The uncanny power of dehumanization. Harvard University Press.
Stanton, G. H. (1998). The 8 stages of genocide. Genocide Watch, 1. https://www.genocidewatch.com/tenstages
Staub, E. (1989). The roots of evil: The origins of genocide and other group violence. Cambridge University Press.
Tam, T., Hewstone, M., Cairns, E., Tausch, N., Maio, G., & Kenworthy, J. (2007). The impact of intergroup emotions on forgiveness in Northern Ireland. Group Processes & Intergroup Relations, 10(1), 119–136.
Vaes, J., Paladino, M., & Puvia, E. (2011). Are sexualized females complete human beings? Why males and females dehumanize sexually objectified women. European Journal of Social Psychology, 41(6), 774–785.
Waytz, A., Gray, K., Epley, N., & Wegner, D. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383–388.
Weisman, K., Dweck, C. S., & Markman, E. M. (2017). Rethinking people's conceptions of mental life. Proceedings of the National Academy of Sciences, 114(43), 11374–11379.

15 Blame and Punishment: Two Distinct Mechanisms for Regulating Moral Behavior

Bertram F. Malle

15.1 Morality and Its Regulation

To regulate group living, humans have developed two related social-cultural tools: a system of norms and complex social practices of regulating those norms. Central to norm regulation are responses to violations of norms. They include conciliatory ones, such as tolerance or forgiveness, and corrective ones, such as blame and punishment. Blame is often treated as parallel to punishment, as two currencies of moral sanction or norm enforcement (Ames & Fiske, 2013; Berger & Hevenstone, 2016; Cushman, 2008). Indeed, both blaming and punishing can be pedagogical (Cushman, 2015; Malle et al., 2014), and both impose costs on the offender. However, the two are also importantly distinct (Baumard, 2011; Buckholtz et al., 2015; Malle, 2021). Blaming a transgressor is both a particular type of moral judgment (Malle, 2021) and, when socially expressed, an overt act of moral criticism. Punishment is always an overt act – and not merely one of criticism but one of damage. Because it would, in other circumstances, violate a person's rights, punishment must be warranted or legitimized, typically by roles or institutions, such as parent, teacher, or state. Blame, too, must be warranted (Coates & Tognazzini, 2012; Malle et al., 2022), but blame can be withdrawn when found unwarranted whereas punishment cannot. Once inflicted, the damage of punishment is rarely reversible. This is just a rough comparison. In this chapter, I investigate blame and punishment from two perspectives: their cultural history and their psychology. I also hope to make plausible the hypothesis that the distinct cultural histories of blame and punishment have shaped their distinct psychology. I should acknowledge at the outset that some literatures use the term punishment as another word for all forms of norm enforcement or sanction and then subsume blame (as verbal criticism) under the label punishment (for a more nuanced form of this position, see Chapter 16, this volume). I agree that both blame and punishment are forms of norm enforcement, but I hope to convince the reader that distinguishing between the phenomena of blame and punishment as well as between the corresponding terms helps clarify the important differences among the various tools of human norm enforcement.

15.2 Cultural History of Blame and Punishment

In human cultural evolution, responses to an individual's norm violations have undergone enormous changes, often characterized as a progression from a brutal past (Farrington, 1996) to more "humane" present times (Pinker, 2011). However, if we look closely, there were really two distinct phases of human social living (Dubreuil, 2010) that gave rise to two different responses to norm violations: blame and punishment. The first phase was a biological evolution from often violent dominance hierarchies in our primate ancestors to more egalitarian forms of human social living (Boehm, 2000), beginning perhaps as early as Homo erectus, 1 million years ago (Dubreuil, 2010). As humans evolved in small communities of hunter-gatherers, they enforced social-moral norms with sanctions that were primarily informal, interpersonal, and relatively mild (Wiessner, 2005) – much like today's informal acts of blame as moral criticism. The second phase began with human settlements after 10,000 BCE, leading to the formation of chiefdoms and early states, rapid population growth, and eventually culminating in nations and empires with large populations. Crucially, these societies reintroduced dominance hierarchies but legitimized them, not through our primate ancestors' raw strength and aggression, but through wealth, inheritance, military, and religious structures. Many norms became codified into law, shaping a system of institutionalized punitive sanctions imposed on the rest of the community by those at the top of the dominance hierarchies (e.g., chief, king, state). I now flesh out this sketch of how two phases of human evolution gave rise to two forms of sanctioning, which form the origins of today's tools of blame and punishment.

15.2.1 Hunter-Gatherer Communities and Moral Criticism

Until about 10,000 BCE, humans lived as hunter-gatherers in small bands of 25–50 (Boehm, 1999; Knauft, 1994). We know this from archeological finds (Bandy, 2004; Enloe, 2003) and population genetic analyses (Atkinson et al., 2008; Henn et al., 2011), but predominantly from ethnographic research on hunter-gatherer societies over the past 100 years (Lee & Daly, 1999; Wilson, 1988; Woodburn, 1982). According to this accumulated evidence, hunter-gatherer communities were nomadic and thus highly mobile, changing camp every few weeks to months (Lee, 1972), mostly lacking possessions, and having little sense of territorial boundaries (Wilson, 1988). They were highly egalitarian; without one supreme ruler, lawmaker, or judge, leadership was provided by different members for different tasks (Service, 1966). The hunter-gatherer norm system was grounded in reciprocity, which regulated both community living and vital activities such as food acquisition and consumption. When a hunter shared his yield with another hunter one day, he
could expect to receive part of the other's yield another day. Indeed, sharing large hunts counted as a "virtually universal rule" among hunter-gatherer societies (Wilson, 1988, p. 37), and violating this sharing norm was met with sanctions. In these highly interdependent bands, sanctioning was interpersonal. Most transgressions were easy to detect (Silberbauer, 1982) because life was public and transparent. But sanctions of norm violators rarely consisted of punishment (Baumard, 2010; Guala, 2012), since these egalitarian groups disdained assertions of power and coercion, and debilitating penalties would have hurt the community at least as much as the transgressor. Instead, communities favored communication, criticism, sometimes ridicule (Wiessner, 2005), and gossip (Dunbar, 1996). Repeat offenders may have been isolated or, as a last resort, expelled from the group (Woodburn, 1982). But the typical way of responding to norm violations was a form of public criticism, a threat to the person's social standing and reputation – which, in small, interdependent groups, largely ensured norm compliance. These civil forms of sanctions, however, occurred within groups and did not prevent highly punitive actions between groups. How much warfare occurred in early human hunter-gatherer communities is debated. But few scholars doubt that intergroup violence gradually increased with population density and with the correlated expansion from extended families to clan-based networks (Flannery & Marcus, 2012). These growth patterns set the stage for a transformation in the social organization of human societies.

15.2.2 The Emergence of Punishment Post Settlement

The second phase in the cultural evolution of sanctioning methods occurred after humans settled down, beginning around 12,000 years ago. Norm enforcement turned from an interpersonal process of criticism and reputation threat to an institutionalized process that confronted the perpetrator with the damaging actions of a ruling entity and its henchmen. The transition occurred in different places at different times, but the endpoint was, with few exceptions, a hierarchical society that centralized and legitimized harsh punishment. A few essential facts about sedentary life help explain this change (Lee, 1972; Peregrine et al., 2007; Redman, 1978).

Property. As hunter-gatherers, people possessed only tools, tents, and clothes; food was distributed and consumed right away; and land had no boundaries of ownership (Wilson, 1988). After settling down, more and more people had permanent housing, household effects, and eventually land, livestock, and crops. Quickly, some gained more property than others. Such inequalities were enlarged by inheritance rights, intermarriage, as well as reciprocal trade and protection among property owners (Peregrine et al., 2007). The have-nots desired others' property, and the crimes of robbery and burglary emerged. The haves feared losing their property, so the need for protection, law enforcement, and punishment increased.

Population Growth. Between 10,000 and 8,000 BCE, population growth was spurred by environmental changes (the end of the Last Glacial) and geographic opportunities (e.g., the fertile Near East; Aurenche et al., 2013), turning camps into villages and towns (Atkinson et al., 2008; Gignoux et al., 2011). Settlements also increased birth rates. Whereas nomadic communities had to carry their newborns for thousands of kilometers a year, limiting childbirth to once every 3–4 years (Lee, 1972), settled communities were able to dramatically increase the frequency of pregnancies (Buikstra et al., 1986). Furthermore, plant and animal breeding experiments as well as improved food production and storage technologies fed more people. In a positive feedback loop, the resulting population growth put pressure on food production and demand for labor. Consequently, a second wave of population growth turned towns into cities, several of which grew to 30,000 inhabitants or more between 2000 BCE and 1200 BCE. Another feedback loop then emerged between rapidly increasing built structures (Wilson, 1988) – from private homes to royal monuments and sacral edifices – and the massive opportunities for work to sustain this building boom (Boserup, 1965). That work was likely done by those who did not own property – the poor, foreigners, and conquered populations; thus, dependent labor and class stratification accelerated.

Intergroup Conflict. By and large, hunter-gatherer societies were peaceful (Kelly, 2000; Silberbauer, 1982). There was little reason to attack another band (what do they have that we don't?). There was also little reason to fight over territory that constantly shifted (Woodburn, 1982). But once there were settlements, some land was more fertile, some villages more productive, and fast-growing populations had a need to either intensify food production (Carneiro, 1970) or expand their territory (Dumond, 1972), especially because food resources may not have kept pace with population growth (Larsen, 1995). Thus emerged incentives for organized warfare (Milner et al., 1991; Redman, 1978) to gain land, food, raw materials, and tools, and to acquire further labor by subjugating other communities. As potential targets for attack and raids, settled communities bolstered their safety by developing weapons and defensive facilities, which further demanded an expanded labor force and eventually a professional army.

Hierarchical Social Structure. All these changes – expanded and unequally distributed property, population growth, dependent labor, and intergroup hostilities – demanded organization and administration, taken on by a few who turned into elites and authorities (Redman, 1978). New crimes emerged when this authority was violated, such as treason, social revolt, or tax evasion. Leadership structures were also necessary to maintain within-group cohesion in light of new role differentiation and resulting conflict potential (White, 2007). Settled humans were arranged in strata from slaves to freemen to landowners, eventually aristocrats and priests, and almost always a supreme ruler. In the hundreds of thousands of years prior to settlement, primate hierarchies were transformed into egalitarian, cooperative human bands; within just a few thousand years, a new society was born, returning to hierarchical and often brutally competitive primate roots.



    .   

15.2.3 Escalation of Punishment

In small hunter-gatherer communities, a norm violator had equal-status relationships with other group members, and these relationships had to be maintained for continued group success. Sanction severity for norm violations was therefore restrained because harsh penalties might invite equally harsh responses in the future and harm community relationships. By contrast, in larger settlements, ties between norm violators and the ruling class became weak and asymmetric, promising scant reciprocal future interaction and setting few limits on sanction severity. Over thousands of years of growing social hierarchies, we therefore see both increasingly centralized and increasingly brutalized forms of punishment.

Several causal processes contributed to such escalation of punishment. Punishment became more severe when rulers defended themselves against threats to their position. For example, in eighteenth-century Tahiti, the chief left it to family heads to enforce rules about property or marriage but applied severe punishment when offenses were “sacred,” typically those against his interests (Claessen, 2003). In the despotic nineteenth-century African kingdom of Buganda, sacred crimes, especially those against the ruler, were met with punishments of maiming, cutting into pieces, and burning alive (Claessen, 2003). Political rulers in China’s hierarchical societies before the Common Era (Bozan et al., 2001) also relied on stern rules and punishment, especially during times of civil unrest. The death penalty applied to many offenses against the emperor (e.g., treason, rebellion) and the state (e.g., malpractice, bribery, illicit sales). In the early Roman Republic, laws were under the tight control of the ruling class and were even kept secret from the lower class. When the laws were finally publicized in the Twelve Tables (c. 450 BCE), they contained detailed civil statutes but also prescribed capital punishment for those who defamed others in song (VIII.1b) – to avert any chance of inciting the proletariat’s sentiments against the ruling elite. Over the centuries, wars and uprisings caused tyrannical rulers to take the reins of the Roman Republic, and punishments became excessively cruel, even public spectacles, to deter any threat to the elites (Moore, 2001).

War is a driver of severe punishment. It brutalizes relations between communities, and it brutalizes relations within them. When people are exposed to war, violence is part of life, and aggressive behavior becomes disinhibited (MacManus et al., 2012). In wartime, rulers and state governments also more easily justify violent punishment within the state, especially for crimes of treason or cowardice. For example, before and during World War II, Nazi Germany took capital punishment to incomparable extremes, with a huge increase in death warrants and executions (Messerschmidt, 2005). The death penalty was applied to a wide range of criminal acts, various forms of military disobedience, and many forms of political resistance.

Despite its brutality, punishment in many early societies did not succeed in deterring crime, a failure that incited ever harsher punishments. We see this cycle in modern times as well. For example, in the United States’ “war on


crime” of the 1980s (Pratt et al., 2005), punishment trends hardened in response to (perceived) rising crime rates but failed to loosen when crime rates declined. To this day, increased institutional punitiveness does not result in crime reduction (Cullen et al., 2011). We also see detrimental escalation processes in today’s school environment (Darling-Hammond et al., 2023). As teachers increasingly punish students for even minor misbehavior – often because they regard the students as “troublemakers” (Okonofua & Eberhardt, 2015) – students become defiant and misbehave even more (Pesta, 2022), which cyclically escalates punishment. This escalation occurs primarily for minoritized students (Amemiya et al., 2020). Thus, both past and present reveal that punishment is a questionable method of moral regulation; it often expresses power, social hierarchies, and their dominance-maintaining mechanisms (Sidanius & Pratto, 1999).

15.3 Psychology of Punishment and Blame

I now shift from a cultural history of blame and punishment to the contemporary psychology of these phenomena. I begin with an outline of their broad differences and then devote more detailed attention to our current understanding of each form of sanctioning.

15.3.1 Distinguishing Punishment and Blame

Everyday moral sanctions range from raised eyebrows to muttered disapproval, from emotional complaints to thoughtful criticism; but there are also acts of physical aggression, withholding of resources, public shaming, and social exclusion. The first set is more representative of moral criticism (blame, for short) whereas the second set is more representative of genuine punishment. Even though blame and punishment are not categorically distinct, a bundle of features allows us to distinguish them. Blame has a dual nature as a particular kind of moral judgment and as communicated moral criticism; punishment is always an observable act. When expressed, blame often requests or demands that the violator respond to the criticism and change their (future) behavior; punishment coercively imposes costs on the transgressor, often relying on power or status to legitimize the imposition of costs. Blame is typically reversible – one can admit that one unfairly blamed the other and take back the criticism – whereas the tangible costs of punishment can rarely be taken back. Finally, blame can be contested, leaving room for justification and negotiation; punishment leaves little room for negotiation and is often the last resort when moral criticism has failed. (For a different conception of punishment, incorporating the features of blame, see Chapter 16, this volume.)

In light of these distinguishing properties, we would expect that, in informal, everyday interactions, humans prefer to use moral criticism over punishment. Indeed, using daily surveys to study naturally occurring norm violations,



    .   

Molho et al. (2020) found that people were generally unmotivated to engage in punishment as physical confrontation and somewhat more motivated to gossip or exclude the transgressor. Field studies show that people enforce norms with subtle verbal or nonverbal interventions but not active punishment (e.g., Przepiorka & Berger, 2016). In a cross-cultural study, rates of responding to violations with cost-imposing sanctions (e.g., yelling, insulting, hitting) were extremely low, between 1 percent and 6 percent (Pedersen et al., 2020). Even though a number of behavioral economics experiments have suggested that people are willing to punish norm violators, this form of punishment is mild and indirect (taking a little money from a stranger). Moreover, when given a choice, people prefer not to punish but to criticize the violators to their face or to others (Feinberg et al., 2012; Kriss et al., 2016; Xiao & Houser, 2005), to compensate the victim (Chavez & Bicchieri, 2013; Hershcovis & Bhatnagar, 2017), or even to restore order by correcting the negative outcome (e.g., cleaning up other people’s litter; Berger & Hevenstone, 2016).

As we move into more hierarchical relationships – parent and child, teacher and student, boss and employee – punishment becomes more likely, more consequential, and is often seen as more legitimate (Gershoff & Lee, 2020; Mooijman & Graham, 2018). Embedded in such hierarchies, “[p]unishment is not a behavior, but an institution” (Binder, 2002, p. 321). Indeed, punishment is the predominant sanctioning instrument used by the state (Tonry, 2009).

15.3.2 Understanding Punishment

If interpersonal punishment is relatively infrequent in daily life, why is it so widespread in its institutionalized form? The brief cultural history of punishment has taught us that punishment has been used to maintain hierarchies, defend the powerful, and remove the unwanted. The severity of punishment has declined over time, but to this day, punishment is an instrument of power, oppression, and the maintenance of dominance hierarchies (Redford & Ratliff, 2018; Sidanius et al., 2006). And not only is punishment discriminatory, it is also ineffectual as an instrument of deterrence (Cullen et al., 2011). If punishment tends to fail in so many ways, what maintains the public support for institutional punishment?

15.3.3 Forces that Maintain Support for Punishment

Retribution. A first hypothesis is that people are fundamentally retributivists – desiring proportional and painful punishment of norm violators, even when it has no deterrent function (Carlsmith et al., 2002; Goodwin & Gromet, 2014). But rather than explaining common support for institutional punishment, people’s retributive tendencies may themselves be explained by their exposure to institutions of punishment. Research on lay retributivism always probes whether people recommend or endorse retributive punishment for crimes, such as assault, rape, or murder. For those violations, people may endorse severe


punishment because that, they have learned, is the proper response. However, people are not merely reflecting back the severity of punishment in legal reality; they are often manipulated by political rhetoric, more so from the right, claiming that crime is rampant and that society needs to be tougher on crime (Muhammad et al., 2015; Tonry, 2009).

However acquired, what is it that human retributive tendencies really support? The current literature often defines retributive commitments as referring to “deserved punishment of a guilty offender” (Goodwin & Gromet, 2014, p. 562). People are retributivists, then, if they endorse a statement such as “by punishing offenders we give them what they deserve” (Nadelhoffer et al., 2013, p. 244). But in what sense is deserved punishment “retributive” while a deserved reward is not (Cottingham, 1979, p. 239)? In both cases, the response is considered appropriate or justified, given what the person actually did. This notion of desert stands in contrast to the extreme consequentialist position that what the transgressor did (or thought while doing it) is irrelevant – that the only thing that matters for punishment is its beneficial consequence (e.g., deterrence of crime, calming public outrage). Rejecting this position, people endorse retribution in the sense of paying back (Cottingham, 1979), which is what the original twelfth-century English word referred to. Such retribution was a symmetric practice of moral accounting – both rewards and punishments were forms of paying back. If retribution is the mirror image of the human disposition toward reciprocity for good deeds (Gouldner, 1960), then being retributive need not imply cruel enjoyment of a wrongdoer’s suffering. Instead, a scale is being balanced between the transgressor’s behavior and the community’s norms; this scale, of course, is what the goddess Justitia holds in her hand to symbolize justice. Whether the repayment is cruel or civil – mutilations and executions, or monetary fines, apology, and atonement – depends on the historically and geographically specific currency in which moral debts are repaid. Thus, even if ordinary people’s support for retributive punishment is grounded in a general principle of reciprocity, it has undoubtedly been nourished by long-standing patterns of political and religious influence (Griffith, 2020).

Delegation. A second contributor to support for institutional punishment stems from the dangers of personal punishment. Punishment is typically an act of aggression and inflicts damage on the perpetrator, who may therefore retaliate. A perpetrator who has committed a serious transgression has already shown little commitment to the community’s norms; when punished, the person is likely to avenge the inflicted damage. In hunter-gatherer societies, personal interventions were more pressing and more likely in response to extreme violations (Kelly, 2000, p. 27). However, they came with significant risk. Lee (1979) reported that almost half of the !Kung community’s fatalities over several decades were bystanders or “peacemakers,” who died in the attempt to end a conflict. Retaliation against norm-enforcing punishment is even more likely to be committed by people outside one’s group because the perpetrators have no relationship with the norm enforcer or the community and thus no reason to



    .   

show remorse, correct their behavior, and protect their reputation. Indeed, in human history, intergroup conflicts have often escalated from individual revenge to family- or group-based revenge, spiraling into blood feuds (Boehm, 2011).

In all these cases, the costs of directly punishing a perpetrator are high, but the risks of retaliation can be reduced or even eliminated by handing the task of punishment to specific agents or institutions (Cushman, 2015). Such delegation of punishment protects ordinary people and allows them to safely condemn violators and support the punishment meted out by the authorities – even excessively harsh punishment. It is far easier to cheer on the torturer or vote for “tough on crime” policies than it is to break the prisoner’s arm oneself or stand guard at a solitary confinement cell. Delegated punishment, however, requires justification, which a ruler or state has in general and can award to particular officials, such as judges, prison guards, or executioners. Institutionalized punishment guarantees legitimacy without having to establish it anew each time (Binder, 2002; Garland, 1990). As long as the institution is seen as reasonably consistent and fair, the community will support it and grant it permission to punish some of its norm-violating members.

15.3.4 Does Punishment Foster Cooperation?

Leaving aside the institutionalized context, punishment in small groups can foster cooperation by decreasing the number of free riders and increasing the contributions individuals make to the group (Fehr & Gächter, 2000; Yamagishi, 1986). Punishment works best when it articulates community norms (Andrighetto et al., 2013; Xiao, 2018), teaches the violator (Cushman, 2015), brings about a response from the violator (Funk et al., 2014), and when it is gentle and deemphasizes power differences (Kochanska & Aksan, 2006).

But punishment can go awry. When cooperators are punished, costly punishment no longer promotes cooperation (Rand et al., 2010), and the option to punish can lead to retaliation (Hopfensitz & Reuben, 2009). When social communities view punishment as unwarranted, it is less effective at correcting social behavior (Herrmann et al., 2008). Furthermore, impressions of third-party punishers are less than favorable (Dhaliwal et al., 2021), and even victims who punish are not seen very positively; instead, victims who forego punishment are seen as moral, trustworthy, and altruistic (Heffner & FeldmanHall, 2019).

Punishment may not even be necessary for cooperation (Baumard, 2010). Punishment’s apparent power to increase cooperation in mixed-motive games is matched (Feinberg et al., 2014) or surpassed (Wu et al., 2016) by that of gossip, and it is also matched by communicated disapproval (Masclet et al., 2003). When we compare overall payoffs in these game contexts, both gossip and disapproval lead to better collective outcomes than does punishment (Dugar, 2010; Wu et al., 2016).
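Many of the laboratory findings cited in this section come from public goods games with a punishment stage, in the tradition of Fehr and Gächter (2000). The following Python sketch is only a rough illustration of that general setup – it is not the code of any published study, and the multiplier, endowment, and fee-to-fine ratio are invented for illustration – but it shows how contributions, the multiplied common pool, and costly fines combine into payoffs.

# Minimal sketch of a public goods game with costly punishment
# (illustrative parameter values only; not a specific study's procedure).

def public_goods_round(contributions, multiplier=1.6, endowment=20):
    """Payoffs before punishment: keep what you did not contribute,
    plus an equal share of the multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

def apply_punishment(payoffs, fines, cost_ratio=1/3):
    """Each fine reduces the target's payoff; the punisher pays a
    fraction (cost_ratio) of every fine they impose."""
    payoffs = list(payoffs)
    for punisher, targets in fines.items():
        for target, fine in targets.items():
            payoffs[target] -= fine                  # target loses the fine
            payoffs[punisher] -= fine * cost_ratio   # punishing is costly
    return payoffs

# Example: player 3 free rides; player 0 pays to fine the free rider.
payoffs = public_goods_round([20, 20, 20, 0])
payoffs = apply_punishment(payoffs, fines={0: {3: 9}})
print([round(p, 1) for p in payoffs])   # [21.0, 24.0, 24.0, 35.0]

Even in this toy version, the free rider out-earns the full contributors, and punishment reduces both the target’s and the punisher’s payoffs – consistent with the observation that costly punishment can sustain contributions while lowering overall group payoffs relative to cheaper sanctions such as gossip or expressed disapproval.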


15.3.5 Punishment Judgments

Research has examined punishment not only as a behavior of imposing costs on another person but also as ordinary people’s recommendations for the legal punishment a transgressor should suffer. Researchers have studied the kinds of information people are sensitive to and the various judgment biases they may fall prey to. On the information side, there is consistent evidence that people assign different degrees of punishment (e.g., length of prison term or monetary fines) as a function of violation severity, intentionality, foreseeability, and justification (Kneer & Machery, 2019; Robinson & Darley, 1995), largely consistent with what the law prescribes. In addition, however, studies document that punishment recommendations are susceptible to factors that should arguably not influence legal judgments – for example, the defendant’s ideology (Sood & Darley, 2012), physical or social attractiveness (e.g., Nemeth & Sosis, 1973), stereotypically Black appearance (Eberhardt et al., 2006), or the juror’s prejudice (Gamblin et al., 2021). But while several studies do confirm biased recommendations for degrees of punishment, numerous other studies failed to find similar biases in people’s verdicts – that is, judgments of whether a person is guilty or not (Block, 1991; Stewart, 1985).

Why might people be more calibrated in their judgments of guilt than in their punishment recommendations? First, punishment and sentencing recommendations are inherently more discretionary, comparative, and subjective than are guilty verdicts (Tata, 1997), creating more room for errors and biases. Second, at least in the United States, people don’t normally make punishment recommendations – as jurors, their role is to come to a verdict, whereas judges decide on punishment. Thus, research that measures legal punishment recommendations may ask people for unusual, unfamiliar judgments, which may be more susceptible to biasing information. Third, previous studies often treated past transgressions, character, and motive as “biasing” factors when in fact they often are legally admissible for sentencing, if not for guilty verdicts (Guglielmo, 2015); so by and large, people’s judgments are consistent with the law.

In no way does this imply that verdicts by research participants or real jurors are generally unbiased. One of the most enduring biases in the United States criminal justice system is racial discrimination, which exerts its impact through police work, prosecution decisions, pretrial detention, guilty pleas, and testimony, all the way to verdicts and sentencing (Kansal, 2005; Mitchell et al., 2005; Sutton, 2013). A growing body of research also documents similar biases toward other marginalized groups (e.g., Mirabito & Lecci, 2021). Such discrimination replays the hierarchical dominance patterns that have tilted punishment systems since early human settlement.

15.3.6 Understanding Blame

We have seen that people rarely mete out punishment in everyday life, and punishment recommendations occur more often in research studies than in the courtroom. By contrast, blame judgments occur frequently and naturally in



    .   

everyday life. Interestingly, however, blame does not have the best of reputations in psychological research. Acts of blaming are often portrayed as distorted and dysfunctional. Some philosophical work, by contrast, has characterized blaming as more constructive moral criticism that calls out unacceptable behavior, affirms norms and values, and demands justice (Bell, 2012; Ciurria, 2019). Research on “complaints” in sociology and pragmatics further shows the important function that criticism plays in regulating interpersonal behavior and the detailed work that goes into staging and warranting this kind of criticism (Morris, 1988). Finally, recent research highlights the sophisticated information processing that seems to underlie blame judgments (Cushman, 2008; Monroe & Malle, 2019), at least when the community enforces standards of evidence (Malle et al., 2022). I now take a closer look at these opposing facets of blame, dividing the review into cognitive blame (judgments in the head) and social blame (acts of moral criticism).

15.3.6.1 Blame as Judgment

An Information Processing Model of Blame. A substantial literature shows that blame assigned to a person varies with a number of factors: the specific norm that was violated, the causal contributions the person makes, whether the person acted intentionally, what mental states or reasons the person had and how justified they were, and whether the person could have and should have prevented the norm violation (Cushman, 2008; Guglielmo & Malle, 2017; Monroe & Malle, 2019; Nadler & McDonnell, 2012). My colleagues and I have aimed to integrate these findings in the Path Model of Blame (Malle et al., 2014), depicted in Figure 15.1. The model (a) specifies the various pieces of information that people normally process en route to blame; (b) identifies a systematic order of processing – such that, for example, a question of intentionality comes up only when a causal contribution has been perceived (Guglielmo & Malle, 2017); and (c) predicts patterns of blame updating when new information enters the processing stream (Monroe & Malle, 2019). A unique feature of the model is that intentionality does not merely influence blame judgments, as in other models (Alicke, 2000; Cushman, 2008); it bifurcates the processing into distinct search paths (Monroe & Malle, 2017): either for the transgressor’s reasons, which motivated and may justify an intentional violation, or for evidence that the transgressor could have prevented an unintentional violation. The Path Model provides an orienting framework that allows us to examine a number of important questions about blame: to what extent it is motivationally biased, whether it is “intuitive,” and how it relates to emotions.


Figure 15.1 The Path Model of Blame, adapted from Figure 2 of Malle et al. (2014). People form blame judgments by processing the depicted information components, typically in the four phases indicated. The fourth phase differs depending on whether the event is considered intentional (4i) or unintentional (4u).
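To make the ordering in Figure 15.1 concrete, the following Python sketch walks through the Path Model’s processing sequence. It is only an illustration of the model’s structure as described above: the event representation, field names, and numeric blame values are invented for this example, and the model itself specifies information components and their order of processing, not an algorithm or a numeric scale.

def path_model_blame(event):
    # Phase 1: detect a norm-violating event; without one, blame does not arise.
    if not event.get("norm_violation"):
        return 0
    # Phase 2: assess whether the agent made a causal contribution to the event.
    if not event.get("agent_caused"):
        return 0
    # Phase 3: intentionality bifurcates processing into two search paths.
    if event.get("intentional"):
        # Phase 4i: search for the agent's reasons; justified reasons mitigate blame.
        return 2 if event.get("justified_reasons") else 10
    # Phase 4u: ask whether the agent was obligated and able to prevent the event.
    obligated = event.get("obligation_to_prevent", False)
    capable = event.get("capacity_to_prevent", False)
    return 7 if (obligated and capable) else 1

# Example: an unintentional violation the agent should and could have prevented.
print(path_model_blame({
    "norm_violation": True, "agent_caused": True, "intentional": False,
    "obligation_to_prevent": True, "capacity_to_prevent": True,
}))  # prints 7 on this illustrative scale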

How Much Does Blame Suffer from Motivated Bias? All human information processing, moral judgment included, is imperfect: people have attentional lapses, forget information, or fail to integrate information under time pressure. Of greater concern in the moral psychology literature have been questions of motivated bias (Alicke, 2000; Ditto, 2009; Nadler & McDonnell, 2012). Motivated moral judgment is usually described as a “desire to blame” (Alicke, 2000; Ames & Fiske, 2013), followed by information processing that strives to confirm the initial blame. However, when appropriate control conditions are implemented, people appear equally responsive to exacerbating information and to mitigating information (e.g., Monroe & Malle, 2019; Nadler & McDonnell, 2012, Experiment 2). Even without a general desire to blame, people might still show confirmation bias. Yet when people make an initial blame judgment and later revise it, no confirmation bias emerges (Monroe & Malle, 2019). Such a bias typically requires a more powerful source, such as the blamer’s relationship with the transgressor (Forbes & Stellar, 2022), the blamer’s ideology (Niemi & Young, 2016), or salient “extra-evidential” information (Alicke, 2000) – that is, information that one should not take into account when forming blame judgments.

The normative standards that determine what is extra-evidential, however, have been debated. Some scholars have proposed that, even for everyday moral judgments, the standards should be set by “philosophers, legal theorists and psychologists” (Alicke, 2008, p. 179). Unfortunately, these experts do not necessarily agree on the pertinent normative standards, and such standards vary across cultures and historical times. So perhaps it is best to sidestep disputes over the normative standards themselves and ask, descriptively, which



    .   

factors people take into account when forming blame judgments and how those factors influence blame. I will call these factors “exogenous” because they go beyond the canonical “endogenous” information components of blame, as assembled in the Path Model of Blame. The current body of research appears consistent with the interpretation that, even when making blame judgments that some have called biased, people follow the information processing path depicted in Figure 15.1, though exogenous factors can alter steps in this path (Guglielmo, 2015). I briefly review research on two groups of exogenous factors.

As a first group, norms and values provide the standard against which people evaluate the initial violation (component 1 in Figure 15.1), which shapes the degree of downstream blame. In Sood and Darley (2012, Study 3), participants disapproved of a man who went to the supermarket in the nude, and they did so even more when he handed out flyers promoting a position on abortion opposite to their own. Participants’ blame (and recommended punishment) was greater presumably because they saw more or stronger norms violated. Likewise, people seem to apply different norms to intimate others than to strangers, finding strangers’ transgressions less acceptable than intimates’ (Weiss & Burgmer, 2021). Norms also influence what people accept as justifications for a given intentional norm violation (component 4i in Figure 15.1). For example, people in the Southern and the Northern United States tend to differ in how justified they find violent acts in service of defending one’s honor after an insult (Cohen & Nisbett, 1994); hence these groups will differ considerably in how much they blame such violent acts. Finally, people’s norms guide the obligations they impose on others to prevent certain unintentional violations. For example, people with hierarchy-legitimizing ideologies believe that someone with implicit racial bias is not obligated to prevent their own discriminatory acts that follow from this bias; as a result of this normative belief about prevention, they assign less blame to a person who unintentionally committed such discrimination (Redford & Ratliff, 2016). Conversely, in liberal communities, people are obligated to avert microaggressions (Princing, 2019), whereas others criticize this obligation as “ridiculous” (Dickey, 2019). In all these cases, blame varies considerably across groups of people, not because of motivated bias but because they interpret the same events in light of different norms.

Beliefs and attitudes toward the transgressor constitute the second group of exogenous factors. Several studies have shown that judgments of causality or preventability can be affected by negative character information (Alicke & Zell, 2009) or the perceiver’s attitude (Niemi & Young, 2016). Moreover, character, past behavior, and stereotypes can affect people’s inferences of a transgressor’s intentionality or reasons for acting (Sood, 2019). So while there is good evidence that exogenous beliefs and attitudes can insert themselves into the blame processing path, some authors have proposed that these exogenous factors directly influence blame (Alicke & Zell, 2009; Ciurria, 2019, chapter 4), effectively bypassing the endogenous blame processing. The current evidence,


however, does not provide convincing support for such direct effects (Guglielmo, 2015). Exogenous and endogenous factors would have to be properly compared in one and the same study, but they are rarely jointly measured (Dukes & Gaither, 2017; Nadler & McDonnell, 2012); or, if they are measured, not all are included in model tests (e.g., Alicke & Zell, 2009; Nadler, 2012); or, if included, they do not yield the predicted direct effects (Mazzocco et al., 2004).

There is no doubt that people sometimes blame unfairly, to protect their self-esteem, enact their prejudice, or protect their group. Extant evidence, however, suggests that even in these cases people rely on the same information components – causality, intentionality, reasons, and so on – on which they rely when blaming fairly.

Is Blame Intuitive? When scholars characterize moral judgments as intuitions, they rarely specify the type of moral judgment they are referring to. Some describe moral intuitions as feelings of “approval or disapproval” (Haidt, 2001, p. 818) or as some action being “bad” (Clark & Winegard, 2019, p. 14), which we may call evaluations (Malle, 2021). To be moral evaluations, these intuitions rely on the relevant moral norms being activated in the observer, who then judges the relevant behavior as violating the norm. Evaluations incorporate information about simple causality (De Freitas & Alvarez, 2018) and potentially about the intentionality of at least visibly performed behaviors (Decety & Cacioppo, 2012). By contrast, blame judgments also incorporate information about the intentionality of unobserved actions, the agent’s specific, potentially justifying reasons, and the complex assessment of preventability obligations and capacities (Monroe & Malle, 2019) – the kind of processing that typically does not fall under “intuitions.”

Suggestions that moral judgments are intuitions typically go hand in hand with the claim that people are “dumbfounded” when trying to justify their moral judgments (Haidt & Bjorklund, 2008, p. 197). This claim faces a number of challenges. First, researchers routinely ask participants for simple wrongness judgments, which rely primarily on citing the relevant violated norm (Malle, 2021). Proponents of the dumbfounding hypothesis treat a norm statement (e.g., “because it’s incest”) as a dumbfounding response, whereas others insist that a norm statement constitutes an actual justifying reason (Stanley et al., 2019). Second, proponents of the dumbfounding hypothesis instruct their experimenters to explicitly “undermine whatever reason the participant put forth in support of his/her judgment or action” (Haidt et al., 2000, p. 7), which arguably biases results in favor of the hypothesis (Gray et al., 2014; Royzman et al., 2015). Despite these favorable conditions, surprisingly few people give dumbfounding responses (e.g., 32 percent in McHugh et al., 2017; see Malle, 2021, p. 309); and in less coerced situations, dumbfounding drops below 20 percent (McHugh et al., 2020). These weak dumbfounding rates can hardly support the claim that moral judgments are by nature “intuitive.”

Is there any dumbfounding evidence for blame judgments? Quite the contrary. Bucciarelli et al. (2008, Study 3) showed that people had no trouble explicating their blame judgments in a think-aloud protocol, and Voiklis et al.



    .   

(2016) documented that people provide rich and systematic explanations of their blame judgments. Importantly, these explanations referred to just the types of canonical information known to cause variations in blame judgments (e.g., the seriousness of the norm violation, intentionality, justified reasons).

How Does Blame Relate to Emotion? Determining the role of emotion in blame requires heeding distinctions between affect, evaluations, and emotions. Affect is often understood as a nonrepresentational valenced feeling state (Neumann et al., 2001); evaluations are rapid valenced appraisals of some object (“this is bad”); and emotions are a large class of states (e.g., anger, resentment, or disgust) that are differentiated by appraisals – the cognitive processing of rich arrays of information (Scherer, 2013). Further, we must separate the different roles that these three phenomena can play in blame: They could fully constitute blame judgments, cause them, accompany (and possibly amplify) them, or be caused by them (Monin et al., 2007; Strohminger, 2017). Affect cannot constitute or cause blame because it lacks an object, but it can accompany blame. Evaluations of norm-violating events lie at the beginning of any processing stream toward blame, so they make a causal contribution; but they do not process much of the information that blame judgments are normally based on and are therefore insufficient to fully cause, let alone constitute, blame. Emotions such as anger rely on a number of appraisals (Ellsworth & Scherer, 2003) of just the kind of information that blame judgments respond to (e.g., causality, intentionality), so the two may co-emerge from this information processing. However, one can arrive at blame judgments by processing the relevant information without feeling angry, whereas it would be difficult to become angry before processing this kind of information. Emotions thus do not seem to constitute moral judgments but may accompany and amplify them.

Researchers have attempted to induce anger to test whether it indeed amplifies blame or related moral judgments. Although some studies showed such impact (Ask & Pina, 2011; Lerner et al., 1998; Seidel & Prinz, 2013), others did not (Gamez-Djokic & Molden, 2016; Gawronski et al., 2018). Conversely, judgments of blame and responsibility can alter emotions or mediate between norm violations and emotions (Quigley & Tedeschi, 1996; Zajenkowska et al., 2021). Studies also found expressions of anger and moral outrage to arise after judgments of moral violations (Sasse et al., 2020), and response times for expressing anger appear to be no faster, or even slower, than response times for blame judgments (Cusimano et al., 2017).

We can conclude that when people encounter a norm violation, they are likely to morally evaluate it, they may or may not experience feelings along with it, and they routinely process a canonical set of information. This information processing guides blame judgments and may generate emotions, but which specific emotions arise (e.g., anger, resentment, indignation) will depend on the specific processed information. These emotions then help scale the intensity of a socially communicated blame judgment (Drew, 1998). And this brings us to the phenomenon of blame as communicated moral criticism.


15.3.6.2 Blame as Moral Criticism

Social acts of blame, reproach, and rebuke serve several goals. They draw attention to a transgression (McGeer, 2012), signal the blamer’s commitment to a norm system (Shoemaker & Vargas, 2021) and try to enforce it (Bell, 2012); they try to change the transgressor’s ongoing or future behavior (Miller, 2003; Przepiorka & Berger, 2016), but they also simply express moral judgments and moral emotions (Sorial, 2016).

Acts of blaming are costly, however, for all involved parties: the transgressor, the moral critic, and the community (Malle et al., 2022). The transgressor suffers a loss of public standing, damaged relationships, and just plain bad feelings. The moral critic faces potential retaliation from the transgressor (Balafoutas & Nikiforakis, 2012), damaged relationships, and possibly lack of support from the community. And the community itself carries the costs of escalating community strife (Allen, 2002). To keep these costs in check, most social communities impose norms on moral criticism (Coates & Tognazzini, 2012; Eriksson et al., 2017; Voiklis & Malle, 2018), which regulate the standards of evidence and discourse of blame (Friedman, 2013; Malle et al., 2022). When moral critics comply with these norms, they are also more likely to achieve the goals of their criticism.

Complying with norms of blaming means that moral critics will process the available evidence and assign blame proportionally to the observed transgression. If they ignore evidence, they are bound to blame out of proportion – they will either overblame or underblame. Overblaming is itself a norm violation and will provoke rejection or retaliation from the transgressor and may cause long-term damage to a relationship (Fincham et al., 1987). Underblaming may downplay the violated norm and thus fail to change the offender’s behavior. However, mild, subtle forms of criticism for everyday transgressions (e.g., blatant littering; an able-bodied person using a disability parking spot) can be effective ways of changing behavior. In most social communities, even a cold stare or a snide remark may successfully communicate moral criticism (Miller, 2003; Molho et al., 2020). The critic thus affirms the offender as a respectable member of the moral community who is nonetheless worth criticizing (Bennett, 2002).

The transgressor can of course influence the success of moral criticism. Blame demands a response (Drew, 1998; McGeer, 2012; Shoemaker, 2012), and transgressors have many options – on the one hand, they can deny, dismiss, or retaliate (Dersley & Wootton, 2000); on the other hand, they can explain, apologize, and offer repair (Walker, 2006; Watanabe & Laurent, 2020). Without such reconciling responses, the regulation of social relationships is bound to fail (Laforest, 2002).

The community, finally, upholds the norms of blaming by demanding warrant from those who blame, especially when they launch premature, biased, or inaccurate criticism (Voiklis & Malle, 2018). By demanding such warrant, the community protects its members from unfair criticism. But it can also go too far in putting pressure on moral critics – for example, by suppressing moral criticism of those in power (Ciurria, 2019, chapter 9).



    .   

Forces That Alter Costs and Success Conditions. The costs of moral criticism vary with a number of factors that have implications for the norms and success conditions of the criticism (Malle et al., 2022). For example, costs vary with the addressee of the criticism. Second-person blame (expressed directly to the transgressor) comes with the risk of retaliation and fails more easily when overblaming occurs, but when communicated appropriately it can be immediately effective in regulating behavior. Third-person blame (expressed to other parties) is safer and effective in affirming the community’s norm system, but any change in the transgressor’s behavior can be achieved only indirectly and with likely delay.

Another influential variable that alters costs is the in-group or out-group status of the transgressor (Malle et al., 2022). When blaming in-group members, moral critics face pressure to heed the community norms that guard against disproportionate blaming and strife. But when the transgressor is outside the community, these norms lose their force. Critics may get away with sloppy information processing and unfair accusations, because the costs have been shifted almost entirely from critic and community to the transgressor. In the punishment literature, when an out-group member transgresses against an in-group member, sanctions are greatest – from mild monetary penalties (Bernhard et al., 2006) to death by lynching (Equal Justice Initiative, 2017) or state execution (Eberhardt et al., 2006). And initial results suggest that blame, too, is less regulated when applied to out-group members (Monroe & Malle, 2019).

A recently emerging and powerful factor that alters costs is online discourse, especially on social media, which can deliver public shaming or social exclusion in just a few words (Klonick, 2015). Online moral critics bear fewer costs (Crockett, 2017), most obviously when they are anonymous. Even when identifiable, they launch their blame from the safe distance of a keyboard, which restricts the other’s retaliation and poses few risks to valued relationships; after all, those who are blamed, reviled, and excluded are typically strangers and out-group members. Online blaming also imposes fewer costs on the community because that community is indeterminate. Social bonds that normally tie community members to each other are far weaker online, so people are less committed to norms (including norms of blaming) and put fewer constraints on derogations (Márquez-Reiter & Haugh, 2019).

The looser restrictions on online moral criticism, however, can empower critics who have previously been silenced. The MeToo movement, for example, enabled acts of calling out celebrities or companies that transgressed and of documenting microaggressions and outright discrimination. Ideological camps differ in their reception of such amplified criticism – which some call “cancel culture” (Republican National Committee, 2020). Criticism of hurtful microaggressions is seen as a warranted rebuke in one community but as a ridiculous demand of political correctness in another community (Dickey, 2019). Such polarized reception stems from community differences in the norms for the original actions (e.g., political correctness, microaggression) as well as in the norms that govern criticism itself – including who is granted credibility and standing to


criticize and how. Ultimately, online communities will have to calibrate what counts as a proportionate response to violations. We see the difficulty of such calibration in the following case: a critic used Twitter to blame two offenders for using the word dongle in what was interpreted as a sexually offensive way; after the ensuing publicity, one of the offenders lost his job, and the critic herself was subsequently vilified and threatened by numerous people and fired from her own job for the act of public blaming (Brown, 2013).

15.4 Conclusions

I have tried to make plausible the hypothesis that the distinct cultural histories of blame and punishment have shaped their distinct psychological underpinnings and social expressions. Today’s blame arose long ago as the primary sanctioning behavior in small, tight-knit, egalitarian communities, where moral criticism is often mild but effective, built on the power of reputation and the need to belong. Because of the costs of damaged relationships and the potential for retaliation, such blame is regulated by a set of norms that demand evidence and fair treatment – at least for members of one’s own community. Moral critics are expected to communicate their disapproval and to invite a response from the transgressor, with the goal of repairing and continuing the relationship and averting a repeat transgression. Such blame can be contested and mitigated by the transgressor’s response, and the critic may even admit a mistake and take back the criticism. This ideal can be missed in numerous ways, social media being a salient contemporary case.

Punishment arose in larger, more distant, hierarchical communities. Acts of punishment would normally themselves be serious norm violations, were it not for the legitimacy granted to, or taken by, the punisher. Frequently from high up in the hierarchy, the punisher – a person or an institution – coercively imposes costs on the transgressor that cannot be taken back. Some forms of punishment step in where moral criticism has failed, but punishment still often stops communication rather than reinstating it, or damages relationships rather than repairing them.

One major shortcoming of current institutional punishment is its ineffectiveness in deterring crime and reforming individuals who committed crimes (Yukhnenko et al., 2020). As one alternative to the standard punitive system, restorative justice is an attempt to reconcile perpetrator and crime victim through a mediated conversation in which all thoughts and feelings are expressed, restitution is often agreed on, and the matter is laid to rest (Braithwaite, 1999). Evidence shows that recidivism decreases considerably when restorative justice replaces legal punishment (Kennedy et al., 2019; Kuo et al., 2010). Restorative justice procedures are noticeably close to ancient and modern processes of moral criticism and reconciliation. A hope for the future is therefore that institutions more widely adopt constructive moral criticism as the powerful form of norm enforcement that it once was.



    .   

References

Alicke, M. D. (2000). Culpable control and the psychology of blame. Psychological Bulletin, 126(4), 556–574.
Alicke, M. D. (2008). Blaming badly. Journal of Cognition and Culture, 8(1–2), 179–186.
Alicke, M. D., & Zell, E. (2009). Social attractiveness and blame. Journal of Applied Social Psychology, 39(9), 2089–2105.
Allen, D. S. (2002). The world of Prometheus: The politics of punishing in democratic Athens. Princeton University Press.
Amemiya, J., Mortenson, E., & Wang, M.-T. (2020). Minor infractions are not minor: School infractions for minor misconduct may increase adolescents’ defiant behavior and contribute to racial disparities in school discipline. American Psychologist, 75(1), 23–36.
Ames, D. L., & Fiske, S. T. (2013). Intentional harms are worse, even when they’re not. Psychological Science, 24(9), 1755–1762.
Andrighetto, G., Brandts, J., Conte, R., Sabater-Mir, J., Solaz, H., & Villatoro, D. (2013). Punish and voice: Punishment enhances cooperation when combined with norm-signalling. PLoS ONE, 8(6), Article e64941.
Ask, K., & Pina, A. (2011). On being angry and punitive: How anger alters perception of criminal intent. Social Psychological and Personality Science, 2(5), 494–499.
Atkinson, Q. D., Gray, R. D., & Drummond, A. J. (2008). MtDNA variation predicts population size in humans and reveals a major Southern Asian chapter in human prehistory. Molecular Biology and Evolution, 25(2), 468–474.
Aurenche, O., Kozłowski, J. K., & Kozłowski, S. K. (2013). To be or not to be ... Neolithic: “Failed attempts” at Neolithization in Central and Eastern Europe and in the Near East, and their final success (35,000–7000 BP). Paléorient, 39(2), 5–45.
Balafoutas, L., & Nikiforakis, N. (2012). Norm enforcement in the city: A natural field experiment. European Economic Review, 56(8), 1773–1785.
Bandy, M. S. (2004). Fissioning, scalar stress, and social evolution in early village societies. American Anthropologist, 106(2), 322–333.
Baumard, N. (2010). Has punishment played a role in the evolution of cooperation? A critical review. Mind & Society, 9(2), 171–192.
Baumard, N. (2011). Punishment is not a group adaptation. Mind & Society, 10(1), 1–26.
Bell, M. (2012). The standing to blame: A critique. In D. J. Coates & N. A. Tognazzini (Eds.), Blame: Its nature and norms (pp. 263–281). Oxford University Press.
Bennett, C. (2002). The varieties of retributive experience. The Philosophical Quarterly, 52(207), 145–163.
Berger, J., & Hevenstone, D. (2016). Norm enforcement in the city revisited: An international field experiment of altruistic punishment, norm maintenance, and broken windows. Rationality and Society, 28(3), 299–319.
Bernhard, H., Fischbacher, U., & Fehr, E. (2006). Parochial altruism in humans. Nature, 442(7105), Article 7105.
Binder, G. (2002). Punishment theory: Moral or political? Buffalo Criminal Law Review, 5(2), 321–372.
Block, B. P. (1991). Just convictions or just convictions? Issues in Criminological & Legal Psychology, 1(17), 20–24.


Boehm, C. (1999). Hierarchy in the forest: The evolution of egalitarian behavior. Harvard University Press.
Boehm, C. (2000). Conflict and the evolution of social control. Journal of Consciousness Studies, 7(1–2), 79–101.
Boehm, C. (2011). Retaliatory violence in human prehistory. The British Journal of Criminology, 51(3), 518–534.
Boserup, E. (1965). The conditions of agricultural growth: The economics of agrarian change under population pressure. Aldine.
Bozan, J., Shao, H., & Hu, H. (2001). A concise history of China. University Press of the Pacific.
Braithwaite, J. (1999). Restorative justice: Assessing optimistic and pessimistic accounts. Crime and Justice, 25, 1–127.
Brown, E. (2013, March 25). Is Adria Richards a bully, or was she bullied by the internet? ZDNet. https://www.zdnet.com/article/is-adria-richards-a-bully-or-was-she-bullied-by-the-internet/
Bucciarelli, M., Khemlani, S., & Johnson-Laird, P. N. (2008). The psychology of moral reasoning. Judgment and Decision Making, 3(2), 121–139.
Buckholtz, J. W., Martin, J. W., Treadway, M. T., Jan, K., Zald, D. H., Jones, O., & Marois, R. (2015). From blame to punishment: Disrupting prefrontal cortex activity reveals norm enforcement mechanisms. Neuron, 87(6), 1369–1380.
Buikstra, J. E., Konigsberg, L. W., & Bullington, J. (1986). Fertility and the development of agriculture in the prehistoric Midwest. American Antiquity, 51(3), 528–546.
Carlsmith, K. M., Darley, J. M., & Robinson, P. H. (2002). Why do we punish?: Deterrence and just deserts as motives for punishment. Journal of Personality and Social Psychology, 83(2), 284–299.
Carneiro, R. L. (1970). A theory of the origin of the state. Science, 169(3947), 733–738.
Chavez, A. K., & Bicchieri, C. (2013). Third-party sanctioning and compensation behavior: Findings from the ultimatum game. Journal of Economic Psychology, 39, 268–277.
Ciurria, M. (2019). An intersectional feminist theory of moral responsibility. Routledge.
Claessen, H. J. M. (2003). Aspects of law and order in Early State societies. In F. J. M. Feldbrugge (Ed.), The law’s beginnings (pp. 161–179). Martinus Nijhoff Publishers.
Clark, C. J., & Winegard, B. M. (2019). Optimism in unconscious, intuitive morality. Behavioral and Brain Sciences, 42, Article e150.
Coates, D. J., & Tognazzini, N. A. (2012). The contours of blame. In D. J. Coates & N. A. Tognazzini (Eds.), Blame: Its nature and norms (pp. 3–26). Oxford University Press.
Cohen, D., & Nisbett, R. E. (1994). Self-protection and the culture of honor: Explaining Southern violence. Personality and Social Psychology Bulletin, 20(5), 551–567.
Cottingham, J. (1979). Varieties of retribution. The Philosophical Quarterly, 29(116), 238–246.
Crockett, M. J. (2017). Moral outrage in the digital age. Nature Human Behaviour, 1(11), 769–771.
Cusimano, C., Thapa, S., & Malle, B. F. (2017). Judgment before emotion: People access moral evaluations faster than affective states. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. J. Davelaar (Eds.), Proceedings of the 39th Annual



    .   

Conference of the Cognitive Science Society (pp. 1848–1853). Cognitive Science Society.
Cullen, F. T., Jonson, C. L., & Nagin, D. S. (2011). Prisons do not reduce recidivism: The high cost of ignoring science. The Prison Journal, 91(3, Suppl), 48S–65S.
Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, 108(2), 353–380.
Cushman, F. (2015). Punishment in humans: From intuitions to institutions. Philosophy Compass, 10(2), 117–133.
Darling-Hammond, S., Ruiz, M., Eberhardt, J. L., & Okonofua, J. A. (2023). The dynamic nature of student discipline and discipline disparities. Proceedings of the National Academy of Sciences, 120(17), 1–10.
Decety, J., & Cacioppo, S. (2012). The speed of morality: A high-density electrical neuroimaging study. Journal of Neurophysiology, 108(11), 3068–3072.
De Freitas, J., & Alvarez, G. A. (2018). Your visual system provides all the information you need to make moral judgments about generic visual events. Cognition, 178, 133–146.
Dersley, I., & Wootton, A. (2000). Complaint sequences within antagonistic argument. Research on Language and Social Interaction, 33(4), 375–406.
Dhaliwal, N. A., Patil, I., & Cushman, F. (2021). Reputational and cooperative benefits of third-party compensation. Organizational Behavior and Human Decision Processes, 164, 27–51.
Dickey, R. (2019, May 28). It’s not that people deny the existence (although some do) it’s just that it’s ridiculous to expect people to keep track of them [Comment]. Quora.com. https://qr.ae/pGBil5
Ditto, P. H. (2009). Passion, reason, and necessity: A quantity-of-processing view of motivated reasoning. In T. Bayne & J. Fernández (Eds.), Delusion and self-deception: Affective and motivational influences on belief formation (pp. 23–53). Psychology Press.
Drew, P. (1998). Complaints about transgressions and misconduct. Research on Language & Social Interaction, 31(3–4), 295–325.
Dubreuil, B. (2010). Human evolution and the origins of hierarchies: The state of nature. Cambridge University Press.
Dugar, S. (2010). Nonmonetary sanctions and rewards in an experimental coordination game. Journal of Economic Behavior & Organization, 73(3), 377–386.
Dukes, K. N., & Gaither, S. E. (2017). Black racial stereotypes and victim blaming: Implications for media coverage and criminal proceedings in cases of police violence against racial and ethnic minorities. Journal of Social Issues, 73(4), 789–807.
Dumond, D. E. (1972). Population growth and political centralization. In B. Spooner (Ed.), Population growth: Anthropological implications (pp. 286–310). MIT Press.
Dunbar, R. I. M. (1996). Grooming, gossip, and the evolution of language. Harvard University Press.
Eberhardt, J. L., Davies, P. G., Purdie-Vaughns, V. J., & Johnson, S. L. (2006). Looking deathworthy: Perceived stereotypicality of black defendants predicts capital-sentencing outcomes. Psychological Science, 17(5), 383–386.
Ellsworth, P. C., & Scherer, K. R. (2003). Appraisal processes in emotion. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 572–595). Oxford University Press.


Enloe, J. G. (2003). Food sharing past and present: Archaeological evidence for economic and social interactions. Before Farming: The Archaeology and Anthropology of Hunter-Gatherers, 2003(1), 1–23.
Equal Justice Initiative. (2017). Lynching in America: Confronting the legacy of racial terror. https://eji.org/wp-content/uploads/2019/10/lynching-in-america-3d-ed080219.pdf
Eriksson, K., Andersson, P. A., & Strimling, P. (2017). When is it appropriate to reprimand a norm violation? The roles of anger, behavioral consequences, violation severity, and social distance. Judgment and Decision Making, 12(4), 396–407.
Farrington, K. (1996). Dark justice: A history of punishment and torture. Smithmark.
Fehr, E., & Gächter, S. (2000). Cooperation and punishment in public goods experiments. The American Economic Review, 90(4), 980–994.
Feinberg, M., Cheng, J. T., & Willer, R. (2012). Gossip as an effective and low-cost form of punishment. Behavioral and Brain Sciences, 35(1), 25–25.
Feinberg, M., Willer, R., & Schultz, M. (2014). Gossip and ostracism promote cooperation in groups. Psychological Science, 25(3), 656–664.
Fincham, F. D., Beach, S., & Nelson, G. (1987). Attribution processes in distressed and nondistressed couples: III. Causal and responsibility attributions for spouse behavior. Cognitive Therapy and Research, 11(1), 71–86.
Flannery, K. V., & Marcus, J. (2012). The creation of inequality: How our prehistoric ancestors set the stage for monarchy, slavery, and empire. Harvard University Press.
Forbes, R. C., & Stellar, J. E. (2022). When the ones we love misbehave: Exploring moral processes within intimate bonds. Journal of Personality and Social Psychology, 122(1), 16–33.
Friedman, M. (2013). How to blame people responsibly. Journal of Value Inquiry, 47(3), 271–284.
Funk, F., McGeer, V., & Gollwitzer, M. (2014). Get the message: Punishment is satisfying if the transgressor responds to its communicative intent. Personality and Social Psychology Bulletin, 40(8), 986–997.
Gamblin, B. W., Kehn, A., Vanderzanden, K., Ruthig, J. C., Jones, K. M., & Long, B. L. (2021). A comparison of juror decision making in race-based and sexual orientation–based hate crime cases. Journal of Interpersonal Violence, 36(7–8), 3231–3256.
Gamez-Djokic, M., & Molden, D. (2016). Beyond affective influences on deontological moral judgment: The role of motivations for prevention in the moral condemnation of harm. Personality and Social Psychology Bulletin, 42(11), 1522–1537.
Garland, D. (1990). Punishment and modern society: A study in social theory. University of Chicago Press.
Gawronski, B., Conway, P., Armstrong, J., Friesdorf, R., & Hütter, M. (2018). Effects of incidental emotions on moral dilemma judgments: An analysis using the CNI model. Emotion, 18(7), 989–1008.
Gershoff, E. T., & Lee, S. J. (Eds.). (2020). Ending the physical punishment of children: A guide for clinicians and practitioners. American Psychological Association.
Gignoux, C. R., Henn, B. M., & Mountain, J. L. (2011). Rapid, global demographic expansions after the origins of agriculture. Proceedings of the National Academy of Sciences, 108(15), 6044–6049.



    .   

Goodwin, G. P., & Gromet, D. M. (2014). Punishment. Cognitive Science, 5(5), 561–572.
Gouldner, A. W. (1960). The norm of reciprocity: A preliminary statement. American Sociological Review, 25(2), 161–178.
Gray, K., Schein, C., & Ward, A. F. (2014). The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology: General, 143(4), 1600–1615.
Griffith, A. (2020). God’s law and order: The politics of punishment in evangelical America. Harvard University Press.
Guala, F. (2012). Reciprocity: Weak or strong? What punishment experiments do (and do not) demonstrate. Behavioral and Brain Sciences, 35(1), 1–15.
Guglielmo, S. (2015). Moral judgment as information processing: An integrative review. Frontiers in Psychology, 6, Article 1637.
Guglielmo, S., & Malle, B. F. (2017). Information-acquisition processes in moral judgments of blame. Personality and Social Psychology Bulletin, 43(7), 957–971.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.
Haidt, J., & Bjorklund, F. (2008). Social intuitionists answer six questions about moral psychology. In Moral psychology: Vol. 2. The cognitive science of morality: Intuition and diversity (pp. 181–217). MIT Press.
Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason [Unpublished manuscript]. University of Virginia. https://pdfs.semanticscholar.org/d415/e7fa2c2df922dac194441516a509ba5eb7ec.pdf
Heffner, J., & FeldmanHall, O. (2019). Why we don’t always punish: Preferences for non-punitive responses to moral violations. Scientific Reports, 9(1), Article 13219.
Henn, B. M., Gignoux, C. R., Jobin, M., Granka, J. M., Macpherson, J. M., Kidd, J. M., Rodríguez-Botigué, L., Ramachandran, S., Hon, L., Brisbin, A., Lin, A. A., Underhill, P. A., Comas, D., Kidd, K. K., Norman, P. J., Parham, P., Bustamante, C. D., Mountain, J. L., & Feldman, M. W. (2011). Hunter-gatherer genomic diversity suggests a southern African origin for modern humans. Proceedings of the National Academy of Sciences, 108(13), 5154–5162.
Herrmann, B., Thöni, C., & Gächter, S. (2008). Antisocial punishment across societies. Science, 319(5868), 1362–1367.
Hershcovis, M. S., & Bhatnagar, N. (2017). When fellow customers behave badly: Witness reactions to employee mistreatment by customers. Journal of Applied Psychology, 102(11), 1528–1544.
Hopfensitz, A., & Reuben, E. (2009). The importance of emotions for the effectiveness of social punishment. The Economic Journal, 119(540), 1534–1559.
Kansal, T. (2005). Racial disparity in sentencing: A review of the literature. The Sentencing Project. https://www.opensocietyfoundations.org/publications/racial-disparity-sentencing
Kelly, R. C. (2000). Warless societies and the origin of war. University of Michigan Press.
Kennedy, J. L. D., Tuliao, A. P., Flower, K. N., Tibbs, J. J., & McChargue, D. E. (2019). Long-term effectiveness of a brief restorative justice intervention. International Journal of Offender Therapy and Comparative Criminology, 63(1), 3–17.

Klonick, K. (2015). Re-shaming the debate: Social norms, shame, and regulation in an internet age. Maryland Law Review, 75(4), 1029–1065.
Knauft, B. M. (1994). Culture and cooperation in human evolution. In L. Sponsel & T. Gregor (Eds.), The anthropology of peace and nonviolence (pp. 37–67). Lynne Rienner.
Kneer, M., & Machery, E. (2019). No luck for moral luck. Cognition, 182, 331–348.
Kochanska, G., & Aksan, N. (2006). Children’s conscience and self-regulation. Journal of Personality, 74(6), 1587–1618.
Kriss, P. H., Weber, R. A., & Xiao, E. (2016). Turning a blind eye, but not the other cheek: On the robustness of costly punishment. Journal of Economic Behavior & Organization, 128, 159–177.
Kuo, S.-Y., Longmire, D., & Cuvelier, S. J. (2010). An empirical assessment of the process of restorative justice. Journal of Criminal Justice, 38(3), 318–328.
Laforest, M. (2002). Scenes of family life: Complaining in everyday conversation. Journal of Pragmatics, 34(10–11), 1595–1620.
Larsen, C. S. (1995). Biological changes in human populations with agriculture. Annual Review of Anthropology, 24(1), 185–213.
Lee, R. B. (1972). Population growth and the beginnings of sedentary life among the !Kung bushmen. In B. Spooner (Ed.), Population growth: Anthropological implications (pp. 329–342). MIT Press.
Lee, R. B. (1979). The !Kung San: Men, women, and work in a foraging society. Cambridge University Press.
Lee, R. B., & Daly, R. H. (1999). The Cambridge encyclopedia of hunters and gatherers. Cambridge University Press.
Lerner, J. S., Goldberg, J. H., & Tetlock, P. E. (1998). Sober second thought: The effects of accountability, anger, and authoritarianism on attributions of responsibility. Personality and Social Psychology Bulletin, 24(6), 563–574.
MacManus, D., Dean, K., Al Bakir, M., Iversen, A. C., Hull, L., Fahy, T., Wessely, S., & Fear, N. T. (2012). Violent behaviour in UK military personnel returning home after deployment. Psychological Medicine, 42(8), 1663–1673.
Malle, B. F. (2021). Moral judgments. Annual Review of Psychology, 72, 293–318.
Malle, B. F., Guglielmo, S., & Monroe, A. E. (2014). A theory of blame. Psychological Inquiry, 25(2), 147–186.
Malle, B. F., Guglielmo, S., Voiklis, J., & Monroe, A. E. (2022). Cognitive blame is socially shaped. Current Directions in Psychological Science, 31(2), 169–176.
Márquez-Reiter, R., & Haugh, M. (2019). Denunciation, blame and the moral turn in public life. Discourse, Context & Media, 28, 35–43.
Masclet, D., Noussair, C., Tucker, S., & Villeval, M.-C. (2003). Monetary and nonmonetary punishment in the voluntary contributions mechanism. American Economic Review, 93(1), 366–381.
Mazzocco, P. J., Alicke, M. D., & Davis, T. L. (2004). On the robustness of outcome bias: No constraint by prior culpability. Basic and Applied Social Psychology, 26(2–3), 131–146.
McGeer, V. (2012). Civilizing blame. In D. J. Coates & N. A. Tognazzini (Eds.), Blame: Its nature and norms (pp. 162–188). Oxford University Press.
McHugh, C., McGann, M., Igou, E. R., & Kinsella, E. L. (2017). Searching for moral dumbfounding: Identifying measurable indicators of moral dumbfounding. Collabra: Psychology, 3(1, Art. 23), 1–24.

McHugh, C., McGann, M., Igou, E. R., & Kinsella, E. L. (2020). Reasons or rationalizations: The role of principles in the moral dumbfounding paradigm. Journal of Behavioral Decision Making, 33(3), 376–392.
Messerschmidt, M. (2005). Die Wehrmachtjustiz 1933–1945. Schöningh.
Miller, G. P. (2003). Norm enforcement in the public sphere: The case of handicapped parking. George Washington Law Review, 71, 895–933.
Milner, G. R., Anderson, E., & Smith, V. G. (1991). Warfare in Late Prehistoric West-Central Illinois. American Antiquity, 56(4), 581–603.
Mirabito, L. A., & Lecci, L. (2021). The impact of anti-gay bias on verdicts and sentencing with gay defendants. Journal of Gay & Lesbian Social Services: The Quarterly Journal of Community & Clinical Practice, 33(1), 32–55.
Mitchell, T. L., Haw, R. M., Pfeifer, J. E., & Meissner, C. A. (2005). Racial bias in mock juror decision-making: A meta-analytic review of defendant treatment. Law and Human Behavior, 29(6), 621–637.
Molho, C., Tybur, J. M., Van Lange, P. A. M., & Balliet, D. (2020). Direct and indirect punishment of norm violations in daily life. Nature Communications, 11(1), Article 1.
Monin, B., Pizarro, D. A., & Beer, J. S. (2007). Deciding versus reacting: Conceptions of moral judgment and the reason-affect debate. Review of General Psychology, 11(2), 99–111.
Monroe, A. E., & Malle, B. F. (2017). Two paths to blame: Intentionality directs moral information processing along two distinct tracks. Journal of Experimental Psychology: General, 146(1), 123–133.
Monroe, A. E., & Malle, B. F. (2019). People systematically update moral judgments of blame. Journal of Personality and Social Psychology, 116(2), 215–236.
Mooijman, M., & Graham, J. (2018). Unjust punishment in organizations. Research in Organizational Behavior, 38, 95–106.
Moore, B. (2001). Cruel and unusual punishment in the Roman Empire and dynastic China. International Journal of Politics, Culture, and Society, 14(4), 729–772.
Morris, G. H. (1988). Finding fault. Journal of Language and Social Psychology, 7(1), 1–25.
Muhammad, K. G., Gottschalk, M., & Thompson, H. A. (2015). The underlying causes of rising incarceration: Crime, politics, and social change. In J. Travis, B. Western, & S. Redburn (Eds.), The growth of incarceration in the United States: Exploring causes and consequences (pp. 104–129). National Academies Press.
Nadelhoffer, T., Heshmati, S., Kaplan, D., & Nichols, S. (2013). Folk retributivism and the communication confound. Economics and Philosophy, 29(2), 235–261.
Nadler, J. (2012). Blaming as a social process: The influence of character and moral emotion on blame. Law and Contemporary Problems, 75(2), 1–31.
Nadler, J., & McDonnell, M.-H. (2012). Moral character, motive, and the psychology of blame. Cornell Law Review, 97, 255–304.
Nemeth, C., & Sosis, R. H. (1973). A simulated jury study: Characteristics of the defendant and the jurors. The Journal of Social Psychology, 90(2), 221–229.
Neumann, R., Seibt, B., & Strack, F. (2001). The influence of mood on the intensity of emotional responses: Disentangling feeling and knowing. Cognition and Emotion, 15(6), 725–747.

Niemi, L., & Young, L. (2016). When and why we see victims as responsible: The impact of ideology on attitudes toward victims. Personality and Social Psychology Bulletin, 42(9), 1227–1242.
Okonofua, J. A., & Eberhardt, J. L. (2015). Two strikes: Race and the disciplining of young students. Psychological Science, 26(5), 617–624.
Pedersen, E. J., McAuliffe, W. H. B., Shah, Y., Tanaka, H., Ohtsubo, Y., & McCullough, M. E. (2020). When and why do third parties punish outside of the lab? A cross-cultural recall study. Social Psychological and Personality Science, 11(6), 846–853.
Peregrine, P. N., Ember, C. R., & Ember, M. (2007). Modeling state origins using cross-cultural data. Cross-Cultural Research, 41(1), 75–86.
Pesta, R. (2022). School punishment, deterrence, and race: A partial test of defiance theory. Crime & Delinquency, 68(3), 463–494.
Pinker, S. (2011). The better angels of our nature: Why violence has declined. Viking.
Pratt, J., Brown, D., Brown, M., Wallsworth, S., & Morrison, W. (Eds.). (2005). The new punitiveness: Trends, theories, perspectives. Willan.
Princing, M. (2019, September 3). What microaggressions are and how to prevent them. Right as Rain by UW Medicine. https://rightasrain.uwmedicine.org/life/relationships/microaggressions
Przepiorka, W., & Berger, J. (2016). The sanctioning dilemma: A quasi-experiment on social norm enforcement in the train. European Sociological Review, 32(3), 439–451.
Quigley, B. M., & Tedeschi, J. T. (1996). Mediating effects of blame attributions on feelings of anger. Personality and Social Psychology Bulletin, 22(12), 1280–1288.
Rand, D. G., Armao IV, J. J., Nakamaru, M., & Ohtsuki, H. (2010). Anti-social punishment can prevent the co-evolution of punishment and cooperation. Journal of Theoretical Biology, 265(4), 624–632.
Redford, L., & Ratliff, K. A. (2016). Hierarchy-legitimizing ideologies reduce behavioral obligations and blame for implicit attitudes and resulting discrimination. Social Justice Research, 29(2), 159–185.
Redford, L., & Ratliff, K. A. (2018). Pride and punishment: Entitled people’s self-promoting values motivate hierarchy-restoring retribution. European Journal of Social Psychology, 48(3), 303–319.
Redman, C. L. (1978). The rise of civilization: From early farmers to urban society in the ancient Near East. W. H. Freeman.
Republican National Committee. (2020). Resolution upholding the first amendment to the constitution of the United States of America in the response to the coronavirus pandemic and the cancel culture movement. https://bit.ly/4gpQLVs
Robinson, P. H., & Darley, J. M. (1995). Justice, liability, and blame: Community views and the criminal law. Westview Press.
Royzman, E. B., Kim, K., & Leeman, R. F. (2015). The curious tale of Julie and Mark: Unraveling the moral dumbfounding effect. Judgment and Decision Making, 10(4), 296–313.
Sasse, J., Halmburger, A., & Baumert, A. (2020). The functions of anger in moral courage—Insights from a behavioral study. Emotion, 22(6), 1321–1335.
Scherer, K. R. (2013). The nature and dynamics of relevance and valence appraisals: Theoretical advances and recent evidence. Emotion Review, 5(2), 150–162.

Seidel, A., & Prinz, J. (2013). Mad and glad: Musically induced emotions have divergent impact on morals. Motivation and Emotion, 37(3), 629–637.
Service, E. R. (1966). The hunters. Prentice-Hall.
Shoemaker, D. (2012). Blame and punishment. In D. J. Coates & N. A. Tognazzini (Eds.), Blame: Its nature and norms (pp. 100–118). Oxford University Press.
Shoemaker, D., & Vargas, M. (2021). Moral torch fishing: A signaling theory of blame. Noûs, 55(3), 581–602.
Sidanius, J., Mitchell, M., Haley, H., & Navarrete, C. D. (2006). Support for harsh criminal sanctions and criminal justice beliefs: A social dominance perspective. Social Justice Research, 19(4), 433–449.
Sidanius, J., & Pratto, F. (1999). Social dominance: An intergroup theory of social hierarchy and oppression. Cambridge University Press.
Silberbauer, G. (1982). Political process in G/wi bands. In E. Leacock & R. Lee (Eds.), Politics and history in band societies (pp. 23–36). Cambridge University Press.
Sood, A. M. (2019). Attempted justice: Misunderstanding and bias in psychological constructions of criminal attempt. Stanford Law Review, 71(3), 593–686.
Sood, A. M., & Darley, J. M. (2012). The plasticity of harm in the service of criminalization goals. California Law Review, 100(5), 1313–1358.
Sorial, S. (2016). Performing anger to signal injustice: The expression of anger in victim impact statements. In C. Abell & J. Smith (Eds.), The expression of emotion: Philosophical, psychological and legal perspectives (pp. 287–310). Cambridge University Press.
Stanley, M. L., Yin, S., & Sinnott-Armstrong, W. (2019). A reason-based explanation for moral dumbfounding. Judgment and Decision Making, 14(2), 120–129.
Stewart, J. E. (1985). Appearance and punishment: The attraction-leniency effect in the courtroom. The Journal of Social Psychology, 125(3), 373–378.
Strohminger, N. (2017). Four unwarranted assumptions about the role of emotion in moral judgment. In C. Price & E. Walle (Eds.), Emotion Researcher: ISRE’s sourcebook for research on emotion and affect. http://emotionresearcher.com/four-unwarranted-assumptions-about-the-role-of-emotion-in-moral-judgment/
Sutton, J. R. (2013). Structural bias in the sentencing of felony defendants. Social Science Research, 42(5), 1207–1221.
Tata, C. (1997). Conceptions and representations of the sentencing decision process. Journal of Law and Society, 24(3), 395–420.
Tonry, M. (2009). Explanations of American punishment policies: A national history. Punishment & Society, 11(3), 377–394.
Voiklis, J., Kim, B., Cusimano, C., & Malle, B. F. (2016). Moral judgments of human vs. robot agents. In Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 486–491). IEEE.
Voiklis, J., & Malle, B. F. (2018). Moral cognition and its basis in social cognition and social regulation. In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 108–120). Guilford Press.
Walker, M. U. (2006). Moral repair: Reconstructing moral relations after wrongdoing. Cambridge University Press.
Watanabe, S., & Laurent, S. M. (2021). Volition speaks louder than action: Offender atonement, forgivability, and victim valuation in the minds of perceivers. Personality and Social Psychology Bulletin, 47(6), 1020–1036.

Weiss, A., & Burgmer, P. (2021). Other-serving double standards: People show moral hypercrisy in close relationships. Journal of Social and Personal Relationships, 38(11), 3198–3218.
White, L. A. (2007). The evolution of culture: The development of civilization to the fall of Rome. Left Coast Press.
Wiessner, P. (2005). Norm enforcement among the Ju/’hoansi Bushmen. Human Nature, 16(2), 115–145.
Wilson, P. J. (1988). The domestication of the human species. Yale University Press.
Woodburn, J. (1982). Egalitarian societies. Man, 17(3), 431–451.
Wu, J., Balliet, D., & Van Lange, P. A. M. (2016). Gossip versus punishment: The efficiency of reputation to promote and maintain cooperation. Scientific Reports, 6, Article 23919.
Xiao, E. (2018). Punishment, social norms, and cooperation. In J. C. Teitelbaum & K. Zeiler (Eds.), Research handbook on behavioral law and economics (pp. 155–173). Edward Elgar Publishing.
Xiao, E., & Houser, D. (2005). Emotion expression in human punishment behavior. Proceedings of the National Academy of Sciences of the United States of America, 102(20), 7398–7401.
Yamagishi, T. (1986). The provision of a sanctioning system as a public good. Journal of Personality and Social Psychology, 51(1), 110–116.
Yukhnenko, D., Sridhar, S., & Fazel, S. (2020). A systematic review of criminal recidivism rates worldwide: 3-year update. Wellcome Open Research, 4.
Zajenkowska, A., Prusik, M., Jasielska, D., & Szulawski, M. (2021). Hostile attribution bias among offenders and non-offenders: Making social information processing more adequate. Journal of Community & Applied Social Psychology, 31(2), 241–256.


16 Moral Communication Friederike Funk and Victoria McGeer

The title of this chapter contains two substantial and contestable terms. Each highlights a central feature of human psychology and behavior: first, that we understand and operate in accord with a set of norms that are distinctively moralized; and, second, that we communicate with one another – using language, of course, but also nonlinguistically. But how do these features of human psychology and behavior interact? Our aim in this chapter is to explore whether there is anything distinctive about moral communication over and above the fact that such communication is specifically focused on moralized norms and behavior. The chapter is divided into two parts. In Section 16.1, we sketch the wider background to our topic, focusing in Section 16.1.1 on people’s proclivity for engaging in norm-governed behavior of various different kinds, and in Section 16.1.2 on the unique features of human communication that exploit and reinforce people’s capacity for norm-governed behavior. In Section 16.1.3, we consider what distinguishes moralized norms from other kinds of norms, both conventional and nonconventional (e.g., norms of rationality). We suggest that a satisfying answer to this question must emphasize the distinctive set of blaming emotions or “reactive attitudes” that people experience toward transgressors of moralized norms, inclining them to punish (rather than simply correct) transgressors (Strawson, 1962). We conclude the first part of the chapter with the suggestion that punishment in this context is best understood as a specialized form of moral communication, with various potential messages directed toward transgressors themselves but also toward others in the community. In Section 16.2, we explore the advantages of regarding human punishment as a communicative reaction to violated moralized norms. This perspective is not mainstream in psychological research on punishment mechanisms, which is more traditionally focused on the motivations of retribution and deterrence. In Section 16.2.1, we review the history of this research, explaining why many theorists are drawn to the view that the psychology of punishment is deeply retributive with moral payback as its raison d’être. Against this conclusion, we argue in Sections 16.2.2 and 16.2.3 that empirical findings are more deeply supportive of the communicative view. In Section 16.2.4, we consider a range of empirical questions that the communicative view opens up. We conclude with some general theoretical reflections on the interdisciplinary advantages of regarding punishment as a distinctive and potentially modifiable form of moral communication.

16.1 Setting Moral Communication in Context: Psychological Building Blocks 16.1.1 A Norm-Governed Form of Life Human beings are distinctively normative creatures, investing in attitudes and behavior that are regulated, and so shaped, by a wide variety of socially developed and endorsed norms. By “norms,” we mean rules, principles, and standards (whether formal or informal) that govern what counts as appropriate or proper behavior in the various activities in which humans engage. This covers a lot of ground, much of it outside the moral domain. As many cognitive theorists are now beginning to stress, humans are a thoroughly enculturated species, developing many unique and sophisticated cognitive/behavioral capacities via the internalization of norm-governed social practices that are developed and passed down through successive generations. (For recent representative discussion, see Hutchins, 2014; McGeer, 2020; Sterelny, 2012; see also Chapter 4, this volume.) To give just a taste of such normative enculturation: we learn to speak, read, and write a public language; we learn to enumerate and calculate an endless variety of things using a standardized number system; we learn to regulate our beliefs and desires in accord with communally endorsed evidential, practical, and inferential “rules of reason”; we learn to organize our days in accord with time- and calendar-keeping practices; we learn to recognize and manipulate a broad range of task-specific tools; we learn to greet each other, stand at an appropriate social distance from one another, manage turn-taking with one another in a huge variety of contexts, and so on in staggering detail. In short, we learn to converse together, work together, play together, eat together, and in general navigate complex built environments in relatively smooth, expectable, and cooperative ways by internalizing a remarkable variety of explicit and implicit norms and meta-norms (i.e., norms for resolving normative disputes, or norms for setting new norms, etc.). Such norms may be shared across cultures or be deeply culturally inflected; they may be specific to particular tasks and/or types of agents occupying variously defined social roles; they may be formally codified or informally and implicitly adopted; they may be policed with rigor or with relative nonchalance, inviting greater plasticity and individual maneuverability; they may be central to our sense of social identity or relatively peripheral to that identity. But regardless of their variation along these different dimensions, norms are, for human beings, as ubiquitous and necessary as the air we breathe. We only become cognitively sophisticated, mutually recognizable and reliable agents, able to engage in complex and coordinated activities with one another, because of our unique capacity to generate and internalize the rich and intricate set of norm-governed practices that pervade our existence. This richly elaborated norm-governed form of life implicates a distinctive kind of psychology. Human beings are not only sensitive to the presence of
norms, acquiring them readily via environmental cues and feedback from others, they are motivated to comply with them and police transgressions against them in a variety of ways (e.g., by way of simply pointing out “that’s not how it’s done”). A key indication of this sensitivity to norms can be found in the special qualities of human imitation. For instance, unlike other social animals (including especially other primates), children from the age of 24 months begin to engage in a phenomenon called “over-imitation”: faithfully copying others’ behavior even when that behavior is instrumentally unnecessary for achieving a goal. For instance, children continue a learned routine of tapping on the top of a transparent box before pulling a lever that releases a treat, even though it is visually manifest that pulling the lever is sufficient for getting the treat (Horner & Whiten, 2005). While there is more flexibility in this early emerging behavior than originally supposed, research now shows that so-called over-imitation is significantly driven by imitators interpreting the modeled action sequence in explicitly normative terms: That is, imitators see the model as demonstrating the “right” or “proper” way to engage in the activity, regardless of the instrumental value of the actions undertaken (Hoehl et al., 2019; Keupp et al., 2013).1 Indeed, children seem biased toward a normative interpretation of others’ behavior (at least in many contexts, especially involving adult–child interactions), indicated not only by their own proclivity for over-imitation but also, more significantly, by the spontaneous protests and criticisms they make against those who engage in an activity without faithfully replicating what the model has done (“not like that!”) and by their own instructive modeling of the activity to others (Kenward, 2012). Somewhat older children (4–5 years of age) will take account of moral costs in their imitative behavior. For instance, in contexts that would normally elicit normatively driven “over-imitation,” they are reluctant to reproduce a model’s actions, and object to others who do, if such actions entail the destruction of a valuable object belonging to the experimenter (Keupp et al., 2016). These children show by their spontaneous protests and expression of moral concern (“she will be very sad if you do that”) that not all normative considerations are treated equally. This jibes with findings by Turiel and others that preschoolers are sensitive to a “moral/conventional” distinction, with this sensitivity emerging from about the age of 3 (Nucci, 2001; Smetana, 1993; Turiel, 1983). Since children begin to engage in normatively driven “over-imitation” somewhat earlier (by the age of 2), this may indicate a more general sense of “normativity” that gets refined with experience and/or the development of linguistic competence. Findings suggest that language comprehension is related to an earlier emergence of this distinction (Smetana & Braeges, 1990), possibly indicating that sensitivity to different kinds of norms is communicatively 1
bootstrapped. Other lines of research highlight the importance of communication in supporting children’s development of myriad cognitive, social, and emotion-regulation capacities (Király et al., 2013).

Footnote 1: Consequently, some question the aptness of the term “over-imitation,” since the apparently irrelevant action element “is not something over and above the conventional activity, but an inherent part of it” (Keupp et al., 2016, p. 92).

16.1.2 Communicative Interaction of a Distinctively Human Kind Facilitates Normative Enculturation Our readiness to detect norm-governed behavior in those around us, to engage in such behavior ourselves, and to police others when they fail to do likewise is a deep-seated feature of human psychology. But such normative proclivities are not an inflexible response to the world; rather, they constitute a predisposition or bias that can be triggered or inhibited by a variety of contextual cues, some of which may be discerned simply by observing the activities of others (Hoehl et al., 2019). However, people generally do not rely on subtle observational cues to discern whether others’ behavior is norm-governed or not. Real-world interactions are typically rife with communicative cues, both linguistic and nonlinguistic, that explicitly mark how we are supposed to interpret what others are doing. And children are sensitive to such cues even from the age of 14 months, understanding the pedagogic intent behind a model’s behavior and revealing in their imitative responses a readiness to learn just what the model communicatively intends that they should (Gergely & Király, 2019; Király et al., 2013). This turns our attention to the special qualities of human communication that make it particularly apt for facilitating normative enculturation. While the use of language itself is a uniquely human phenomenon, theorists have lately drawn attention to more fundamental features of human communication that distinguish it from the communicative practices of other animals. These are not obvious. For instance, in addition to vocalizing and displaying a range of relatively inflexible signaling behaviors in response to environmental cues, chimpanzees regularly gesture to one another to solicit nursing, food, or grooming in ways that are learned, flexible, and even idiosyncratic (Goodall, 1986). Such gestures are intentional acts of communication (Grice, 1969; Moore, 2016): They are directed toward particular others when, and only when, those others are attending and are appropriate targets of a particular solicitation; a given gesture may also be modified to suit the situation (Kaminski et al., 2004; Tomasello & Call, 2019; Yamamoto et al., 2012). Still, while these are sophisticated communicative acts that depend on certain mentalizing skills (for instance, monitoring attention and likely even understanding what others can and cannot see), chimpanzees and other great apes will only use gestures in a proto-imperative way: to get others to do specific concrete things as a way of realizing their own goals.2 They do not use gestures – in particular, the pointing gesture – in a proto-declarative way: to show others something or simply to share attention for the pleasure of it (Tomasello & Call, 2019). 2

Footnote 2: This is not to say great apes do not engage in cooperative behavior: As Yamamoto et al. (2012) emphasize, they can and do help each other accomplish their individual goals.

This makes for a puzzling difference in the communicative behavior of great apes in contrast with humans, even at the nonlinguistic gestural level. In human interactions, the pointing gesture is naturally used and generally understood despite serving a variety of functions – proto-imperative (requesting something), proto-declarative (showing something), proto-informative (e.g., indicating the location of a hidden object), and proto-interrogative (e.g., soliciting more information about an indicated object). Such multifunctional use and understanding emerges in children as young as 12–14 months (Southgate et al., 2007; Tomasello et al., 2007) – and this despite the fact that the (prelinguistic) mentalizing skills of children at this stage of development are, in many respects, not hugely more sophisticated than those of chimpanzees and bonobos. So why this difference? Drawing now on decades of comparative primate research, Tomasello and colleagues argue that a key determinant of this difference lies in one “small” cognitive and motivational factor that ramifies in powerful ways (Tomasello et al., 2005). Humans are distinctive insofar as their social activities are permeated by an irreducibly collective (shared, or “we”) intentionality (Gilbert, 1994; Rakoczy & Tomasello, 2007). When people interact with one another, they are not simply solo operators, aiming to engineer another’s behavior simply to forward their singular aims. Rather, humans often take themselves to be doing things together, naturally forming shared intentions to act in coordinated and mutually supportive ways in the context of joint activities: we are playing a game together; we are finding a hidden object together; we are looking at something together. And from here it seems a short step to structuring and understanding such activities in explicitly normative terms. Hence, the capacity for shared or collective intentionality, emerging early in development and potentially unique to the human species, undoubtedly plays a foundational role in explaining people’s distinctive proclivity to adopt norm-governed attitudes and behavior. The capacity for shared intentionality thus heralds a fundamental change in the nature of human communication. The transformation is from an activity that is essentially behavior-focused and unidirectional (seen, for instance, in great apes) to an activity that is essentially attitude-focused and dialogical. Whether people communicate in language or in a variety of nonlinguistic ways (using ostensive eye contact, emotional expressions, prosody, bodily gestures and/or other symbolic acts), they invariably regard one another as potential candidates for shared intentionality, presupposing and reinforcing the joint activities and/or shared frames of reference that give meaning to their particular communicative acts (see, e.g., Tomasello et al., 2005). Hence, in communicating with others, people are not just aiming to shape or regulate others’ behavioral responses as a way of realizing their individual goals. People are aiming to make others understand the point of their communicative behavior – whether it be asking for something, sharing attention, sharing pertinent information, committing to some course of action, and so on in wide variety. Hence, in communicating with others, people invariably solicit a suitable communicative response
from their interlocutors in turn – a reaction that makes sense in light of the shared understanding and collaborative support people aim to achieve. This is the sense in which human communication is essentially dialogical. In the context of shared norm-governed activities, we see a special instance of this insofar as people’s communicative efforts are naturally bent toward signifying, in both word and deed, that there are proper ways of behaving that they expect others to understand and internalize. This message may be conveyed ex ante by instruction and modeling or ex post by protest and correction. But either way, people’s aim is not simply to elicit a suitable behavior response from the other. Qua communicators, people are dialogically focused on soliciting an appropriate attitudinal response – namely, others’ appreciation and acceptance that “this is the way we do things around here.”

16.1.3 Moralized Norms and Communication Against this rich background of communicatively supported normative enculturation, we now pose the following two questions: What is distinctive about the norms people identify as “moral” – and/or attitudes and behavior governed by such norms? And is there anything distinctive about communication regarding such norms? As a preliminary matter, we note there is a persisting tendency within the moral-psychological literature to treat norms as essentially dividing into two types, “moral” and “conventional” (Nucci & Nucci, 1982; Smetana & Braeges, 1990; Turiel, 1983). But, as the foregoing discussion makes clear, people are variously invested in a rich variety of norms as a matter of social and practical identity, and many of these norms defy easy binary categorization: Either they have elements of both the “moral” and the “conventional” (such as norms of “fair play”), or they belong to a category of their own (such as “norms of rationality”). This naturally complicates the search for criteria by which moralized norms can generally be distinguished from the rest. But, as our discussion in this section will show, it also places new emphasis on the nature of our communicative interactions in tracking this distinction. In their quest to elucidate the distinctive nature of moralized norms, theorists have generally pursued two main lines of inquiry. The first considers the substantive content of the norms themselves. The second considers the psychological attitudes people take toward such norms, signaling a distinctive kind of normative commitment taken toward the content in question, whatever it may be. We discuss each of these approaches in turn, noting that a satisfying answer to the question may well appeal in some measure to both content and attitude. With regard to the substantive content of moralized norms, theorists generally agree that this is something of a moving target. In any given society, ordinary people may make an intuitive distinction between moral norms and other kinds of rules, principles, conventions, or standards; but there are invariably borderline areas of disagreement. This problem is only compounded across different cultures; some communities operate with a relatively expanded
conception of moral norms, others with a relatively contracted conception (e.g., as regards matters of food, clothing, or sexual preference). Yet even amid this variability, certain “themes” are discernible in the norms generally identified as moral. For instance, as Sripada and Stich (2006, p. 283) observe, most societies have “rules that prohibit killing, physical assault, and incest”; “rules promoting sharing, reciprocating, and helping”; “rules regulating sexual behavior among various members of society”; and “at least some rules that promote egalitarianism and social equality.” Is there any underlying unity discernible in this motley collection of moralized norms? This is a matter of ongoing debate. At one extreme, reductive monists aim to show that all moralized norms are grounded in a dominant substantive concern. Accounts differ as to the nature of this concern: For instance, some argue that moralized norms are all concerned with prohibiting “harm” (Gray et al., 2012); others, that they are concerned with prohibiting “unfairness/injustice” (Baumard et al., 2013; Piazza et al., 2019). But this has been a challenging position to defend, especially in a cross-cultural context (see for instance Berniunas ¯ et al., 2016). A less extreme form of monism discerns some unity among moralized norms at a higher level of generality – for example, they are specifically concerned with placing a check on selfish goals or interests where such interests may threaten those of the community (Fehr & Fischbacher, 2004). Undoubtedly this is true of many such norms; but, again, others seem to escape this characterization – at least in any direct sense. For instance, it is hard to see how moralized norms regarding treatment of the dead or eating certain kinds of food are specifically geared toward checking selfish goals or interests, though they may serve certain communal goals or interests (such as stabilizing a sense of group identity). Yet not all moralized norms are apt for serving communal goals or interests either, at least on a broad scale: For instance, many societies endorse “agent-relative” norms that permit (or even obligate) individuals to prioritize the needs of close relations over the community itself or other members within it, at least to some degree. In light of these difficulties, other theorists take an avowedly pluralist approach to the content of moralized norms, arguing in addition that there is significant cross-cultural variation in the kind of “disinterested” concerns individuals are required to prioritize in their attitudes and behavior. Such moralized norms might prioritize a range of concerns for particular others (families, friends, strangers in need. . .), a range of concerns for maintaining the community as a whole (including its structure and operations), and even a range of concerns that relate to respecting or implementing some transcendentally determined cosmic order (such as “God’s will” or “Dhamma”) (McGeer, 2008; Shweder et al., 1997). This pluralistic approach leads naturally to a different sort of question – namely, why human beings should have such a range of disinterested concerns, however much these require development or stabilization via communally endorsed norms. This is a challenging explanatory project in itself, perhaps explaining the continuing allure of monism for many theorists. It may also recommend taking a different approach to characterizing the distinctiveness of moral norms
altogether – one that focuses less on their content and more on the psychological attitudes people take toward them (Nucci, 2001; Skitka et al., 2021). In keeping with this second approach, there is a range of attitudes that theorists have identified as seemingly peculiar to moralized norms. Following Sripada and Stich (2006), we group these attitudes under three headings: motivation for compliance, the metaphysical status attributed to the norms themselves, and reaction to transgressions (cf. Skitka et al., 2021). To give Sripada and Stich’s summary characterization of the attitudes under each heading: 1) people find moralized norms “intrinsically motivating”; 2) they view them as having “independent normativity”; and 3) they react to their violation in a distinctively punitive way, characteristically involving a range of equally distinctive “reactive” emotions (condemnatory anger, resentment, indignation, guilt, shame, and remorse – depending, of course, on whether the transgressor is other or self) (Strawson, 1962). We discuss each of these features in turn.

To say that people find moralized norms (1) “intrinsically motivating” is to say they experience norm compliance as an end in itself. They do not comply with such norms simply for instrumental reasons – because of other benefits they may gain through norm compliance (e.g., reputational benefits, reciprocity benefits, realization of cooperation-dependent goals) – or, indeed, to avoid certain costs noncompliance may bring (e.g., retaliation, cooperation-dependent goal frustration). As Sripada and Stich (2006) emphasize, compliance is its own reward, other potential benefits notwithstanding. The problem is that this psychological feature does not seem distinctively related to moralized norms. As our earlier discussion of normative enculturation suggests, humans are heavily dependent on developing normatively shaped attitudes and behavior of a more general kind, enabling a huge and sophisticated array of distinctively human capacities (including, in particular, linguistic capacities, but many others as well). Such deep enculturation would surely be impossible without some psychological machinery innately geared toward norm detection and compliance, where people regulate their behavior, not just in imitation of what others do, but in light of their explicit pedagogical and corrective feedback and where discovering the “right way” to do something is rewarding in itself. This, we think, is amply demonstrated in humans’ species-specific proclivity for over-imitation, emerging at least as early, if not prior to, a sensitivity to specifically moralized norms.

Next we consider (2) “independent normativity,” the idea that people view moralized norms as specifying what they ought to do “independently of any legal or social institution or authority.” Such norms are viewed as having the metaphysical status of “objective” truths, implying generality of scope across different times and places (Kelly & Stich, 2007; Nucci & Nucci, 1982; Skitka et al., 2021; Smetana, 1993; Turiel, 1983).3 It has been suggested that people are intrinsically motivated to comply with moralized norms because they view them as having independent normativity.4 Be that as it may, we think these two phenomena are psychologically separable: People can be intrinsically motivated to comply with norms they do not regard as having independent normativity. Still, the question remains: Is independent normativity/objective validity really a distinctive feature of moralized norms? We think not. Rational norms – for instance, valid rules of inference – are generally held to have “independent normativity,” but it would be strange to classify them as moral norms (see Southwood, 2011 for a similar observation).

Finally, we come to the third psychological feature associated with moralized norms: the very strong disposition (3) to react to norm violations of this kind in a distinctively punitive way. Here there is some indication of a genuine difference in people’s attitudes to different sorts of norms. For while all norm violations attract notice, how individuals respond to such violations can vary widely depending on a range of factors, including how they are related to the transgressor, the nature of the norm (moral or nonmoral), the transgressor’s degree of normative understanding, the transgressor’s intention in violating the norm, the consequences of violating the norm, and so on. Such factors may influence whether any norm-enforcing action is called for at all (some norm violations simply call for understanding, or even appreciation, of the novelty so displayed). But even when the transgression calls for some norm-enforcing response, this too can take a variety of material and/or psychological forms, ranging from gently corrective to strongly sanctioning – what some might call “punitive.” However, this use of the term neglects an important dimension of norm-enforcement that is only incidentally related to the severity of sanction imposed – namely, what the sanction in question is meant to express.

We suggest that what makes the punitive reaction distinctive is a blaming attitude, or set of attitudes, that underlies it and that the punitive reaction is characteristically taken to express (resentment, indignation, personal offence, and condemnation) (Strawson, 1962). In the context of state-authorized sanctions, Feinberg (1965) argues that we make an intuitive distinction between “punishments” and “penalties” on this ground: Punishment is a legal sanction that expresses the state’s condemnation of both norm-violator and norm-violation, whereas “penalty” is a legal sanction that expresses no such thing; it is meant to operate on (potential) norm-violators as a purely instrumental deterrent.5 Compare, for instance, the way we might respond to someone who violates a rational norm versus someone who violates (what we take to be) a moral norm. Even supposing the transgressor is at fault in both cases (i.e., we take them to be normatively competent in the relevant domain), the faulty reasoner may attract (at worst) derision or contempt, whereas the faulty moral agent attracts some degree of reprobative blame – that is, blame that characteristically expresses resentment or indignation.

Why do these norm violations attract such different responses? We think a satisfying answer to this question must finally appeal to the substantive nature of the norms themselves: in particular, to the idea that moralized norms generally require us to give due weight to a range of disinterested, mainly other-regarding concerns versus our own idiosyncratic interests or desires (whether “selfish” or not) (Voiklis & Malle, 2017). To violate these norms is thus to prioritize our own interests and desires in a way that shows disregard or disrespect for a communally endorsed conception of the moral order. And though other kinds of norms are communally endorsed, violating them is not seen to be essentially self-prioritizing in the manner of a moral norm violation. In short, as moral philosophers are wont to emphasize, people view (normatively competent) violators of moral norms as expressing a distinctive attitude in their transgressive behavior – an attitude of disrespect, disregard, or even “ill will” toward others in the moral community, or to the community at large (Strawson, 1962). And it is this expressed attitude that calls for a distinctive response: one that communicates condemnation of the transgressor’s attitude, in addition to whatever (psychological or material) cost is imposed for the transgressive act itself.

But is the mere expression of condemnation sufficient to make punishment a genuinely communicative act? As we have argued, human communication is essentially dialogical in nature. Hence, if punitive responses to moral transgressions are genuinely communicative, they should be psychologically and/or normatively linked with an expectation of (perhaps even demand for) a suitable response from the target audience – for instance, from transgressors themselves, who as communicative targets of punishment are thereby called upon to regret and renounce their immoral acts and attitudes. A communicative view of punishment thus entails that punishment has a certain forward-looking drive or rationale, one that ultimately seeks the ratification of shared moral norms from its target audience. As we have noted, the target audience may – and perhaps should – include transgressors themselves; but as a communal or even state-authorized act, it is likely to include others as well, in whose name the punishment is implicitly or explicitly imposed (Feinberg, 1965).

A communicative view of punishment has been explored and defended in the philosophical literature, most notably by Anthony Duff (2001) (for a sample of views, see also Bennett, 2008; Braithwaite & Pettit, 1990; Feinberg, 1965; Lacey, 1988; von Hirsch, 1993).6 While this work focuses on seeking an adequate justification for the practice of punishment, our concern in the remainder of this chapter takes a more empirical direction. In what follows, we focus specifically on people’s actual motivations for punishment and how a communicative framework can forward psychological research on this topic.

Footnote 3: Various notions of “objectivity” may be in play here without undermining the sense of “independent normativity” at issue (for discussion, see Goodwin & Darley, 2008).

Footnote 4: Interestingly, Kelly and Stich (2007) argue for a causal connection in the other direction: People judge certain norms to have independent normativity because they find them intrinsically motivating. But if finding norms intrinsically motivating is a more general psychological phenomenon, as we suggest here, then this causal linkage would also be called into question.

Footnote 5: The term “punishment” is sometimes used to cover what is here meant by penalty – for instance, in the context of operant conditioning in which negative feedback activates an associative learning process in the target (e.g., animals) and thus shapes that target’s behavior. Our use of the term “punishment” here and throughout the chapter is intended to pick out a narrower class of sanctions – specifically, those that carry normative weight in virtue of the communicative content they are meant to convey. In relation to this, we note that nonhuman animals “punish” much less than humans, possibly because their punitive acts do not represent intentional communicative content, and learning from punishment, in turn, may be less based on inferring social evaluative feedback (see Ho et al., 2017; Raihani et al., 2012).

16.2 Communication and the Psychology of Punishment 16.2.1 A Brief History of Punishment Research In the psychological literature, punishment has been somewhat broadly defined as “intentionally providing another person with negative or unwanted outcomes, as a motivated response to the perception that this person has violated widely shared norms, values, or rules” (van Prooijen, 2018, p. 10). (For a narrower treatment of punishment, see Chapter 15, this volume.) Decades of research into people’s punishment behavior indicate that it is motivated by several factors (for recent reviews, see Raihani & Bshary, 2019; van Prooijen, 2018; Wenzel & Okimoto, 2016). Broadly, this research can be grouped into two different kinds. One broad kind comes from behavioral economics and evolutionary psychology and primarily focuses on the ultimate functions of punishment and its adaptive value in the context of evolution. Traditionally, this area of research has operationalized punishment as payoff reductions in the context of incentivized economic games. The most influential explanation for punishment as an adaptive mechanism over the past two decades is that it stimulates cooperation within groups by way of deterring wrongdoers, as well as communicating and reaffirming the group’s norms and values (Fehr & Fischbacher, 2004). Other functional explanations highlight punishment’s competitive functions and beneficial effects for punishers, for instance, reestablishing their status and relative level of resources (Krasnow et al., 2012; Raihani & Bshary, 2019), as well as signaling their trustworthiness to others (Jordan et al., 2016). The second broad kind of research on punishment comes from social psychology and focuses on the proximate mechanisms of punishment, examining why people punish in a given situation. In line with findings from economics and evolutionary psychology, social psychological research has identified the importance of rebalancing status and power and restoring value consensus (e.g., justice restoration theory, Okimoto & Wenzel, 2008; Shnabel & Nadler, 2008). Still, over the last few decades this research tradition has been largely shaped by traditional philosophical theories regarding the appropriate justification for punishment. In this normative tradition, deontological justifications, 6

Footnote 6: Within this broad family, theorists vary with regard to: 1) the message(s) appropriately communicated via punishment; 2) the appropriate target audience; 3) the appropriate means or manner of communicative punishment; and 4) the appropriate (sought-for) response.

historically associated with Kant, are purely retributive and backward-looking: The offender simply deserves to be punished as a reaction to their wrongdoing. By contrast, a consequentialist approach, historically associated with Bentham, justifies punishment in terms of specific forward-oriented consequences: For instance, it serves as a general deterrent by imposing a cost for criminal behavior and/or serves as a specific deterrent by incapacitating offenders; it may also prevent future crime by changing their behavior (rehabilitation). Numerous psychological studies have investigated to what extent retribution (as the deontological factor) versus deterrence (as the main consequentialist factor) drives people’s punishment-related reactions (for reviews, see van Prooijen, 2018; Wenzel & Okimoto, 2016), whereas other philosophical justifications (especially focusing on the expressive and communicative dimensions of punishment) have received comparatively little attention. Results from these studies indicate more support for retribution than deterrence as punishment motive. For instance, in line with retributive theories, studies show that people punish proportionally to the seriousness of a transgression and that people’s punishment recommendations are less affected by factors central to deterrence theories, such as likelihood of detection or transgression frequency (e.g., Carlsmith et al., 2002; Carlsmith et al., 2008). In addition, if people can select which transgression-relevant information they would like to know before they recommend punishment, they select information on transgression seriousness first versus information related to deterrence, such as transgression frequency (Carlsmith, 2006; Keller et al., 2010). A further set of studies has aimed to isolate retributive motives by studying people’s punitive responses toward animals that have attacked human beings (Goodwin & Benforado, 2015).7 In line with the retributive view, the researchers found that people rate animals as “deserving” to be killed for their violent attacks; furthermore, people’s support for inflicting pain on animals during their execution increases with the perceived severity of the attacks. In sum, this body of social psychological research has been taken to support the view that people are first and foremost retributive in their punishment behavior, as they mainly care about giving transgressors what they “deserve.” It is important to stress that most of these findings have been interpreted within a binary choice theoretical framework – retribution versus deterrence, thereby sidelining the potential role of communication as a key motivating factor. Nevertheless, researchers have recognized that communication may still play an important role in people’s punishment behavior. Subsequent work has thus aimed to control for this using carefully designed behavioral experiments. For instance, in so-called hidden punishment experiments, people are purportedly barred from using punishment in an immediately communicative way

Footnote 7: Both general and specific deterrence were purportedly excluded as potential motivating factors – general deterrence for obvious reasons (other animals do not learn from what happens to the attacker) and specific deterrence because punishment was measured as support for having the animal killed, with various degrees of suffering.

because the targets of their punishment will not be informed of the punishment they receive (Crockett et al., 2014; Nadelhoffer et al., 2013). Yet a small percentage of people still punish under these conditions. Similarly, studies have shown that people punish in repeated interactions, even if their targets will not be informed about whether or how much they have been punished until the very end of the experiment, apparently removing its immediate communicative rationale (Fudenberg & Pathak, 2010). Many researchers conclude that this argues in favor of a purely, or at least predominantly, retributive punitive psychology. (For a critique of this conclusion, see Chapter 15, this volume, Section 15.3.3.)

We think the cited results do not make a strong case against the idea that punishment has a communicative dimension. In the first place, the putative isolation of retributive motives in controlled laboratory studies does not entail the stronger conclusion that people's punishment motives are purely retributive. As an analogy, imagine a laboratory study that examines why people wear clothes. Assume that researchers want to study the effect of fashion preferences. If the results showed that people put on clothes of certain colors even if these clothes are hard to wear, it would be valid to conclude that fashion preferences affect people's choice of clothing. Yet, it would not be a valid conclusion that comfort (or other factors such as warmth) play no role in people's clothing decisions in everyday life. Indeed, one or more of these factors may remain an important, even dominating motivation in naturalistic conditions. Returning now to the punishment case, even if "hidden punishment" occurs, other factors may continue to play an important, perhaps dominating role, in more naturalistic conditions. Hence, it does not rule out communication as a proximate mechanism for punishment.8

Second, if communication is an inherent feature of ordinary punishment behavior, studies on hidden punishment may simply demonstrate that people will engage in communicative behaviors even if they know their putative audience cannot hear them. Such behavior is a common occurrence in everyday life. People may show their communicative intent (verbally or otherwise) even when they know their target audience is not "listening" in any demonstrable way. They may do so for many reasons: to express who they are, to get things off their chest, to rehearse what to say if a suitable occasion arises, to solicit (imagined) advice, to (imaginatively) resolve disputes, or simply to seek comfort in the absent person's (imagined) presence. Of course, "communicative behavior" could be defined in such a way as to require successful uptake in a target audience. But since our goal is to understand the psychological mechanisms motivating specific behaviors, we here take communicative intent to be independent of uptake. Building on our earlier discussion of human communication, the point we make here is that performing actions with communicative intent is likely to be a deeply rooted psychological proclivity in beings that are naturally oriented toward securing shared intentionality with others – a proclivity that is not easily "shut off" or subject to rational control. Thus, even in the context of anonymous single-shot punishment, it is certainly possible that punishers are likewise engaging in communicative behavior, even if potential targets of their communicative acts are not physically present.9

These considerations point toward the same conclusion: Even if retributive aspects can be isolated experimentally, such findings do not establish that punishment is primarily retributive. Indeed, several research findings suggest that people's punishment goals are context-dependent, as context changes the salience of punishment-relevant aspects of a given situation (e.g., salience of the crime itself, the offender, or the community; see Gromet & Darley, 2009; Twardawski et al., 2020). For instance, people tend to put more emphasis on the need for deterrence and rehabilitation if the longer-term consequences of punishment are made salient. This already undermines the rationale for a binary choice between "the two" punishment motives (retribution versus deterrence). We think psychological studies would benefit from adopting a research framework that remains open to a potential range of several punishment motives (e.g., Fitness & Peterson, 2008), while perhaps also discerning a unifying thread that may be present in most or all of them.

To this end, we think it helpful to regard punishment as a form of moral communication, capable of conveying a range of potential messages. As we have noted elsewhere (Funk et al., 2014; McGeer & Funk, 2017), a first advantage to this shift in framework is that it resolves a major ongoing debate between retributive and consequentialist theories of punishment: whether people's punishment motives are, or should be, primarily backward-looking or primarily forward-looking. Under a communicative paradigm, punishment can be both retributive and instrumental (see also Duff, 2001). A second advantage of the communicative approach is that it frames punishment as a specifically moral response to wrongdoing – in that way, it satisfies retributivists' normative ambitions; but it does so without relying on the strange moral alchemy that lies at the root of traditional retributivist views, that returning suffering for suffering somehow rectifies a moral wrong. The point of punishment, on the communicative view, is to convey a suitably targeted, expressively powerful, morally loaded message; it is not to inflict suffering as an end in itself (cf. Feinberg, 1965). Given these normative advantages, it is not surprising that the communicative view of punishment has received considerable attention and defense in the philosophical community. But, as a descriptive theory regarding people's actual punishment motives, psychologists have given it surprisingly little attention in the design of their empirical studies over the past two decades. Luckily, times are changing.

8 We add that punishment in hidden punishment cases may also aim at other things than retribution, such as reducing the imbalance of resources (see, for instance, Raihani & Bshary, 2019).

9 For a similar idea, that punishers engage in signaling behavior as a heuristic even if nobody is "watching" in one-shot anonymous interactions, see Jordan and Rand (2020).


16.2.2 Punishment as Communication: An Emerging Trend in Psychological Research

Recently, psychologists have started to study the communicative aspects of punishment experimentally and to differentiate them from retributive components. A first line of findings comes from research on the hedonic effects of punishment. From a retributive perspective, one could expect punishment in itself to be satisfying and restore people's sense of justice. Yet this does not seem to be the case. In general, findings from social psychological studies on the hedonic consequences of punishment show that there are positive as well as negative effects of punishing (Eadeh et al., 2017). In particular, punishers are very sensitive to whether punishment sends a message and how that message is received. If punishers do not hear back from wrongdoers after punishment, they ruminate and are dissatisfied (Carlsmith et al., 2008). Yet when punished wrongdoers indicate that they understand that they are being punished (Gollwitzer et al., 2011) and especially if wrongdoers indicate a change in their behavior and attitude (Fischer et al., 2022; Funk et al., 2014), punishers' sense of justice is restored. Numerous studies have shown that people have less desire to punish offenders who are remorseful (e.g., Corwin et al., 2012). These findings fit well with the view that punishment is ideally a form of bidirectional (i.e., dialogical) moral communication: To wit, punishers intentionally express something via punishment that punished individuals are meant to understand – punishment carries symbolic meaning; and, likewise, punishers adjust their punishment behavior in reaction to intentional messages they receive back from the wrongdoers (for similar thoughts, see for instance Boon & Yoshimura, 2020; Nahmias & Aharoni, 2018).

A second line of research explores the necessity of punishment's materially punitive edge. In support of the communicative view, studies show that people expect harmless but symbolic punishment to be as effective as harmful punishment if its message is clear (Sarin et al., 2021). Similarly, in real interactions, people accept trade-offs between punishment and other forms of moral communication. For instance, they reduce their punishment if they can also send along a note of disapproval (Xiao & Houser, 2005), and they opt for the less severe punishment if they can let the wrongdoer know why they are being punished (Molnar et al., 2023). Once again, these findings highlight that punishment behavior does not seem to be solely about punitive paybacks and proportionality as a purely retributive account of punishment would suggest.

16.2.3 Toward a Systematic Approach to Moral Communication: Content, Targets, and Means

We turn now to a consideration of how conceptualizing punishment as communication can provide researchers with a scaffold to systematically study and understand how people react to wrongdoing. If the underlying psychological mechanism of the urge to punish is communicative in nature, then three


interrelated questions emerge as important to explore in depth: What is being communicated? Who is the target of communication? And how is this communicative content conveyed? Ultimately, a better understanding of these dimensions of punishment will allow theorists to question the efficacy of punishment in achieving its communicative goals and to invite comparison with other potentially more effective forms of moral communication. The Content of Communication. What people aim to communicate via punishment is closely related to how people have construed the wrongdoing and what it meant to them psychologically. Did people construe it as a violation of their own personal status, for instance, or did they mainly see it as a violation of group norms? Punishment may communicate that someone has been wronged, or it may send a message reaffirming the victim’s standing, their values, or their moral identity (see, e.g., Okimoto & Wenzel, 2008; Shnabel & Nadler, 2008). It may effectively “say” to others: “I won’t allow anybody to walk all over me” (Crombag et al., 2003), or “I’m a trustworthy person because I condemn such wrongdoing” (Jordan & Rand, 2020). It may communicate “I’m a member of a particular moral tribe”; “I care about a set of norms and their breaches, and I’m disposed to police the norms in question” (Shoemaker & Vargas, 2019). It may communicate a desire regarding the future – for example, that the wrongdoer feels remorse and commits to change (Funk et al., 2014); it may even communicate a desire that wrongdoers suffer in light of their wrongdoing. And such desires might be essentially self-regarding (e.g., related to a victim’s need to restore their own standing) or they might be essentially other-regarding (e.g., related to a victim’s desire to prevent the victimization of others). There is a range of possibilities, with varying degrees of moral content. For researchers, it would be interesting to investigate factors that modulate expressed moral content; for instance, which message gets priority in what contexts – and why. Importantly, once punishment is conceptualized as a nonlinguistic form of moral communication, it has one salient advantage over explicitly linguistic forms of communication: It may carry many symbolic messages simultaneously, either to the same target or to several targets, and potentially each with a different grade of urgency or importance to the punisher. The Target of Communication. An interesting feature of punishment as communication is that the message may or may not be targeting the recipient of punishment. Obviously, punishment may communicate something to the wrongdoer, as previous research on the hedonic effects of punishment suggests (e.g., Funk et al., 2014; Gollwitzer et al., 2011). But, as we noted earlier, the message may also be intended for others – for example, for particularly vulnerable members of the community (“You will be supported and protected”) or for the community as a whole (“Let’s all agree that this is a serious moral wrong that should be generally condemned”) (see also Feinberg, 1965). Notably, the two consequentialist punishment motives, specific prevention and general prevention, can be accommodated, albeit reconfigured, within this communicative framework. While in the traditional picture, punishment is viewed as a relatively crude behavioral-conditioning device, it is here explicitly conceptualized


as sending a message to the original wrongdoer or to the public at large that such wrongdoings will not be tolerated. A specific example on using punishment to communicate with various possible targets can be found within the legal system. While offenders are clearly the target of institutionalized punishment, the criminal justice system explicitly aims to communicate the validity of norms to several other targets, including the victims and their families, future potential wrongdoers, as well as potential voters and concerned citizens who are afraid of crime and who are supposed to get the message that wrong deeds are taken seriously (Feinberg, 1965; Sunstein, 1996). The Means of Communication. There are many different forms of punishment and many ways in which punishment can be administered. Sometimes people react to wrongdoings with punishment that is proportional to the severity of the transgression (Carlsmith et al., 2002) and may use retaliation as a quick default mode of communication. Within the communicative framework, this core retributive principle can be expressively reconceptualized. “Proportional” punishment may communicate to the community “this is how wrong it was,” or to the offender “this is how much you need to change” (Duff, 2001). Still, the communicative framework can also explain why, at other times, a disproportional, more lenient reaction may be observed. In such cases, particular contextual factors may ensure that a more lenient punishment (or other symbolic gesture) is sufficient to convey a particular message effectively (see also Sarin et al., 2021). For instance, community service is often sufficient in restorative justice settings to assure all the stakeholders that the crime has been taken seriously and suitable reparative measures have been undertaken (Johnstone, 2013; Rossner & Bruce, 2016; Shapland et al., 2006). Or, to go to another example, a skeptically raised eyebrow may be enough to publicly lower a perpetrator’s status and reputation in the eyes of others. At the other extreme, punishers may decide not to interact again with the wrongdoer at all in order to communicate condemnation of the wrongdoer’s actions and attitudes (to the wrongdoer and/or to the group as a whole), to reaffirm solidarity with whatever group norms the wrongdoer has violated, and, finally, to endorse by way of enacting group norms for dealing with wrongdoers.

16.2.4 Putting the Communicative Framework to Work in Current and Future Research

Application to Current Findings. We here suggest that some otherwise puzzling results in punishment research may be profitably addressed by adopting the communicative framework we recommend – for instance, the mixed profile of findings related to harsher and/or more lenient punishment in intra- versus intergroup settings. On the one hand, studies show that deviant in-group transgressors are evaluated more negatively than deviant out-group transgressors (the so-called black-sheep effect; Marques et al., 1988). In addition, there is an independent effect of out-group leniency where punishers adjust their punishment downward if the transgressor belongs to a group that is of lower status than their own group (Braun & Gollwitzer, 2012). On the other hand, there are also


instances of in-group leniency ("in-group favoritism"), that is, in-group transgressors are punished less harshly than out-group transgressors (see, e.g., Sommers & Ellsworth, 2000, or, for research findings from developmental psychology with children as participants, see Jordan et al., 2014). These apparently contradictory results may be disentangled by way of identifying who the primary communicative target of punishment is meant to be (along with identifying the exact message it is meant to convey). For instance, is the primary communicative target the actual wrongdoer, in which case in-group leniency might be preferred if the message is regarded as effectively conveyed in a nonpunitive way (see, e.g., Sarin et al., 2021)? Or are other in-group members the primary target, so that punishment would validate the group's morals under attack, potentially resulting in harsher treatment of the black sheep? Or is punishment about confirming the victim's membership status in the group? Or, finally, are out-group members the primary communicative target, which may result in either harshness or leniency, depending on what exactly the intended message is meant to be? Again, these examples highlight that punishment may be a means of communication, and the content depends on how the transgression has been construed.

In emphasizing that the punished entity is not necessarily the target of communication, research findings on punishing animals (Goodwin & Benforado, 2015) need not be interpreted in line with a purely retributive perspective. Instead, animal punishers might be sending a message to those humans who observe the punishment, to the group as a whole, or even to the experimenter who will see participants' response choices. Animals may also serve as a symbolic proxy for human beings, so that punishing them sends a message about what kinds of behavior in human beings would be considered wrong and subject to group condemnation. Similarly, identifying the target of communication helps in understanding the psychology of displaced punishment (i.e., when punishers do not punish the actual transgressor but someone else). Research on displaced revenge has illustrated that punishment can be satisfying if it does not punish the original wrongdoer but another member of the target group (Sjöström & Gollwitzer, 2015) or even a person who is symbolically similar to the original transgressor (Washburn & Skitka, 2015). These findings are hard to square with a purely retributive psychology of punishment but make sense on the communicative view.

Scaffolding Future Research. A communicative view of punishment invites researchers to further explore the content, target, and means of this robust human phenomenon, leading to a broader understanding and integration of the various findings uncovered thus far as well as inspiring work that still needs to be done. For instance, investigating punishment as communication generates interesting new research questions regarding the differences as well as the commonalities between second- and third-party punishment (i.e., punishment carried out by victims or observers). Is it the content, the target(s), and/or the means of communication that differ between them? Do second- versus third-party punishers, for instance, care differently about reaching various targets of communication (such as the transgressor and/or the community), potentially because they react more automatically versus deliberately in a given situation?


Such future research should also look into the ontogeny of punishment (and its alternatives) as communication. Adding to the many interesting studies about children as punishers (e.g., McAuliffe et al., 2015; Riedl et al., 2015), it would be worth studying at what age children start using punishment in a dialogical, communicative way. In addition, while gaining a better understanding about how socialization and cultural learning affect children's displayed punishment behavior (see, e.g., Salali et al., 2015; Wu & Gao, 2018) and how punishment behavior differs between children and adults (e.g., related to second- versus third-party punishment or in regard to being sensitive toward intent and outcome; see Bernhard et al., 2020), it would be worth studying how content, target, and/or means of punishment as communication may change over the course of children's development.

Nonpunitive Forms of Moral Communication. Importantly, people can react to wrongdoing with communicative acts that do not involve retributive or punitive elements. Indeed, the prevalence of punishment behavior in economic games varies between subjects from different societies (e.g., Henrich et al., 2006). In addition, if nonpunitive options are included in research paradigms, victims often prefer compensation over punishment (Heffner & FeldmanHall, 2019; for similar findings with children as participants see for instance Yang et al., 2021). Moreover, third-party observers often choose to help the victim instead of punishing the perpetrator (Chavez & Bicchieri, 2013; for children as participants see Lee & Warneken, 2020), prioritizing communicative acts that acknowledge the victim's value as a means of restoring justice. All of these reactions to injustice carry meaning and can be considered vehicles of communication. Hence, a final advantage of the communicative approach we advocate here is that it has resources for explaining not just why people punish but why they sometimes refrain from punishing. Ideally, a theory of punishment will explain both. Notably, a purely retributive framework is not up to this task and cannot make any predictions. By its logic, so long as there are wrongdoers, they simply deserve to be punished. By contrast, punishment from a framework of moral communication may help predict when and why people might forego punishment. What was the wrong? What did the wrong communicate to begin with? How does it affect me as a victim or as an observer? What do I think about how others think it affects me? The construal of wrongs determines how people react to them. Future research would benefit from systematically using the framework of moral communication to identify the aspects that make punishment (versus compensation, helping, etc.) the most likely reaction to injustice, thereby more fully explaining why people do in fact punish.

Identifying the situations in which people's underlying motives can also be addressed without punishment has important normative implications. If people's need for punishment changes depending on the availability of other communicative options that are perceived to be suitable in a given setting, it would be desirable to educate people about these effects in order to de-escalate conflicts and to avoid a ratcheting-up effect of blame (McGeer, 2013; McGeer & Funk, 2017). It would then directly benefit victims, transgressors, and observers of injustice to make such options available in the broader interests of "restoring


justice,” as advocates of this explicitly named movement strongly recommend (Braithwaite, 2002; Braithwaite & Pettit, 1990; Johnstone, 2013; Strang, 2002).

16.3 Conclusion

In this chapter, we have examined our human concern with, and commitment to, practices of moral communication – communicating about people's attitudes and behavior regarding a distinctive range of moralized norms. In the first part of the chapter, our aim was to situate people's concern with moralized norms in the context of their more general commitment to a norm-governed form of life. This enabled us to zero in on what is distinctive about moralized norms from a more encompassing perspective, leading us to place special emphasis on people's characteristically punitive response to their violation, a response that invariably expresses moral disapproval or condemnation of the transgressors' acts and attitudes. This led to our primary hypothesis that punishment (versus other kinds of merely instrumental negative sanction, e.g., fines) is fruitfully understood as a distinctive form of (essentially dialogical) moral communication. Our aim in the second part of the chapter was to put this hypothesis to work in an empirical context, showing how it profitably reframes many of the debates over existing data on the psychology of punishment, as well as opening up a number of interesting avenues for further research.

We close by reinforcing three critical points made salient by this communicative approach: 1) as a nonlinguistic form of communication, punishment is particularly apt for conveying a number of distinct messages to different target audiences at once (i.e., it can carry multiple symbolic meanings); 2) as a materially and/or psychologically costly form of communication, punishment has a distinctive power, conveying a message that is difficult to ignore (for instance, that transgressions of these sorts of norms will simply not be tolerated); and, finally, 3) as a form of human communication, punishment is essentially attitude-focused (i.e., concerned about other people's mental states) as well as bidirectional or dialogical, aiming to elicit an appropriate response from its audience given the message conveyed. This last point is particularly worth emphasizing, as it opens the door to a range of questions concerning how people's taste for punishment can be modulated (in terms of severity and/or means) once its communicative raison d'être has been sufficiently brought to light.

References

Baumard, N., André, J.-B., & Sperber, D. (2013). A mutualistic approach to morality: The evolution of fairness by partner choice. Behavioral and Brain Sciences, 36(1), 59–78.
Bennett, C. (2008). The apology ritual: A philosophical theory of punishment. Cambridge University Press.


Bernhard, R. M., Martin, J. W., & Warneken, F. (2020). Why do children punish? Fair outcomes matter more than intent in children’s second- and third-party punishment. Journal of Experimental Child Psychology, 200, Article 104909. Berniunas, ¯ R., Dranseika, V., & Sousa, P. (2016). Are there different moral domains? Evidence from Mongolia. Asian Journal of Social Psychology, 19(3), 275–282. Boon, S. D., & Yoshimura, S. M. (2020). Revenge as social interaction: Merging social psychological and interpersonal communication approaches to the study of vengeful behavior. Social and Personality Psychology Compass, 14(9), Article e12554. Braithwaite, J. (2002). Restorative justice and responsive regulation. Oxford University Press. Braithwaite, J., & Pettit, P. (1990). Not just deserts: A republican theory of criminal justice. Oxford University Press. Braun, J., & Gollwitzer, M. (2012). Leniency for out-group offenders. European Journal of Social Psychology, 42(7), 883–892. Carlsmith, K. M. (2006). The roles of retribution and utility in determining punishment. Journal of Experimental Social Psychology, 42(4), 437–451. Carlsmith, K. M., Darley, J. M., & Robinson, P. H. (2002). Why do we punish? Deterrence and just deserts as motives for punishment. Journal of Personality and Social Psychology, 83(2), 284–299. Carlsmith, K. M., Wilson, T. D., & Gilbert, D. T. (2008). The paradoxical consequences of revenge. Journal of Personality and Social Psychology, 95(6), 1316–1324. Chavez, A. K., & Bicchieri, C. (2013). Third-party sanctioning and compensation behavior: Findings from the ultimatum game. Journal of Economic Psychology, 39, 268–277. Corwin, E. P., Cramer, R. J., Griffin, D. A., & Brodsky, S. L. (2012). Defendant remorse, need for affect, and juror sentencing decisions. Journal of the American Academy of Psychiatry and the Law Online, 40(1), 41–49. Crockett, M. J., Özdemir, Y., & Fehr, E. (2014). The value of vengeance and the demand for deterrence. Journal of Experimental Psychology: General, 143(6), 2279–2286. Crombag, H., Rassin, E., & Horselenberg, R. (2003). On vengeance. Psychology, Crime & Law, 9(4), 333–344. Duff, R. A. (2001). Punishment, communication and community. Oxford University Press. Eadeh, F. R., Peak, S. A., & Lambert, A. J. (2017). The bittersweet taste of revenge: On the negative and positive consequences of retaliation. Journal of Experimental Social Psychology, 68, 27–39. Fehr, E., & Fischbacher, U. (2004). Third-party punishment and social norms. Evolution and Human Behavior, 25(2), 63–87. Feinberg, J. (1965). The expressive function of punishment. The Monist, 49(3), 397–423. Fischer, M., Twardawski, M., Strelan, P., & Gollwitzer, M. (2022). Victims need more than power: Empowerment and moral change independently predict victims’ satisfaction and willingness to reconcile. Journal of Personality and Social Psychology, 123(3), 518–536. Fitness, J., & Peterson, J. (2008). Punishment and forgiveness in close relationships: An evolutionary, social-psychological perspective. In J. P. Forgas & J. Fitness (Eds.), Social relationships: Cognitive, affective, and motivational processes (pp. 255–269). Psychology Press.


Fudenberg, D., & Pathak, P. A. (2010). Unobserved punishment supports cooperation. Journal of Public Economics, 94(1–2), 78–86. Funk, F., McGeer, V., & Gollwitzer, M. (2014). Get the message: Punishment is satisfying if the transgressor responds to its communicative intent. Personality and Social Psychology Bulletin, 40(8), 986–997. Gergely, G., & Király, I. (2019). Natural pedagogy of social emotions. In D. Dukes & F. Clément (Eds.), Foundations of affective social learning: Conceptualizing the social transmission of value (pp. 87–114). Cambridge University Press. Gilbert, M. (1994). Me, you, and us: Distinguishing “egoism,” “altruism,” and “groupism.” Behavioral and Brain Sciences, 17(4), 621–622. Gollwitzer, M., Meder, M., & Schmitt, M. (2011). What gives victims satisfaction when they seek revenge? European Journal of Social Psychology, 41(3), 364–374. Goodall, J. (1986). The chimpanzees of Gombe: Patterns of behavior. Harvard University Press. Goodwin, G. P., & Benforado, A. (2015). Judging the goring ox: Retribution directed toward animals. Cognitive Science, 39(3), 619–646. Goodwin, G. P., & Darley, J. M. (2008). The psychology of meta-ethics: Exploring objectivism. Cognition, 106(3), 1339–1366. Gray, K., Waytz, A., & Young, L. (2012). The moral dyad: A fundamental template unifying moral judgment. Psychological Inquiry, 23(2), 206–215. Grice, H. P. (1969). Utterer’s meaning and intentions. The Philosophical Review, 78(2), 147–177. Gromet, D. M., & Darley, J. M. (2009). Punishment and beyond: Achieving justice through the satisfaction of multiple goals. Law & Society Review, 43 (1), 1–38. Heffner, J., & FeldmanHall, O. (2019). Why we don’t always punish: Preferences for non-punitive responses to moral violations. Scientific Reports, 9(1), 1–13. Henrich, J., McElreath, R., Barr, A., Ensminger, J., Barrett, C., Botyanatz, A., Cardenas, J. C., Gurven, M., Gwako, E., Henrich, N., Lesorogol, C., Marlowe, F., Tracer, D., & Ziker, J. (2006). Costly punishment across human societies. Science, 312, 1767–1770. Ho, M. K., MacGlashan, J., Littman, M. L., & Cushman, F. (2017). Social is special: A normative framework for teaching with and learning from evaluative feedback. Cognition, 167, 91–106. Hoehl, S., Keupp, S., Schleihauf, H., McGuigan, N., Buttelmann, D., & Whiten, A. (2019). ‘Over-imitation’: A review and appraisal of a decade of research. Developmental Review, 51, 90–108. Horner, V., & Whiten, A. (2005). Causal knowledge and imitation/emulation switching in chimpanzees (Pan troglodytes) and children (Homo sapiens). Animal Cognition, 8(3), 164–181. Hutchins, E. (2014). The cultural ecosystem of human cognition. Philosophical Psychology, 27(1), 34–49. Johnstone, G. (2013). Restorative justice: Ideas, values, debates. Routledge. Jordan, J. J., Hoffman, M., Bloom, P., & Rand, D. G. (2016). Third-party punishment as a costly signal of trustworthiness. Nature, 530(7591), 473–476. Jordan, J. J., McAuliffe, K., & Warneken, F. (2014). Development of in-group favoritism in children’s third-party punishment of selfishness. Proceedings of the National Academy of Sciences, 111(35), 12710–12715.


Jordan, J. J., & Rand, D. G. (2020). Signaling when no one is watching: A reputation heuristics account of outrage and punishment in one-shot anonymous interactions. Journal of Personality and Social Psychology, 118(1), 57–88. Kaminski, J., Call, J., & Tomasello, M. (2004). Body orientation and face orientation: Two factors controlling apes’ begging behavior from humans. Animal Cognition, 7(4), 216–223. Keller, L. B., Oswald, M. E., Stucki, I., & Gollwitzer, M. (2010). A closer look at an eye for an eye: Laypersons’ punishment decisions are primarily driven by retributive motives. Social Justice Research, 23(2–3), 99–116. Kelly, D., & Stich, S. (2007). Two theories about the cognitive architecture underlying morality. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind: Vol. 3. Foundations and the future (pp. 348–366). Oxford University Press. Kenward, B. (2012). Over-imitating preschoolers believe unnecessary actions are normative and enforce their performance by a third party. Journal of Experimental Child Psychology, 112(2), 195–207. Keupp, S., Bancken, C., Schillmöller, J., Rakoczy, H., & Behne, T. (2016). Rational over-imitation: Preschoolers consider material costs and copy causally irrelevant actions selectively. Cognition, 147, 85–92. Keupp, S., Behne, T., & Rakoczy, H. (2013). Why do children overimitate? Normativity is crucial. Journal of Experimental Child Psychology, 116(2), 392–406. Király, I., Csibra, G., & Gergely, G. (2013). Beyond rational imitation: Learning arbitrary means actions from communicative demonstrations. Journal of Experimental Child Psychology, 116(2), 471–486. Krasnow, M. M., Cosmides, L., Pedersen, E. J., & Tooby, J. (2012). What are punishment and reputation for? PLoS ONE, 7(9), Article e45662. Lacey, N. (1988). State punishment. Routledge. Lee, Y., & Warneken, F. (2020). Children’s evaluations of third-party responses to unfairness: Children prefer helping over punishment. Cognition, 205, Article 104374. Marques, J. M., Yzerbyt, V. Y., & Leyens, J.-P. (1988). The ‘Black Sheep Effect’: Extremity of judgments towards ingroup members as a function of group identification. European Journal of Social Psychology, 18(1), 1–16. McAuliffe, K., Jordan, J. J., & Warneken, F. (2015). Costly third-party punishment in young children. Cognition, 134, 1–10. McGeer, V. (2008). Varieties of moral agency: Lessons from autism (and psychopathy). In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 3. The neuroscience of morality: Emotion, brain disorders, and development (pp. 227–257). MIT Press. McGeer, V. (2013). Civilizing blame. In J. D. Coates & N. A. Tognazzini (Eds.), Blame: Its nature and norms (pp. 162–188). Oxford University Press. McGeer, V. (2020). Enculturating folk psychologists. Synthese, 199(1–2), 1039–1063. McGeer, V., & Funk, F. (2017). Are ‘optimistic’ theories of criminal justice psychologically feasible? The probative case of civic republicanism. Criminal Law and Philosophy, 11(3), 523–544. Molnar, A., Chaudhry, S., & Loewenstein, G. F. (2023). “It’s not about the money. It’s about sending a message!” Avengers want offenders to understand the reason for revenge. Organizational Behavior and Human Decision Processes, 174, Article 104207. Moore, R. (2016). Meaning and ostension in great ape gestural communication. Animal Cognition, 19(1), 223–231.


Nadelhoffer, T., Heshmati, S., Kaplan, D., & Nichols, S. (2013). Folk retributivism and the communication confound. Economics and Philosophy, 29(2), 235–261. Nahmias, E., & Aharoni, E. (2018). Communicative theories of punishment and the impact of apology. In C. W. Surprenant (Ed.), Rethinking punishment in the era of mass incarceration (pp. 144–161). Routledge. Nucci, L. P. (2001). Education in the moral domain. Cambridge University Press. Nucci, L. P., & Nucci, M. S. (1982). Children’s social interactions in the context of moral and conventional transgressions. Child Development, 53(2), 403–412. Okimoto, T. G., & Wenzel, M. (2008). The symbolic meaning of transgressions: Towards a unifying framework of justice restoration. Advances in Group Processes, 25, 291–326. Piazza, J., Sousa, P., Rottman, J., & Syropoulos, S. (2019). Which appraisals are foundational to moral judgment? Harm, injustice, and beyond. Social Psychological and Personality Science, 10(7), 903–913. Raihani, N. J., & Bshary, R. (2019). Punishment: One tool, many uses. Evolutionary Human Sciences, 1, Article e12. Raihani, N. J., Thornton, A., & Bshary, R. (2012). Punishment and cooperation in nature. Trends in Ecology & Evolution, 27(5), 288–295. Rakoczy, H., & Tomasello, M. (2007). The ontogeny of social ontology: Steps to shared intentionality and status functions. In S. L. Tsohatzidis (Ed.), Intentional acts and institutional facts: Essays on John Searle’s social ontology (pp. 113–137). Springer Netherlands. Riedl, K., Jensen, K., Call, J., & Tomasello, M. (2015). Restorative justice in children. Current Biology, 25(13), 1731–1735. Rossner, M., & Bruce, J. (2016). Community participation in restorative justice: Rituals, reintegration, and quasi-professionalization. Victims & Offenders, 11(1), 107–125. Salali, G. D., Juda, M., & Henrich, J. (2015). Transmission and development of costly punishment in children. Evolution and Human Behavior, 36(2), 86–94. Sarin, A., Ho, M., Martin, J., & Cushman, F. A. (2021). Punishment is organized around principles of communicative inference. Cognition, 208, Article 104544. Shapland, J., Atkinson, A., Atkinson, H., Colledge, E., Dignan, J., Howes, M., Johnstone, J., Robinson, G., & Sorsby, A. (2006). Situating restorative justice within criminal justice. Theoretical Criminology, 10(4), 505–532. Shnabel, N., & Nadler, A. (2008). A needs-based model of reconciliation: Satisfying the differential emotional needs of victim and perpetrator as a key to promoting reconciliation. Journal of Personality and Social Psychology, 94(1), 116–132. Shoemaker, D., & Vargas, M. (2019). Moral torch fishing: A signaling theory of blame. Noûs, 55(3), 581–602. Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The “big three” of morality (autonomy, community, divinity) and the “big three” explanations of suffering. In A. Brandt & P. Rozin (Eds.), Morality and health (pp. 119–169). Routledge. Sjöström, A., & Gollwitzer, M. (2015). Displaced revenge: Can revenge taste “sweet” if it aims at a different target? Journal of Experimental Social Psychology, 56, 191–202. Skitka, L. J., Hanson, B. E., Morgan, G. S., & Wisneski, D. C. (2021). The psychology of moral conviction. Annual Review of Psychology, 72, 347–366. Smetana, J. G. (1993). Understanding of social rules. In M. Bennett (Ed.), Development of social cognition: The child as psychologist (pp. 111–141). The Guilford Press.


Smetana, J. G., & Braeges, J. L. (1990). The development of toddlers’ moral and conventional judgments. Merrill-Palmer Quarterly, 36(3), 329–346. Sommers, S. R., & Ellsworth, P. C. (2000). Race in the courtroom: Perceptions of guilt and dispositional attributions. Personality and Social Psychology Bulletin, 26(11), 1367–1379. Southgate, V., Van Maanen, C., & Csibra, G. (2007). Infant pointing: Communication to cooperate or communication to learn? Child Development, 78(3), 735–740. Southwood, N. (2011). The moral/conventional distinction. Mind, 120(479), 761–802. Sripada, C., & Stich, S. (2006). A framework for the psychology of norms. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind: Culture and cognition (Vol. 2, pp. 280–301). Oxford University Press. Sterelny, K. (2012). The evolved apprentice. MIT Press. Strang, H. (2002). Repair or revenge: Victims and restorative justice. Clarendon Press. Strawson, P. F. (1962). Freedom and resentment. Proceedings of the British Academy, 48, 187–211. Sunstein, C. R. (1996). On the expressive function of law. University of Pennsylvania Law Review, 144(5), 2021–2053. Tomasello, M., & Call, J. (2019). Thirty years of great ape gestures. Animal Cognition, 22(4), 461–469. Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675–691. Tomasello, M., Carpenter, M., & Liszkowski, U. (2007). A new look at infant pointing. Child Development, 78(3), 705–722. Turiel, E. (1983). The development of social knowledge: Morality and convention. Cambridge University Press. Twardawski, M., Tang, K. T. Y., & Hilbig, B. E. (2020). Is it all about retribution? The flexibility of punishment goals. Social Justice Research, 33(2), 195–218. van Prooijen, J.-W. (2018). The moral punishment instinct. Oxford University Press. Voiklis, J., & Malle, B. F. (2017). Moral cognition and its basis in social cognition and social regulation. In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 108–120). The Guilford Press. von Hirsch, A. (1993). Censure and sanctions. Oxford University Press. Washburn, A., & Skitka, L. J. (2015). Motivated and displaced revenge: Remembering 9/11 suppresses opposition to military intervention in Syria (for some). Analyses of Social Issues and Public Policy (ASAP), 15(1), 89–104. Wenzel, M., & Okimoto, T. G. (2016). Retributive justice. In C. Sabbagh & M. Schmitt (Eds.), Handbook of social justice theory and research (pp. 237–256). Springer. Wu, Z., & Gao, X. (2018). Preschoolers’ group bias in punishing selfishness in the Ultimatum Game. Journal of Experimental Child Psychology, 166, 280–292. Xiao, E., & Houser, D. (2005). Emotion expression in human punishment behavior. Proceedings of the National Academy of Sciences, 102(20), 7398–7401. Yamamoto, S., Humle, T., & Tanaka, M. (2012). Chimpanzees’ flexible targeted helping based on an understanding of conspecifics’ goals. Proceedings of the National Academy of Sciences, 109(9), 3588–3592. Yang, X., Wu, Z., & Dunham, Y. (2021). Children’s restorative justice in an intergroup context. Social Development, 30(3), 663–683.

PART IV

Origins, Development, and Variation

17 Grounding Moral Psychology in Evolution, Neurobiology, and Culture

Darcia Narvaez

A moral psychology grounded in evolution, neurobiology, and cultural influence is vastly different from a moral psychology that is not so grounded. Each of these three topics is discussed in this chapter. To attend to evolution means to take into account humanity's deep history, not just recent civilizations and theories (Henley et al., 2019). It means drawing on humanity's social mammalian heritage, including the social mammalian system for raising the young (Narvaez et al., 2013). Taking into account neurobiology means to understand the human individual's profound immaturity at birth, the influence of social experiences on neurobiological structures in early life, and the individual's long maturational schedule (till nearly age 30) (Bethlehem et al., 2022; Montagu, 1968). It means understanding human beings as biosocial creatures, whose sociality is highly influenced by their biology, a biology shaped by caregivers and community (Ingold, 2013). It means understanding how neurobiology shapes dispositional moral orientations and situational mindsets. Finally, to address cultural influences means to attend to the stories a culture conveys along with its daily practices, especially in regard to the raising of children (Narvaez, 2014). Nature does not make "bad" (dysregulated, disconnected, irresponsible) creatures, but culture can – within one or multiple generations, child raising can change and epigenetic effects can take hold (Maté & Maté, 2022; Wolynn, 2016). Culture can undermine the development and maintenance of what will be described as species-typical psychosocial neurobiology, upon which is built species-typical sociality and morality (Narvaez, 2021).

Before moving forward, a definition of optimal morality would be helpful. I take a view inspired by the hints at the importance of early development that Aristotle and Mencius provided. The lasting effects of early experience are now supported by contemporary biological sciences. From a transdisciplinary perspective, optimal moral intelligence represents comprehensive virtue, defined here as holistically coordinated physiological, psychological, spiritual systems oriented toward holistic communal harmony, social attunement, receptivity, and interpersonal flexibility (Narvaez, 2014). These are rooted in well-functioning neurobiological structures and multiple intelligences. Virtue entails the full coordination of intrapersonal capacities and responsibilities to balance with interpersonal needs in the moment (relational attunement) and that guide imagined possibility and planning that takes into account the web of life (communal imagination). There are multiple processes or capacities that are required for virtuous moral


      

intelligence in action (Narvaez, 2010; Rest, 1983). The overall categories for these processes include moral perception, moral sensitivity, moral reasoning/judgment, moral motivation, moral identity, moral action capacities, as well as ego strength (the ability to persevere to action completion against all obstacles and discouragement).1 All these must function in coordination for virtuous moral behavior to take place. With well-constructed neurobiology and social support throughout life, virtue becomes a combination of social wu-wei, effortless action with and for the other in the moment (Slingerland, 2014), and social yu-wei, using abstracting capacities to plan inclusively. We evolved to develop such capacities naturally, within a supportive community (Narvaez, 2016), though details vary by locale and culture. I follow the traditional, earthcentric Indigenous worldview and consider the full web of life as part of interpersonal moral concern (Narvaez, Four Arrows, et al., 2019; Topa & Narvaez, 2022). Table 17.1 contrasts aspects of this worldview with the dominant worldview rooted in Western Enlightenment culture.

In order to outline different moral paths and their development, we need to address three aspects: realism, idealism, and pragmatism (Lodge, 1944). First, we take a realistic assessment of what sort of creature we are, how we are shaped into our nature, and how things can go wrong. To understand the realms of possibility for our nature and capacities, to lay out the ideals, we can examine what is optimal functioning for our species, from a holistic perspective. At the same time, we can examine pragmatics: What did we evolve to reach our optimal functioning? What do communities provide to maintain our moral optimality?

The overall argument is this: Childhood experience matters for psychosocial neurobiology, shaping basic orientation-by-situation schemas toward social trust or distrust, openness to the other, or self-protectionism. Optimal functioning encompasses what helps individuals and diverse communities (human and other than human) flourish in a balanced, give-and-take, mutualistic manner via meeting basic needs through sharing and gifting, as was predominant in traditional nonindustrialized societies around the world (Widlok, 2017). Basic need fulfillment through our species' developmental system of support fosters and maintains the nature of the species (Narvaez, 2018). The end is balance – balance in individual, relational, and ecological systems. As we will describe, to grow a healthy virtuous human being, our species' evolved nest in early childhood is required, or else extensive sanctions or healing interventions will be

1 Although others have examined the evolution of reasoning and judgment (Krebs, 2005), I do not follow that example here for several reasons. First, the abstracting kind of reason that Westerners emphasize represents a recent phenomenon in the history of Homo sapiens, reflecting a shift away from concrete know-how and away from presence (Abram, 1996; Ong, 2002). Second, Western views of reasoning (typically emotionally and relationally detached) are Western cultural adhesives, meaning they do not represent the kind of reasoning human beings typically employ outside of calculative schooling that Westerners advocate (e.g., Luria, 1976). Third, abstracting reason is not representative of humanity's highest form of being, moral virtue, which involves a coordination of emotion, perception, intuition, reason, and concrete know-how applied in the right way at the right time.



Table 17.1 Morally relevant aspects of human existence contrasted in traditional earthcentric Indigenous societies and the dominant worldview based in Western Enlightenment

Aspect | Traditional, earthcentric, Indigenous societies | Dominant worldview based in Western Enlightenment
Self-regulation | Facilitated and coached | Coerced (e.g., sleep training, punishment)
Empathy | Experienced, modeled, expansive to include other than humans | Expected for kin and in-group
Relationships | Define being human | Utilitarian
View of rest of natural world | Treated as sentient role models, teachers | Treated as inert, dumb, or inferior
Perception | Holistic, inclusive of manifest and unmanifest beings and energies | Underdeveloped, focused on materialistic human interests
Intuition | Well-educated emotions and lifeway know-how | Underdeveloped and thus often untrustworthy
Sensitivity | Manner of relating to web of life | Diminished, anthropocentric
Reasoning/judgment | Distrusted unless based in concrete experience; communal with biocommunity in mind to seven generations | Emphasized but detached from relationship and emotion, anthropocentric
Motivation/focus | Enhancement of community and web of life | Getting ahead of the competition
Action | For community | For me and mine

17.1 Developmental Evolutionary Psychology Theory Evolution refers to the shift of planetary ecologies across time, shifts in ecology (e.g., climate patterns), ecosystems, and species changing dynamically through symbiosis, gene exchange, and natural selection (Jablonka & Lamb, 2006). Evolution by natural selection, put forward by Charles Darwin (1859/ 1962), refers to one mechanism, now understood as genetic adaptation. Across generations, most genetic characteristics are conserved, operating adaptively. Few genetic mutations are selected for because prior adaptations are working

412

      

well enough. That is, the vast majority of genetic information is conserved into the next generation. Retrospectively, it is possible to observe that a particular genetic mutation was correlated with survival in comparison to rival genes across multiple generations. Making it to reproduction is not enough. Individuals must not only survive but thrive to reproduction and then their offspring must outcompete rivals with different genetic mutations, for multiple generations. What is often overlooked is that survival and thriving depend on a well-constructed creature. For mammals like us, early life undercare and/or trauma are not conducive to survival, thriving, or outcompeting rivals across generations, as research on adverse childhood experiences is demonstrating (e.g., Felitti & Anda, 2005).

The story of evolution has been co-opted by the cultural forces that benefit from emphasizing "survival of the fittest," misunderstanding human evolution so much as to conclude that the selfish survive best (Midgley, 2010). Some scientists have truncated how natural selection works and emphasize getting to reproduction as the end game. Thus, it is a "win" for natural selection if a child has a baby at age 8. They are confusing functional adaptation (reactions within a particular life) with evolutionary adaptation by natural selection (outcompeting rivals across generations; Narvaez, Gettler et al., 2016). The accurate and parsimonious position understands that having a baby at age 8 is a sign of early developmental disruption, specifically, endocrine disruption from pollution (e.g., BPA plastic), biopsychosocial stress, and/or excessive caloric intake or other experiential factors (Fisher & Eugster, 2014).

To attend to evolutionary systems means to take humanity's deep history into account, not just recent civilizations. We need to shake the newish cultural dust off our feet and look farther back whence we came (Henley & Rossano, 2022; Henley et al., 2019). First, humanity's cooperation is rooted in nature's vast collaboration (Worster, 1994): every day, scientists are uncovering the expansive networks of cooperation that exist among different species in forests, waterways, soil, and human bodies through symbiosis and mutualism (e.g., Sheldrake, 2021; Simard, 2021). Competition plays a lesser role in comparison. Human groups evolved to take part in Nature's gift economy, as through a maternal gift economy that provides for the unequal needs of community members with no expectation of reciprocation (Vaughan, 2007, 2019; Widlok, 2017). Second, we note the significance of humanity's break 6–7 million years ago from the great ape (hominid) line to humanity's hominin line. Humanity's huge social brain and cooperative child raising coevolved, moving humanity away from ape-like dominance hierarchies to the egalitarian social structures with "un-apelike selflessness, a degree of hypersociality reflected in a concern for others, eagerness to share food and information with others, and cooperation in a wide array of contexts, even with nonrelatives and near-strangers" (Burkart et al., 2009, p. 175). This shift increased opportunities for social learning and teaching, mindreading, language, and cumulative cultural evolution (Power et al., 2017). In fact, Darwin (1871/1981) noted the "moral sense" as a fundamental characteristic of human nature (a combination of social pleasure and


social concern, empathy, and habit control; Narvaez, 2017), observing how it was more apparent in Native Peoples around the world than in his British compatriots. This is not a surprise when one understands how childhood experience influences moral personality and how British child raising was notoriously brutal and cold (deMause, 1995; Turnbull, 1984) whereas Native Peoples followed our evolved system for raising children (more later). Third, we attend to the fact that our bodies carry trillions of microorganisms that keep us alive (over 90 percent of the genes we carry are theirs; Dunn, 2011). We share nearly 99 percent of our DNA with bonobos and chimpanzees, as well as 50 percent with mushrooms, and 60 percent with bananas. We are not completely new earth creatures but have biological linkages to virtually everything on earth.2 We are embedded in a cooperative natural world; traditionally, the decomposition of our bodies moves into the next generations to form new life (hence, worries about genetic competition are highly overplayed). Fourth, we attend to the multiple inheritances we receive beyond genes,3 such as cell and body plans, epigenetic programming, developmental plasticity, basic needs and the developmental niche to meet them, self-organization, maternal ecology and microbiome, the local ecology, the moral sense, and culture (e.g., Darwin, 1871/ 1981; Jablonka & Lamb, 2006; West-Eberhard, 2003). A developmental evolutionary theory offers a broad view of evolution’s impact on who humans are, emphasizing the complexity of multiple inheritances, appropriate baselines for the dynamic nature of development and human plasticity, and the provision of our species’ developmental niche (Narvaez et al., 2022). When discussing the nature of human beings and their moral potential, we must understand what kind of organism we are, what influences our development, what qualities help us lead a full life, and what kinds of capacities make each a proper member of the species (Foot, 2001; Narvaez, 2021; Thompson, 1995). We need to establish some baselines instead of being pushed to and fro from some new isolated discovery or experiment.

17.2 Morally Relevant Questions about Our Species

To make judgments about human nature, we must examine our assumptions. And we must clarify the source of our assumptions.

2 I will leave aside how human beings differ from other animals. This is not a typical focus of most of humanity through time; rather, dedifferentiation of self from others along with polymorphism, no fixed identity of anything, was typical (e.g., Bram, 2002, 2018).

3 We often get distracted by information about genes and genetic evolution and start to think that genes make the person. Far from it. As traditional societies understood, it takes many years for a child to grow their humanity (Sahlins, 2008) and it does not happen from coercion but through support, specifically, the evolved nest (Narvaez et al., 2013). Genes have some influence but do not predict psychology and personality; they are inert without experience (Abdolmaleky et al., 2005).


17.2.1 What Kind of Creature Are We?

We are a subtribe (Hominina) over 6 million years old, a genus (Homo) over 2 million years old, with speciation to modern anatomy about 300,000 years ago. Only in the last 10,000 years or so have we moved away from what was adaptive for our ancestors: living at least part of the time in bands of 5–50 people (kin and nonkin), immediate return economies (few possessions or accumulation), egalitarian and peaceable, with extensive enjoyable social leisure (e.g., Boehm, 1999; Fry, 2006; Graeber & Wengrow, 2021; Lee & Daly, 2005; Sahlins, 1968). In these communities, members are both highly communal and highly autonomous (Gowdy, 1998; Ingold, 2005; Narvaez, 2013; Sorenson, 1998) with little tribalism (i.e., out-group suspicion; for reviews, see Eisler & Fry, 2019; Fry, 2006, 2013).4 The multiage, supportive lifestyle of the evolved nest likely contributed (see Sections 17.3 and 17.4).

17.2.2 What Qualities Do We Need to Live a Full Life?

For any animal, species-typical development is associated with healthy self-regulatory systems, from the immune system to the stress response (López-Otín & Kroemer, 2021), along with species-normal intelligence to find one's way in the world and in cooperation with conspecifics. Human fulfillment comes from social fittedness and a supportive community. Our species' original value orientation is relational – our brains are designed to be addicted to people (Panksepp, 1998). We evolved to value the fun and playfulness of the interpersonal dance that changes in every situation, which is apparent in our ancestral context of hunter-gatherer communities (e.g., Sorenson, 1998).

17.2.3 What Kinds of Capacities Make Each a Proper Member of the Species? Each species has a nature, a set of typical characteristics. Skillful self-regulation and skillful social cooperation are critical social mammalian adaptations over the course of evolution (Hrdy, 2009). Humans evolved to be highly social and interdependent with one another but also with the natural world, on which all species depend (Shepard, 1998). However, in this day and age we have let baselines slip for what we think is species-normal human nature and species-normal human development. It is hard to recognize how far we have fallen from optimization unless one examines societies that maintain a wellness orientation to child raising (more later).

4 Lest the reader think an evolutionary perspective falls into a “golden age fallacy” here, it does not. The comparisons of child raising and outcomes are made with contemporary groups from around the world – species-typical human beings who demonstrate a different nature from unnested groups: more holistically intelligent and cooperative. The differences in social capacities, behaviors, and attitudes are directly observable (see Narvaez, 2013; Topa & Narvaez, 2022).


17.2.4 What Influences Our Development? Every animal evolved a “nest” or system of development that supports the optimal development of the young, fostering its species-typical nature. We know, for example, that the species-typical nature of a puppy (kitten, monkey, any mammal) can be ruined if you take it away from its species-typical nest prematurely. Humans are no different, except for being much more influenced by experience because of vast immaturity at birth (25 percent of adult brain volume) with the longest maturational schedule (about three decades; Bethlehem et al., 2022). Humans are complex social mammals who resemble fetuses of other animals until at least 18 months of age (Montagu, 1968; Trevathan, 2011), with greater initial plasticity and more rapid brain development than found in related species (Gómez-Robles et al., 2015). Thus, the most critical influence on human development is our species’ evolved nest (aka evolved developmental niche, or EDN; Narvaez, 2014; Narvaez et al., 2013). Most components of the EDN have been around for over 70 million years (Weaver et al., 2021). Components of the EDN include soothing gestation and birth; extensive breastfeeding and affectionate touch (and no negative touch); a welcoming social climate of multiple stable, supportive, responsive caregivers; self-directed social play with playmates of multiple ages; nature immersion and connection; and routine healing practices that help the individual and community rebalance (Hewlett & Lamb, 2005; Young, 2019). Converging evidence from the sciences shows how important each component is for shaping the mind–psyche–behavior of individuals and communities (e.g., Narvaez, 2014, 2018; Narvaez et al., 2013). Childhood experience matters for psychosocial neurobiology and moral functioning. The impacts of nest components on moral development are briefly described in Section 17.3.

17.3 Human Nature and Moral Development Developmental neuroscience research is now demonstrating that child well-being is highly influenced by the quality of early life experiences (Garner et al., 2021; Hambrick et al., 2019; Shonkoff & Phillips, 2000). It is also becoming clear that well-being in early life influences moral development (Narvaez et al., 2021; Narvaez, Wang, & Cheng, 2016). Triune ethics metatheory (Narvaez, 2008, 2014, 2016) addresses how neurobiological development in early-life care constructs capacities for sociality and morality. Ideally, with evolved nest provision by the community, children develop well-regulated physiological, psychological, social, and emotional systems that undergird a flexible, relationally attuned, compassionate morality where abstracting capabilities are used to promote communal well-being.5

5 The belief that humans are highly exclusionary, that they cannot move beyond favoring their in-group, is part of the dominant worldview, which is based on “unnested” samples. Out-group distrust may be true where humans are raised harshly, where they learn to put up barriers against others for self-protective survival, when they have to develop a large ego because their evolved needs were not met early on, and when they must express their anger at parents indirectly by targeting an out-group. It is not humanity’s evolved heritage. In-group favoritism was not a characteristic of Indigenous peoples around the world at first contact. Fear of outsiders among Native Americans came after harsh experience with explorers and settlers – e.g., it took less than two weeks of Columbus’s first encounter with Caribbean Natives for them to go from extreme friendliness and generosity to extreme fearfulness and running away (Siepel, 2015).

In contrast today, most children are not provided the evolved nest, resulting in various forms of dysregulation, underdeveloped emotional, social, and moral skills, and an orientation to self-protectionism. How are virtue and well-being intimately linked with early childcare practices? Here are two examples. Various forms of self-control are regulated by different physiological systems. One such system increasingly studied is the functioning of the vagus nerve, the tenth cranial nerve, which innervates the major organs of the body. Its functioning is shaped by the quality of early life care, meaning that EDN-consistent care helps it grow properly to promote well-functioning immune, digestion, heart, respiration, and brain systems; but it also undergirds the social engagement system, allowing for intimacy and expressions of compassion (Eisenberg & Eggum, 2009; Porges, 2011; Tarsha & Narvaez, 2023). My laboratory’s work at the University of Notre Dame examines effects of early experience on vagus nerve function (vagal tone) – for example, the negative effect of women’s adverse childhood experiences on vagus nerve function is buffered by greater evolved nest childhood experiences (Tarsha & Narvaez, 2021). Another system influenced by early experience is the stress response. The stress response (e.g., fight–flight–freeze–faint) is trained up by prenatal and postnatal experience. When early life is toxically stressful (e.g., through routines of being left alone or left to cry), the stress response system develops a low threshold that is carried forward into the rest of life (Lupien et al., 2009). At the same time, extensive distress impairs normal development of sociality and other forms of self-regulation. When the stress response is activated, it shifts blood flow away from the brain and to the muscles for mobilization (Arnsten, 2009). Toxically stressed children are conditioned to activate the stress response easily from perceived threat, undermining growth, learning, and sociality. As a result, moral functioning is oriented to self-protectionism rather than relational attunement, snowballing into less social interaction and fewer opportunities for social skill building (Narvaez, 2014). In this case, then, early experience establishes the value of (and leaning toward) self-protectionist ethics – an orientation to survival through social domination or withdrawal. What often looks like an immoral personality is the shield of protectionism a child has had to develop to survive in an unsupportive environment (Niehoff, 1999). They clothe themselves in a biology of self-protection from immersion in social impoverishment. When social and emotional life is impoverished, so is the value of relationships. When life is unenriched, so is value. Instead, one develops a survival


system theory of value, reflecting which neurobiological structures were enhanced and which were left underdeveloped. Instead of growing the species’ evolved cooperative, self-controlled nature, one demonstrates threat reactivity, self-centeredness, unskillful social orientations, and susceptibility to addiction of one kind or another from unmet needs (Maté, 2010; Narvaez, 2014). All phenomena in the psychological realm emerge from biological properties (Kagan & Fox, 2006). The type of nature we develop emerges not only from our genetic history but also from our life history. Early life shapes bodies and systems, psyche and personality. The vast majority of learning occurs implicitly throughout life, that is, as the “nonintentional, automatic acquisition of knowledge about structural relations between objects or events” (Frensch, 1998), molding responses, habits, and dispositions. We can see from attachment and clinical research that personalities can misdevelop in various ways depending on which brain systems are damaged or neglected, when, and how the individual adapts (Schore, 2003a, 2003b). When babies do not get their needs met, they first rage for assistance as the sympathetic nervous system mobilizes to guard the baby’s life (Henry & Wang, 1998). A baby who regularly gets help only after raging may develop an angry personality (since that works for getting needs met). Or, if the baby is punished for raging or is not helped even when raging, the baby will despair, emotionally withdraw, and shut down in order to preserve energy and life. The baby who regularly reaches this stage may develop into a shy, withdrawn personality who easily shifts into numb dissociation. Babies who have inconsistent parents (sometimes intrusive, sometimes neglecting, mismatching with baby’s needs) may withdraw emotionally (impairing right brain development) and learn to intellectualize life – that is, be dismissive of vulnerability and soft emotions (Crittenden, 1995; Narvaez, 2014).

17.4 Child Raising as Central to Morality Ethical naturalism emerges from a transdisciplinary understanding of human development, starting “with the assumption that human moral agents are human animals whose values emerge in ongoing interactions with their physical, interpersonal, and cultural environments” (Johnson, 2014, p. 14). What the child experiences and practices is what the child becomes. Childhood shapes orientation: protectionism or openness, distrust or trust, a propensity to feel safe or unsafe (Carter & Porges, 2013; Erikson, 1950). In our studies, child well-being is associated with greater relational cooperation and ill-being with less (Narvaez et al., 2021); evolved nest provisioning fosters well-being and moral capacities (Narvaez, Woodbury et al., 2019). In other words, child raising might be considered central to scholarship in morality. Most famously, feminist theorist Virginia Held (1993) suggested just that: Child raising is best considered the center of moral activity and “should concern itself first of all with this activity, with what its norms and practices ought to be, and with how the institutions and arrangements through society


and the world ought to be structured to facilitate the right kinds of development of the best kinds of new persons” (p. 56). Other feminists also emphasize mothering and the maternal gift economy of providing for child needs (e.g., Pulcini, 2019; Vaughan, 2007). The flourishing of children comes about from meeting their basic needs, which forms the foundation for flourishing communities (Narvaez, 2014, 2018). By contrast, there are also feminist voices that dismiss the importance of early experience in shaping moral capacities. Some feminists, emphasizing work and career, are contemptuous of nurturing and instead are focused on controlling children and minimizing their needs (e.g., Chua, 2011; Oster, 2019). Such a misunderstanding corresponds to a simultaneous misunderstanding of the EDN. The EDN is community-provisioned, not the responsibility of one mother or the parents alone. Mothers need help feeding the big social brains of their children, which helps explain the existence of postmenopausal females, unusual among mammalian species except for some whales, who assist in provisioning children’s calorie-intensive needs (the “grandmother hypothesis”; Hawkes & Coxworth, 2013). In fact, as a result of culture and brain coevolution, cooperative caregiving fostered characteristics only humans have: a preference for egalitarianism, capacities to teach intentionally, systematized targeted helping, and declarative language and communication, along with cumulative cultural evolution (Burkart et al., 2009). Children grow capacities enabling flexible relations with multiple others (not just with mother), leading to a wide set of attachments that includes the natural world, and develop an implicit shared intentionality (the latter of which chimpanzees lack; Tomasello, 2019). Why do some still argue that we are more like chimpanzees than our own sharing, egalitarian ancestors (e.g., Wrangham & Peterson, 1996)? It is my contention that the move away from cooperative child raising and EDN provision has underdeveloped our species’ evolved nature, shifting brain functioning back to our primate mind, to our survival systems, to an emphasis on ape-like dominance and hoarding. What replaced species-typical moral development? Let’s examine, as an illustration, two forms of child moral development.

17.5 Two Varieties of Moral Development We can identify two different orientations to moral development, one emerging primarily from Western civilization and one more characteristic of First Nation societies around the world. A commonly held belief in Westernized societies is that children need to learn to suppress their own desires and impulses and learn respect by submitting to the authority of adults. Immanuel Kant (1724–1804) discussed two intertwined attitudes among Europeans that are still evident today among WEIRD (Western, educated, industrialized, rich, democratic; Henrich et al., 2010) populations. First, humans are persons because they display autonomy – the ability to act based on principle, not desire – through the imposition of law on themselves. This “rationality” gives humans special status over other animals: the capacity to act morally (from principle instead of


from desire), an autonomous morality. Second, in order to learn to follow law instead of desire, children must be coerced into obedience. Before they develop autonomy to act morally, children must practice “heteronomy,” submitting to rules imposed by adults. This prepares them for the self-discipline of autonomy, submitting to rules they choose for themselves. According to Kant and this view, only when you display autonomous morality are you a real person and have intrinsic value. With autonomous morality you are able to make appropriate laws that take into account the perspective of all persons, according to Kant’s categorical imperative (i.e., treating other people as persons rather than as instruments you use for your own goals). Philosopher John Watson (1847–1939) explained Kant’s perspective: “At first everyone is under apparent bondage to his superiors in the family relation, but in reality this is the means by which a measure of freedom is attained”; through obedience (and punishment to obey) the child learns “to free himself from an undue accentuation of his own individual desires” (Watson, 1988, pp. 37–38). Notably, this is contrary to what we know about child development today and leads instead to a withered self, dissociated from emotional awareness and presence, reactively conformist and even authoritarian (Milburn & Conrad, 2016; Narvaez, 2014). Studied for decades, corporal punishment (spanking) is considered an adverse childhood experience because it is linked to decreased mental health and increased antisocial behavior and aggression (e.g., Gershoff & Grogan-Kaylor, 2016). Nevertheless, the dominant moral psychological development theories in the twentieth century followed a similar understanding to Kant’s. They were cognitive-developmental and focused on moral judgment and reasoning (Kohlberg, 1981, 1984; Piaget, 1932/1965; Turiel, 1983). Piaget’s (1932/1965) heteronomous morality was seen to develop in children first, where rules are perceived to exist externally with some fear of immanent justice (automatic punishment for breaking rules). This type of morality of constraint aligned with Freud’s view of superego development from parental socialization and later identification, presumed necessary brakes for a civil society. Piaget’s second moral orientation was the more sophisticated autonomous morality, characterized by internalized rules with a sense that rules are contractual and subject to changes through mutual agreement of group members. Kohlberg (1981, 1984) expanded Piaget’s two orientations to a six-stage, staircase model. These theories stressed progressive construction of explicit verbalizable reasoning from social experience with particular attention to justice and fairness. Scores are highly correlated with Western schooling (Gielen & Markoulis, 1994). These theories must be understood as part of the Western civilized model of morality as conscious decision making, but whose examination shows a frequent chasm between judgment and action (Blasi, 1980). The emphasis on conscious, explicit reasoning is contrary to understandings of knowledge by contemporary cognitive science as an embodied (biopsychosocial capacities rooted in experience), embedded (situationally based), and enacted (effected action possibilities) know-how (Narvaez et al., 2022). Ethical know-how fits better with Indigenous knowledge systems (Topa & Narvaez, 2022; Varela, 1999).


A more organic development of morality is apparent among traditional peoples. One can observe an organic moral intelligence in adults that bears striking similarities across egalitarian Indigenous communities around the world: calm, generous, with high individual autonomy, high communalism and sharing, placefulness or at-home-ness in the landscapes in which the group migrates (Ingold, 2005; Lee & Daly, 2005; Narvaez, 2013).6 The Indigenous perspective on child raising contradicts that of Western civilization (Graeber & Wengrow, 2021; McPherson & Rabb, 2011). Rooted in the Indigenous worldview (Topa & Narvaez, 2022), the non-Western-Enlightenment view common around the world offers an alternative pathway for the development of moral intelligence. First, the notion of personhood is much more expansive. Kant’s conscious rationality is not an indicator of personhood. Rather, Earth is full of persons, only some of whom are human (Harvey, 2017). Human beings share personhood with all other Earth entities, including animals, plants, waterways, and mountains. Each has its own intelligence and agency, its own contribution to the harmony of the whole. Humans are to accept and celebrate diversity and coordinate peaceful coexistence with all beings through respectful attitudes and behavior. This is one of many contrasts in worldview between the dominant culture and the Indigenous perspective (Redfield, 1953, 1956; Topa & Narvaez, 2022). Second, children develop sociomoral intelligence by being welcomed into a community that respectfully meets their needs in childhood, when human nature is extensively shaped. The child grows up in and with a supportive, guileless environment (that continues throughout life). Describing his experience with the Fore hunter-gatherers of New Guinea, anthropologist E. Richard Sorenson (1998) noted: I was astonished to see the words of tiny children accepted at face value – and so acted on. Over months I tried to find at least one case where a child’s words were considered immature and therefore disregarded. No luck. I tried to explain the idea of lying and inexperience. They didn’t get my point. They didn’t expect prevarication, deception, grandstanding, or evasion. And I could find no cases where they understood these concepts. Even teenagers remained transparently forthright, their hearts opened wide for all to gaze inside. (p. 97)

6 Of course, physical life was challenging: fasting was frequent and there was a high mortality rate before age 15.

In First Nation/Indigenous communities around the world, where egalitarianism remains the norm, children do not subordinate their wills to the wills of others, but learn to shape them in prosocial ways, coordinating nonverbal impulses in a manner that enhances relational connection (Sorenson, 1998). First Nation/Indigenous societies typically cherish and honor children, notably supporting but not interfering with children’s development and growth. Children are considered humans-in-the-making with much to learn through their own decision making. Children are assumed to be guided by inner spirits such that coercive actions by others are likely to interfere with the internal,

wellness-oriented guidance. Children are often treated as reincarnated ancestors with their own agency (Sahlins, 2008). It is understood that if adults interfere with the development of the child by coercing them away from their impulses, then they may need guarding the rest of their lives – their self-confidence and inner compass for action having been damaged. Instead, children learn respectful behavior through immersion in a respectful, wellness-oriented community. Children learn culturally appropriate behaviors through stories, rituals, and imitation of community members. McPherson and Rabb (2011) describe how in Native communities elders speak indirectly, using stories as guidance for behavior instead of rules (e.g., Basso, 1996), what they call “interventive-noninterference” (p. 105). This, they write, is contrary to a Kantian approach. Instead, noninterference is a sign of respect for personhood. It fosters self-reliance and independent thinking rather than dependency on rules. Ordering, bossing, or criticizing others is inappropriate. Over the lifespan, individuals are surrounded by respectful role models – in story or real life. Individuals also take up one or more vision quests. Vision quests are essential for harmonizing self with cosmos, feeling a part of the Whole, part of the commonself. “With this comes the knowledge that willing the good of others is not in any sense a form of self-sacrifice given the enlarged sense of self acquired in the journey into non-ordinary reality” (McPherson & Rabb, 2011, p. 100). This form of attachment – ecological attachment – is typically absent among “civilized” peoples and may in part explain the ecologically devastating decisions and actions that the dominant culture has brought about (Narvaez, 2020a, 2020b). The disparity between the Kantian and the First Nation/Indigenous approaches to child raising can be seen in an account from The Jesuit Relations, which provides descriptions of the French missionaries’ experiences in the Americas. In 1633, Paul Le Jeune described an incident that occurred when an Algonquin man was curious about and approached a French boy beating a drum: As the Indian approached close to see him better, the little boy struck him a blow with one of his drumsticks and made his head bleed badly. Immediately all the people of his nation who were looking at the drummer took offense upon seeing this blow given. They went and found the French interpreter and said to him: “One of your people has wounded one of ours. You know our custom well; give us presents for this wound.” As there is no government among the Indians, when one among them kills or wounds another, he is (assuming he escapes immediate retaliation) released from all punishment by giving a few presents to the friends of the deceased or wounded one. Our interpreter said: “You know our custom: When any of our number does wrong, he is punished. This child has wounded one of your people, and so he shall be whipped at once in your presence.” The little boy was brought in, and when they saw that we were really in earnest, that we were stripping this little boy, pounder of Indians and of drums, and that our switches were all ready, they immediately asked that he be pardoned, arguing that he was only a child, that he had no mind, that he did not know what he was doing. As our people were going to punish him nevertheless, one of the Indians stripped himself entirely, threw his robe


over the child, and cried out to the man who was going to do the whipping: “Strike me if you will, but you will not strike him”; and thus the little one escaped. All the Indian nations of these parts – and those of Brazil, we are told – cannot punish a child, nor allow one to be chastised. How much trouble this will give us in carrying out our plans of teaching the young! (Greer, 2000, p. 36)

Native interlocutors were astonished that an adult would punish a child. The rashness of the French was among many characteristics that indicated to the Natives how immoral the French were. The French Jesuits were told that their observed immorality (e.g., always fighting and complaining) was due to their focus on property and money (Graeber & Wengrow, 2021), rather than on community well-being. Children are in fact the center of the Indigenous community (as is apparent among our more peaceful cousins, the bonobos; Hare & Yamamoto, 2017). According to the Indigenous worldview, with proper support children learn healthy community membership without coercion. In contrast, during and after colonization, Native children were forcibly schooled in residential schools where they were punished and abused, with the supposed aim of taking the “Indian” out of the child, intentionally breaking the circle of development and nurturing between children, elders, and community (Adams, 2020). The trauma Native children experienced, due in part to the European-imported view that children must obey adult whims or be punished, still clings to generations of First Nation peoples.

17.6 Moral Consequences of Unnestedness Early life shapes moral propensities because morality is embedded in brain/body systems, psyche, and personality. When physiologically optimal, early experience provides a sense of competence and security that forms the base of the self-in-the-world. The individual can be open to novelty, exploring new things without anxiety. When early life is socially optimal, the individual builds skills as an embedded community member, well connected to kin and neighbors, and capacious in getting along with others. When early experience is physically or socially suboptimal, such capacities are impaired. A self-protectionist orientation may become habitual and dominant, limiting one’s free will in the present (Henry & Wang, 1998; Mikulincer & Shaver, 2007). Although there may be some plasticity in one’s cognitive-affective orientation to the world after the initial groundwork is laid, flexibility may be minimal. The self-protectionist ethic is based largely in closed systems that are difficult to influence once they are conditioned in early childhood (e.g., stress reactivity), although with brain-wide rewiring, as in intense therapy or psychedelics, there can be revamping (e.g., Doidge, 2007). When children start out with experiences that undermine their species-typical becoming, their moral motivations too are shifted. They move away from favoring relational attunement (peaceful engagement), the predominant moral


orientation visible in societies that provide young children with what they evolved to need – small-band hunter-gatherers (Narvaez, 2013). Instead, with a break in the continuum of safety and comforting support (conveyed by caregiver absence, socially and physically), motivations become oriented away from social and communal commitment. Detachment from intimacy is practiced and, over time, preferred – an orientation that mainstream US culture now considers to be normal (Klinenberg, 2012). Toxically stressed early on so as to miss developing key foundations for sociality, the child automatically shifts to favoring social and moral self-protectionism (Gabel, 2018). Missing is the flexible and adept sociality that was central to human evolution (Burkart et al., 2009). Triune ethics metatheory (TEM; Narvaez, 2014) describes the etiology of both subjective and objective morality. Everyone has a subjective morality – aiming for what is perceived to be good in the moment, whether saint or criminal. The attitudes and behaviors of the self-protectionist ethic are rarely included as moral orientations in moral theories, except in ethical egoism or Rand’s objectivism (Weiss, 2012), and so the justifications for such behaviors by agents often are reinterpreted as outside of morality. But just because a psychobiosocial self is malformed or does not follow evolved expectations, it does not mean the person has no morality; it means they do not have our species’ optimal morality. In comparison to a well-formed self, a malformed self just sees the good differently. It does, however, mean that the species’ fullest moral capacities are not on full display. Although TEM acknowledges reasoning development generally with cognitive/brain maturation and experience, it also emphasizes how reasoning changes by context within shifting global brain states. One is always susceptible to motivated cognition where emotions and framing drive perception and interpretation (Jost et al., 2003), interacting with reasoning and neurobiological functioning primarily established in early life. When under threat, blood flow shifts away from higher-order thinking, simplifying reasoning to black-and-white, us-against-them thinking. The whole self is thrust into a different mindset, influencing perception, affordances, attractive rhetoric, and goals (Narvaez, 2010, 2014). In fact, many aspects of morality that concern philosophers are affected by such shifts in global mindsets, for example: free will, decision making, view of human nature, favored belief, egoism, emphasis on utilitarianism, habit formation, adoption of moral rules, motivations, and preferred virtues (Tomkins, 1965). And Hobbes’s list of human traits is upregulated by early undercare and trauma: self-seeking, appetitive, and competitive drives. Moreover, it can be argued that the seven deadly sins are promoted by unnested child raising.7

7 Undercare or lack of evolved nest provision can lead to (1) insatiability of certain needs that were not met at the scheduled time and their replacement with gluttony and/or (2) a sense of scarcity, leading to greed; (3) self-dissatisfaction and competitiveness (envy); (4) self-doubt and self-protectionism (pride); (5) enhanced basic mammalian emotion systems like lust or (6) rage; and (7) low energy and lack of self-regulation (sloth).


Most moral philosophers seem to focus on ego-consciousness and calculative intelligence (cleverness), often crediting these with human uniqueness. This contrasts with how 4E cognitive science now understands human functioning as embodied, embedded, extended, and enacted (Newen et al., 2018). Rationality as conscious decision making and ego satisfaction is a left-hemisphere driven focus, often disembodied, disembedded, and unenacted. According to TEM, embodied moral intelligence is often underdeveloped because of a misunderstanding of how children’s capacities develop in childhood (e.g., through responsive care and social play rather than through books). Children’s embeddedness in the world has been limited by walled-in life experience, impairing ecological intelligence. Extended intelligence has been routed to technological devices instead of to the rest of the natural world. Compared to those who live in earthcentric communities, we don’t have much knowledge to enact. Thus, the undercared-for mind fails to display our species-typical, integrated-brain or earthcentric moral intelligence.

17.7 First-Nature Desires: Broken or Fulfilled? First human nature is often discussed as our basic biology (in contrast to second human nature shaped by culture). However, we now understand that our biology is constructed by early life care. The nature of those early experiences is governed by the child-raising culture our adult caregivers adopt and by their own early life experiences, which they tend to repeat. Thus, our first nature is biosocially constructed. Our inherited biological human propensities are shaped by developmentally relevant experience which used to be universally provided within a narrow range of variation. Given that a child is not fully developed until around age 30, the power of experience and relationships is extensive. Unfortunately, civilization has developed the habit of impairing our first nature (Gabel, 2018; Maté & Maté, 2022). The result is dysregulation: physiologically – poorly performing systems (e.g., stress response, vagus nerve, immune system); psychologically – a diminished self, an inflated false ego (for self-protection), and misdirected motivations from basic need unfulfillment (Narvaez, 2014). The true self is covered up by habits taken up to alleviate distress from the broken continuum of support that brought about the broken first nature. Our caregivers shape not only our physiological and psychological functioning but our desires, again, based on their treatment of us. We learn to desire experiences that fit the biosociality that was co-constructed by our childhood experiences. Our capacities for social and moral life are shaped by what we ourselves experienced in practices with our caregivers. In our evolved context, we become addicted to relational attunement and mutual enhancement – our mammalian endogenous opioid systems are designed for this (Panksepp, 1998). Our desires orient to community harmony for which we develop many skills from our immersion in communities that have those skills. We can observe the


common relational characteristics seen in EDN-providing communities: social enjoyment, empathy, generosity, forgiveness, love (Ingold, 2005; Narvaez, 2013; Widlok, 2017). Desires are shaped differently in industrialized civilization. Children are forced to detach from their caregivers (e.g., through sleep training and many hours spent alone or untouched) and to instead attach to things, such as teddy bears or screens (Narvaez, 2014). If we grow up with neglect or violence, we will expect it and transfer it to others (Menakem, 2017). Desires can appear inherited because of intergenerational effects – treatment by their parents shaped this generation of parents. Community members shape many of a child’s desires, including acceptable desires. A boy might have been ridiculed when he expressed interest in fashion, for example, causing him to suppress those desires (Porter, 2021). When adolescence offers more freedom, individuals with their aching empty souls (missing the species-normal addiction to relational attunement) often take up measures to distract from the pain, to self-medicate, using nicotine, alcohol, drugs, sex, work – things that often become addictions and impair adult health, leading to disease, disability, and even early death (Felitti, 2010; Maté & Maté, 2022).

17.8 Conclusion The move away from organic moral development results from a set of shifted baselines pervasive in Western and Westernized societies (Narvaez, 2016, 2019, 2020a; Narvaez & Witherington, 2018). There is no single cause but a host of causes, named briefly here, for modern humanity’s poor moral showing (Doris, 2005) – for example, bottom-up causes such as trauma and undercare (degraded evolved nest) and their effects on capacities; and top-down causes (cultural stories, delusions about causes) and their products (e.g., traumatized parents passing on their trauma to their children). The shifts away from millions-years-old species-typical child raising, especially since industrialization, have led to impaired physiology for good health and neurobiological foundations for sociality and morality. These alterations have led to shifts in cultural assumptions about human capacities, human nature, and human potential, including moral potential. Whether the evolved nest is provided to a society’s children depends on the practical experience and ideology of the culture. Humans are innately prepared to be deeply social, to respond to social signals in gestures and faces, to reason about and predict social behavior in others. But these capacities must be honed in the post-birth world. Modern Western culture often stresses infants and young children “for their own good,” not realizing the long-term impairments that can ensue. As a result of these trends, the orientation in most Western minds is to consider morality to be only about human persons and human communities, largely conducted in the intellect or between conscious minds. Moreover, the dominant culture considers the natural environment something to overcome,


something to be humanized by what lies in the particular human imagination (Ingold, 2011). The dominant worldview not only reflects impaired human morality but has also contributed to the degradation of other-than-human life generally and has brought about the planetary crises we are facing today. But there is hope. Phronesis (practical wisdom) represents our ability to take charge of our being and our further becoming. Second-order desires, desires about our desires, can be changed and influence our initial, socially shaped desires. As a result of new learning and awareness, we can choose to adopt new second-order desires and thereby alter the desires that earlier experiences instilled. For example, we can learn self-calming, social enjoyment, and communal imagination (Narvaez, 2014). We can learn to love the natural world through immersive activities (Young et al., 2010). We can adopt the Indigenous worldview and learn to partner with Nature through the nestedness we provide children and ourselves (Topa & Narvaez, 2022).

References Abdolmaleky, H. M., Thiagalingam, S., & Wilcox, M. (2005). Genetics and epigenetics in major psychiatric disorders: Dilemmas, achievements, applications, and future scope. American Journal of Pharmacogenomics, 5(3), 149–160. Abram, D. (1996). Spell of the sensuous. Vintage Press. Adams, D. W. (2020). Education for extinction: American Indians and the boarding school experience, 1875–1928 (2nd ed.). University Press of Kansas. Arnsten, A. F. T. (2009). Stress signaling pathways that impair prefrontal cortex structure and function. Nature Reviews Neuroscience, 10(6), 410–422. Basso, K. (1996). Wisdom sits in places: Landscape and language among the western Apache. University of New Mexico Press. Bethlehem, R. A. I., Seidlitz, J., White, S. R., Vogel, J. W., Anderson, K. M., Adamson, C., Adler, S., Alexopoulos, G. S., Anagnostou, E., Areces-Gonzalez, A., Astle, D. E., Auyeung, B., Ayub, M., Bae, J., Ball, G., Baron-Cohen, S., Beare, R., Bedford, S. A., Benegal, V., . . . & Alexander-Bloch, A. F. (2022). Brain charts for the human lifespan. Nature, 604, 525–533. Blasi, A. (1980). Bridging moral cognition and moral action: A critical review of the literature. Psychological Bulletin, 88(1), 1–45. Boehm, C. (1999). Hierarchy in the forest: The evolution of egalitarian behavior. Harvard University Press. Bram, M. (2002). The recovery of the west: An essay in symbolic history. Xlibris. Bram, M. (2018). A history of humanity. Primus Books. Burkart, J. M., Hrdy, S. B., & Van Schaik, C. P. (2009). Cooperative breeding and human cognitive evolution. Evolutionary Anthropology, 18(5), 175–186. Carter, C. S., & Porges, S. W. (2013). Neurobiology and the evolution of mammalian social behavior. In D. Narvaez, J. Panksepp, A. Schore, & T. Gleason (Eds.), Evolution, early experience and human development (pp. 132–151). Oxford. Chua, A. (2011). Battle hymn of the tiger mother. Penguin.


Crittenden, P. M. (1995). Attachment and psychopathology. In S. Goldberg, R. Muir, & J. Kerr (Eds.), Attachment theory: Social, developmental, and clinical perspectives (pp. 367–406). The Analytic Press. Darwin, C. (1962). The origin of species. Collier Books. (Original work published 1859) Darwin, C. (1981). The descent of man. Princeton University Press. (Original work published 1871) deMause, L. (1995). The history of childhood. Psychohistory Press. Doidge, N. (2007). The brain that changes itself. Viking. Doris, J. (2005). Lack of character. Cambridge University Press. Dunn, R. (2011). The wild life of our bodies: Predators, parasites, and partners that shape who we are today. Harper. Eisenberg, N., & Eggum, N. D. (2009). Empathic responding: Sympathy and personal distress. In J. Decety & W. Ickes (Eds.), The social neuroscience of empathy (pp. 71–83). Boston Review. Eisler, R., & Fry, D. P. (2019). Nurturing our humanity. Oxford University Press. Erikson, E. H. (1950). Childhood and society. Norton. Felitti, V. (2010). Adverse childhood experiences and their relation to adult health and well-being: Turning gold into lead [Conference session]. Human Nature and Early Experience: Addressing the Environment of Evolutionary Adaptedness. University of Notre Dame. Felitti, V. J., & Anda, R. F. (2005). The Adverse Childhood Experiences (ACE) Study. Centers for Disease Control and Kaiser Permanente. Fisher, M. M., & Eugster, E. A. (2014). What is in our environment that effects puberty? Reproductive Toxicology, 44, 7–14. Foot, P. (2001). Natural goodness. Oxford University Press. Frensch, P. A. (1998). One concept, multiple meanings: On how to define the concept of implicit learning. In M. A. Stadler & P. A. Frensch (Eds.), Handbook of implicit learning (pp. 47–104). Sage Publications, Inc. Fry, D. P. (2006). The human potential for peace: An anthropological challenge to assumptions about war and violence. Oxford University Press. Fry, D. P. (Ed.). (2013). War, peace, and human nature: The convergence of evolutionary and cultural views. Oxford University Press. Gabel, P. (2018). The desire for mutual recognition. Routledge. Garner, A., Yogman, M., & Committee on Psychosocial Aspects of Child and Family Health, Section on Developmental and Behavioral Pediatrics, Council on Early Childhood. (2021). Preventing childhood toxic stress: Partnering with families and communities to promote relational health. Pediatrics, 148(2), Article e2021052582. Gershoff, E. T., & Grogan-Kaylor, A. (2016). Spanking and child outcomes: Old controversies and new meta-analyses. Journal of Family Psychology, 30(4), 453–469. Gielen, U. P., & Markoulis, D. C. (1994). Preference for principled moral reasoning: A developmental and cross-cultural perspective. In L. L. Adler & U. P. Gielen (Eds.), Cross-cultural topics in psychology (pp. 73–87). Praeger. Gómez-Robles, A., Hopkins, W. D., Schapiro, S. J., & Sherwood, C. C. (2015). Relaxed genetic control of cortical organization in human brains compared with chimpanzees. Proceedings of the National Academy of Sciences, 112(48), 14799–14804.


Gowdy, J. (1998). Limited wants, unlimited means: A reader on hunter-gatherer economics and the environment. Island Press. Graeber, D., & Wengrow, D. (2021). The dawn of everything: A new history of humanity. MacMillan. Greer, A. (Ed.). (2000). The Jesuit relations: Natives and missionaries in seventeenthcentury North America. Bedford/St. Martin’s. Hambrick, E. P., Brawner, T. W., Perry, B. D., Brandt, K., Hofmeister, C., & Collins, J. O. (2019). Beyond the ACE score: Examining relationships between timing of developmental adversity, relational health and developmental outcomes in children. Archives of Psychiatric Nursing, 33(3), 238–247. Hare, B., & Yamamoto, S. (2017). Bonobos: Unique in mind, brain and behavior. Oxford University Press. Harvey, G. (2017). Animism: Respecting the living world (2nd ed.). C. Hurst & Co. Hawkes, K., & Coxworth, J. E. (2013). Grandmothers and the evolution of human longevity: A review of findings and future directions. Evolutionary Anthropology, 22(6), 294–302. Held, V. (1993). Feminist morality: Transforming culture, society, and politics. University of Chicago Press. Henley, T., & Rossano, M. (Eds.). (2022). Psychology and cognitive archaeology: An interdisciplinary approach to the study of the human mind. Routledge. Henley, T., Rossano, M., & Kardas, E. (Eds.). (2019). Handbook of cognitive archaeology: A psychological framework. Routledge. Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. Henry, J. P., & Wang, S. (1998). Effects of early stress on adult affiliative behavior, Psychoneuroendocrinology, 23(8), 863–875. Hewlett, B. S., & Lamb, M. E. (2005). Hunter-gatherer childhoods: Evolutionary, developmental and cultural perspectives. Aldine Transaction. Hrdy, S. (2009). Mothers and others: The evolutionary origins of mutual understanding. Belknap Press. Ingold, T. (2005). On the social relations of the hunter-gatherer band. In R. B. Lee & R. Daly (Eds.), The Cambridge encyclopedia of hunters and gatherers (pp. 399–410). Cambridge University Press. Ingold, T. (2011). The perception of the environment: Essays on livelihood, dwelling and skill. Routledge. Ingold, T. (2013). Prospect. In T. Ingold & G. Palsson (Eds.), Biosocial becomings: Integrating social and biological anthropology (pp. 1–21). Cambridge University Press. Jablonka, E., & Lamb, M. J. (2006). Evolution in four dimensions: Genetic, epigenetic, behavioral, and symbolic variation in the history of life. MIT Press. Johnson, M. (2014). Morality for humans: Ethical understanding from the perspective of cognitive science. University of Chicago Press. Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. (2003). Political conservatism as motivated social cognition. Psychological Bulletin, 129(3), 339–375. Kagan, J., & Fox, N. A. (2006). Biology, culture, and temperamental biases. In W. Damon & R. M. Lerner (Series Eds.) & N. Eisenberg (Vol. Ed.), Handbook of child psychology (Vol. 3, pp. 167–225). Wiley.


Klinenberg, E. (2012). Going solo: The extraordinary rise and surprising appeal of living alone. Penguin. Kohlberg, L. (1981). The philosophy of moral development: Essays on moral development (Vol. 1). Harper & Row. Kohlberg, L. (1984). The psychology of moral development: Essays on moral development (Vol. 2). Harper & Row. Krebs, D. L. (2005). The evolution of morality. In D. Buss (Ed.), Evolutionary psychology handbook (pp. 747–774). John Wiley & Sons. Lee, R. B., & Daly, R. (Eds.). (2005). The Cambridge encyclopedia of hunters and gatherers. Cambridge University Press. Lodge, R. C. (1944). Balanced philosophy and eclecticism. Journal of Philosophy, 41(4), 85–91. Lopez-Otin, C., & Kroemer, G. (2021). Hallmarks of health. Cell, 184(1), 33–63. Lupien, S. J., McEwen, B. S., Gunnar, M. R., & Heim, C. (2009). Effects of stress throughout the lifespan on the brain, behaviour and cognition. Nature Reviews Neuroscience, 10(6), 434–445. Luria, A. R. (1976). Cognitive development: Its cultural and social foundations (M. Lopez Morillas & L. Solataroff, Trans.). Harvard University Press. Maté, G. (2010). In the realm of hungry ghosts. North Atlantic Books. Maté, G., & Maté, D. (2022). The myth of normal: Trauma, illness, and healing in a toxic culture. Avery. McPherson, D. H., & Rabb, J. D. (2011). Indian from the inside: Native American philosophy and cultural renewal (2nd ed.). MacFarland & Co. Menakem, R. (2017). My grandmother’s hands: Racialized trauma and the pathway to mending our hearts and bodies. Central Recovery Press. Midgely, M. (2010). The solitary self: Darwin and the selfish gene. Acumen. Mikulincer, M., & Shaver, P. R. (2007). Attachment in adulthood: Structure, dynamics, and change. The Guilford Press. Milburn, M. A., & Conrad, S. D. (2016). Raised to rage: The politics of anger and the roots of authoritarianism. MIT Press. Montagu, A. (1968). Brains, genes, culture, immaturity, and gestation. In A. Montagu (Ed.), Culture: Man’s adaptive dimension (pp. 102–113). Oxford. Narvaez, D. (2008). Triune ethics: The neurobiological roots of our multiple moralities. New Ideas in Psychology, 26(1), 95–119. Narvaez, D. (2010). Moral complexity: The fatal attraction of truthiness and the importance of mature moral functioning. Perspectives on Psychological Science, 5(2), 163–181. Narvaez, D. (2013). The 99% – Development and socialization within an evolutionary context: Growing up to become “A good and useful human being.” In D. Fry (Ed.), War, peace and human nature: The convergence of evolutionary and cultural views (pp. 643–672). Oxford University Press. Narvaez, D. (2014). Neurobiology and the development of human morality: Evolution, culture and wisdom. W.W. Norton. Narvaez, D. (2016). Baselines for virtue. In J. Annas, D. Narvaez, & N. Snow (Eds.), Developing the virtues: Integrating perspectives (pp. 14–33). Oxford University Press. Narvaez, D. (2017). Are we losing it? Darwin’s moral sense and the importance of early experience. In R. Joyce (Ed.), Routledge handbook of evolution and philosophy (pp. 322–332). Routledge.


Narvaez, D. (Ed.). (2018). Basic needs, wellbeing and morality: Fulfilling human potential. Palgrave Macmillan. Narvaez, D. (2019). In search of baselines: Why psychology needs cognitive archaeology. In T. Henley, M. Rossano & E. Kardas (Eds.), Handbook of cognitive archaeology: A psychological framework (pp. 104–119). Routledge. Narvaez, D. (2020a). Ecocentrism: Resetting baselines for virtue development. Ethical Theory and Moral Practice, 23(3), 391–406. Narvaez, D. (2020b). Moral education in a time of human ecological devastation. Journal of Moral Education, 50(1), 55–67. Narvaez, D. (2021). Species-typical phronesis for a living planet. In M. De Caro & M. S. Vaccarezza (Eds.), Practical wisdom: Philosophical and psychological perspectives (pp. 160–180). Routledge. Narvaez, D., Four Arrows, Halton, E., Collier, B., & Enderle, G. (Eds.). (2019). Indigenous sustainable wisdom: First Nation know-how for global flourishing. Peter Lang. Narvaez, D., Gettler, L., Braungart-Rieker, J., Miller Graff, L., & Hastings, P. (2016). The flourishing of young children: Evolutionary baselines. In D. Narvaez, J. Braungart-Rieker, L. Miller, L. Gettler, & P. Hastings (Eds.), Contexts for young child flourishing: Evolution, family and society (pp. 3–27). Oxford University Press. Narvaez, D., Gleason, T., Tarsha, M., Woodbury, R., Cheng, A., & Wang, L. (2021). Sociomoral temperament: A mediator between wellbeing and social outcomes in young children. Frontiers in Psychology, 12, Article 5111. Narvaez, D., Moore, D. S., Witherington, D. C., Vandiver, T. I., & Lickliter, R. (2022). Evolving evolutionary psychology. American Psychologist, 77(3), 424–438. Narvaez, D., Panksepp, J., Schore, A., & Gleason, T. (Eds.). (2013). Evolution, early experience, and human development: From research to practice and policy. Oxford University Press. Narvaez, D., Wang, L, & Cheng, A. (2016). Evolved developmental niche history: Relation to adult psychopathology and morality. Applied Developmental Science, 20(4), 294–309. Narvaez, D., & Witherington, D. (2018). Getting to baselines for human nature, development and wellbeing. Archives of Scientific Psychology, 6(1), 205–213. Narvaez, D., Woodbury, R., Gleason, T., Kurth, A., Cheng, A., Wang, L., Deng, L., Gutzwiller-Helfenfinger, E., Christen, M., & Näpflin, C. (2019). Evolved development niche provision: Moral socialization, social maladaptation and social thriving in three countries. Sage Open, 9(2). Newen, A., De Bruin, L., & Gallagher, S. (Eds.). (2018). The Oxford handbook of 4e cognition. Oxford University Press. Niehoff, D. (1999). The biology of violence: How understanding the brain, behavior, and environment can break the vicious circle of aggression. Free Press. Ong, W. (2002). Orality and literacy. Routledge. Oster, E. (2019). Cribsheet: A data-driven guide to better, more relaxed parenting, from birth to preschool. Penguin Press. Panksepp, J. (1998). Affective neuroscience: The foundations of human and animal emotions. Oxford University Press. Piaget, J. (1965). The moral judgment of the child (M. Gabain, Trans.). Free Press. (Originally published in 1932)


Porges, S. (2011). Polyvagal theory. Norton. Porter, B. (2021). Unprotected: A memoir. Abrams. Power, C., Finnegan, M., & Callan, H. (2017). Introduction. In C. Power, M. Finnegan, & H. Callan (Eds.), Human origins: Contributions from social anthropology (pp. 1–34). Berghahn. Pulcini, E. (2019). Is care a gift? In G. Vaughan (Ed.), The maternal roots of the gift economy (pp. 78–93). Ianna Publications. Redfield, R. (1953). The primitive world and its transformations. Cornell University Press. Redfield, R. (1956). Peasant society and culture: An anthropological approach to civilization. University of Chicago Press. Rest, J. (1983). Morality. In J. Flavell & E. Markham (Eds.), Cognitive development, from P. Mussen (Ed.), Manual of child psychology, Vol. 3 (pp. 556–629). Wiley. Sahlins, M. (1968). Notes on the original affluent society. In R. B. Lee & I. DeVore (Eds.), Man the hunter (pp. 85–89). Aldine Publishing Company. Sahlins, M. (2008). The Western illusion of human nature. Prickly Paradigm Press. Schore, A. N. (2003a). Affect dysregulation and disorders of the self. Norton. Schore, A. N. (2003b). Affect regulation and the repair of the self. Norton. Sheldrake, M. (2021). Entangled life: How fungi make our worlds, change our minds & shape our futures. Random House. Shepard, P. (1998). Coming home to the Pleistocene (F. R. Shepard, Ed.). Island Press/ Shearwater Books. Shonkoff, J. P., & Phillips, D. A. (2000). From neurons to neighborhoods: The science of early childhood development. National Academy Press. Siepel, K. H. (2015). Conquistador voices: The Spanish conquest of the Americas as recounted largely by the participants: Christopher Columbus, Hernán Cortés. Spruce Tree. Simard, S. (2021). Finding the Mother Tree: Discovering how the forest is wired for intelligence and healing. Knopf. Slingerland, E. G. (2014). Trying not to try: Ancient China, modern science, and the power of spontaneity. Broadway Books. Sorenson, E. R. (1998). Preconquest consciousness. In H. Wautischer (Ed.), Tribal epistemologies (pp. 79–115). Ashgate. Tarsha, M. S., & Narvaez, D. (2021). Effects of adverse childhood experience on physiological regulation are moderated by evolved developmental niche history. Anxiety, Stress & Coping, 35(4), 488–500. Tarsha, M. S., & Narvaez, D. (2023). The developmental neurobiology of moral mindsets: Basic needs and childhood experience. In M. Berg & E. Chang (Eds.), Motivation & morality: A biopsychosocial approach (pp. 187–204). APA Books. Thompson, M. (1995). The representation of life. In R. Hursthouse, G. Lawrence, & W. Quinn (Eds.), Virtues and reasons (pp. 247–296). Clarendon Press. Tomasello, M. (2019). Becoming human: A theory of ontogeny. Harvard University Press. Tomkins, S. (1965). Affect and the psychology of knowledge. In S. S. Tomkins & C. E. Izard (Eds.), Affect, cognition, and personality (pp. 72–97). Springer. Topa, W., & Narvaez, D. (2022). Restoring the kinship worldview: Indigenous voices introduce 28 precepts for rebalancing life on planet earth. North Atlantic Books.


Trevathan, W. R. (2011). Human birth: An evolutionary perspective (2nd ed.). Aldine de Gruyter. Turiel, E. (1983). The development of social knowledge: Morality and convention. Cambridge University Press. Turnbull, C. M. (1984). The human cycle. Simon & Schuster. Varela, F. (1999). Ethical know-how: Action, wisdom, and cognition. Stanford University Press. Vaughan, G. (2007). Introduction: A radically different worldview is possible. In G. Vaughan (Ed.), Women and the gift economy (pp. 1–40). Ianna Publications. Vaughan, G. (Ed.). (2019). The maternal roots of the gift economy. Ianna Publications. Watson, J. (1988). The relation of philosophy to science. In J. D. Rabb (Ed.), Religion and science in early Canada (pp. 18–39). Ronald P. Frye. Weaver, L. N., Varricchio, D. J., Sargis, E. J., Chen, M., Freimuth, W. J., & Wilson Mantilla, G. P. (2021). Early mammalian social behaviour revealed by multituberculates from a dinosaur nesting site. Nature Ecology & Evolution, 5(1), 32–37. Weiss, G. (2012). Ayn Rand nation: The hidden struggle for America’s soul. Macmillan. West-Eberhard, M. J. (2003). Developmental plasticity and evolution. Oxford University Press. Widlok, T. (2017). Anthropology and the economy of sharing. Routledge. Wolynn, M. (2016). It didn’t start with you: How inherited family trauma shapes who we are and how to end the cycle. Penguin. Worster, D. (1994). Nature’s economy: A history of ecological ideas (2nd ed.). Cambridge University Press. Wrangham, R. W., & Peterson, D. (1996). Demonic males: Apes and the origins of human violence. Houghton, Mifflin and Company. Young, J. (2019). Connection modeling: Metrics for deep nature-connection, mentoring, and culture repair. In D. Narvaez, Four Arrows, E. Halton, B. Collier, & G. Enderle (Eds.), Indigenous sustainable wisdom: First Nation knowhow for global flourishing (pp. 219–243). Peter Lang. Young, J., Haas, E., & McGown, E. (2010). Coyote’s guide to connecting with nature (2nd ed.). Owlink Media.

18 Moral Babies? Evidence for Core Moral Responses in Infants and Toddlers

Kiley Hamlin and Francis Yuen

18.1 Theories of the Origins of Morality

Humans' capacity for moral judgment has been of great interest to both philosophers and scientists. Although people living in different cultures vary somewhat in their judgments of exactly which subset of moral actions are "right" or "wrong," there is nevertheless cross-cultural consensus that some actions are more permissible and morally good than others (Brown, 1991; Curry et al., 2019). Thus, a moral sense – the capacity to evaluate certain actions and individuals as good, right, and deserving of reward, and others as bad, wrong, and deserving of punishment – appears to be universally present among humans. But where does this moral sense come from?

Inquiries into the origins of humans' moral sense have typically taken two distinct, though clearly not mutually exclusive, paths. Some, particularly within developmental psychology, have focused on its ontogenetic origins: When and how does this moral sense emerge within individual lifespans, and how does it change over development? These ontogenetic perspectives, though themselves quite diverse, hold that humans' moral sense and moral behavior emerge and develop throughout early childhood via a combination of exposure to social experiences, guidance from caregivers and peers, and improvements in cognitive capacities such as reduction of egocentrism and increase in emotion regulation (Bandura, 1977; Damon, 1977; Eisenberg, 1986; Grusec et al., 2014; Kohlberg, 1969; Piaget, 1932; Turiel, 1983; for more recent reviews see Killen & Smetana, 2014). These perspectives differ as to when they consider genuinely moral capacities to first emerge (e.g., early versus middle to late childhood) but are generally unified in perceiving human infants as lacking both sufficient experience and sufficiently developed cognitive capacities for morality.

By contrast, some have focused on morality's phylogenetic origins (for reviews, see Henrich & Henrich, 2007; Joyce, 2007; Katz, 2000; Nowak, 2006). These theories view the moral sense as emerging over humans' evolutionary history due to its utility for maintaining cooperative systems. Specifically, the evolutionary perspective on morality holds that humans reap significant rewards from cooperating with others, as groups of cooperators can achieve goals that would be impossible for any individual to achieve on their own. At the same time, cooperative groups require significant contribution from their members to maintain their functionality. Thus, if cooperative systems are to

persist, the benefits of group work must outweigh the costs incurred by the individual cooperative members, something that requires each member of the team to contribute to the best of their ability. As a result, cooperative systems are highly susceptible to “cheaters”: those who reap the rewards of the group effort without reciprocating. Since cheaters incur little to no cost and nevertheless enjoy the same benefits as cooperators, over time they will necessarily beat out those who cooperate, thereby destroying the cooperative system. Evolutionary theorists hypothesize that large-scale cooperation persists despite this fatal vulnerability because humans developed, alongside the tendency to cooperate with others, means of consistently avoiding cheaters. For example, groups could establish sets of rules and expectations to guide permissible and impermissible behavior (e.g., helping group members is good; taking advantage of group members is bad). Further, individuals could identify and negatively evaluate noncooperators and exclude them from future cooperative efforts, and as a result selectively cooperate with cooperative others. Finally, groups and individuals could punish noncooperators, increasing the costs of defection. These tendencies to establish and abide by social rules, to notice and to exclude those who are noncompliant, and to punish those who defect may have emerged over the course of human history to keep defection rates in large-scale cooperative societies low and mitigate the risks associated with group living. Although cultures could establish and enforce cooperative norms solely at the societal level, evolutionary theories of morality posit that the presence of a moral sense within individuals facilitates these processes by ensuring, for instance, that actions that help versus harm the group as a whole are consistently and objectively evaluated positively versus negatively by each of the group’s members. If so, morality itself may have been subjected to evolutionary pressure, resulting in humans being endowed with some foundational moral sense – capacities for understanding and evaluating morally relevant acts in ways predicted by evolutionary theories of cooperation – that arises independently from particular experience. To differentiate it from a mature and fully articulated moral sense, we refer to this foundational moral sense as the moral core; the core is then greatly expanded upon and even remodeled as individuals interact with their particular cultures over the course of development. In the sections to follow, we will identify various aspects of the proposed moral core and review evidence in support of its existence.
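The logic just described, in which cooperation collapses when cheaters go unchecked but punishment of defectors keeps it viable, can be made concrete with a toy simulation. The sketch below is a minimal, hypothetical illustration rather than any model cited in this chapter; the payoff values, the punishment fine, and the replicator-style update rule are all invented for the example.

```python
# Toy illustration (not from the chapter): why unchecked "cheaters" undermine
# cooperation, and how punishing defectors can change that. All parameters
# (benefit, cost, fine) are invented for the sketch.
import numpy as np

def step(p_coop, benefit=3.0, cost=1.0, fine=0.0, punish_cost=0.0):
    """One generation of replicator-style dynamics in a public-goods setting.

    p_coop: current share of cooperators in the population.
    Cooperators pay `cost` to produce `benefit`, which everyone shares;
    defectors pay nothing. If `fine` > 0, defectors are punished, and
    cooperators pay `punish_cost` per defector to administer the punishment.
    """
    shared = benefit * p_coop               # public good everyone enjoys
    payoff_coop = shared - cost - punish_cost * (1 - p_coop)
    payoff_defect = shared - fine * p_coop  # punishment scales with punishers
    mean = p_coop * payoff_coop + (1 - p_coop) * payoff_defect
    # Strategies grow in proportion to how their payoff compares to the mean.
    return np.clip(p_coop + 0.1 * p_coop * (payoff_coop - mean), 0.0, 1.0)

for label, fine in [("no punishment", 0.0), ("punishment (fine = 2.5)", 2.5)]:
    p = 0.9                                  # start with 90% cooperators
    for _ in range(200):
        p = step(p, fine=fine, punish_cost=0.1 if fine else 0.0)
    print(f"{label}: cooperator share after 200 generations ≈ {p:.2f}")
```

Under these invented parameters, defectors steadily displace cooperators when punishment is absent, whereas a sufficiently costly fine keeps cooperation stable, mirroring the evolutionary argument sketched above.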

18.2 Examining the Moral Core

How could we know if humans possess a moral core? In the past two decades, researchers have increasingly explored whether any morally relevant capacities exist in human infants, who have had considerably less opportunity to learn what is right and wrong through first-hand or observational experience or explicit teaching; they also lack many of the advanced cognitive capacities hypothesized to underlie moral judgments in adults. Thus, if infants

nevertheless demonstrate tendencies to understand and evaluate morally relevant acts, this would provide positive evidence for the existence of a moral core (Bloom, 2013; Hamlin, 2013a; Premack & Premack, 1994; Wynn & Bloom, 2013). Of course, early emergence need not be synonymous with evolutionarily based processes; for instance, many adaptive processes do not emerge until much later in life (e.g., sexual interest). That said, first, many capacities that facilitate adaptive behavior in older humans do indeed emerge early in life and apparently in the absence of relevant experience (see, e.g., Spelke & Kinzler, 2007). Second, arguably capacities for understanding and evaluating morally relevant acts actually would benefit very young humans, who are unique within the animal kingdom in their likelihood of being frequently cared for by nonkin from immediately after birth (Hrdy, 2011). Probing the existence of a moral core in infants, however, presents a number of challenges. Children’s moral sense has typically been examined by presenting children with morally relevant scenarios, acts, and characters and then asking children to verbalize their judgments and the reasoning behind them (e.g., Kohlberg, 1969; Piaget, 1932; Turiel, 1983). Although these methods have been exceptionally useful in exploring developmental changes in moral reasoning across childhood, they are clearly ill-suited for preverbal infants. In order to tap into the moral minds of a preverbal population, then, researchers need to develop tools that capitalize on responses that infants can produce, which can require neither understanding nor producing language. To that end, research into preverbal infants’ responses to the moral world has utilized methods in which infants are presented with nonverbal depictions of morally relevant interactions between individuals, or “morality plays” (Hamlin, 2013a). These plays sometimes involve human actors but are more commonly performed by infant-friendly puppets or animated characters, both because infants enjoy watching puppet interactions and because the use of puppets allows for maximum control over potential confounding factors relative to using humans (e.g., group membership, facial expressions; see Kominsky et al., 2022). Infants’ responses to these plays are then evaluated in various ways. First, researchers can probe infants’ expectations about how people typically act within the sociomoral world with violation-of-expectation (VoE) paradigms. This paradigm builds on the assumption that infants often look longer to events they find surprising (Aslin, 2007; Fantz, 1964; see also Kidd et al., 2012). VoE studies predict that if infants have some understanding of morally relevant events, they will tend to look longer at events that violate their understanding than at those that do not. Second, researchers can assess infants’ evaluation of agents who have performed morally relevant actions with preferential looking and reaching paradigms. Here, infants are shown two characters at the same time and are predicted to look longer at and/or reach for whichever they prefer. In what follows, we review existing literature using these methodologies, which provides evidence that infants appear to both hold expectations for how agents are likely to behave in morally relevant situations and appropriately evaluate those who perform actions that adults typically consider morally good versus bad. Our

discussion is primarily centered on two moral domains: help versus harm and fairness versus unfairness. These domains are both foundational to diverse theories of human moral development (e.g., Gray et al., 2012; Haidt & Graham, 2007; Haidt et al., 1993; Kohlberg, 1969; Turiel, 1983) and have been studied most extensively to date. Where appropriate, we discuss ways in which other morally relevant domains (e.g., group membership) interact with harm and fairness, as well as what is known about how these domains interact with each other.

Of course, it is difficult, if not impossible, to determine whether infants' behavior in these studies really rises to the level of a moral core or instead should be considered reflective of capacities to understand and evaluate the social world more broadly. Indeed, moral responses in adults hold normative sway; that is, adults not only dislike immoral acts and those who perform them but also think immoral agents ought not to have acted as they did because certain actions are wrong. Given infants' other limitations, whether they also think in normative terms may never be effectively demonstrable. For instance, while acts of protest are often seen as paradigmatic instances of normativity (Casler et al., 2009; Josephs et al., 2016; Rakoczy et al., 2008), preverbal infants cannot protest. Considering these ambiguities, this chapter intends to demonstrate that the behaviors observed in existing infant paradigms could be interpreted as moral, insofar as they apply to the same actions and agents that morally sensible adults characterize as moral. Where applicable, we will detail what a (merely) social versus a moral interpretation of a pattern of data might look like, and what other evidence might help to adjudicate between the possibilities (for further discussion of the moral versus social distinction, see Tafreshi et al., 2014). That said, we will not convincingly demonstrate that infants are engaging in honest-to-goodness moral thinking, both because we are skeptical that such evidence could be found even if infants were doing so and because the evolutionary theories of morality that we draw from are themselves based in humans having evolved basic capacities to identify and positively evaluate good cooperators and negatively evaluate bad ones. These capacities may give rise to, but do not themselves consist of, genuinely moral thought. Thus, our aim is to illustrate the evidence for the existence of foundational or "core" capacities upon which honest-to-goodness moral thinking builds. In the concluding sections, we discuss what is known about how this moral core might develop into a full-fledged moral sense, challenges that current moral development research faces, and outstanding questions for future research.
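To make the looking-time logic described in this section concrete, the sketch below shows one minimal way a violation-of-expectation (VoE) contrast might be analyzed. It is a hypothetical illustration rather than an analysis from any study reviewed here: the looking times, sample size, and the simple paired comparison are all invented for the example.

```python
# Hypothetical illustration (not from the chapter): a minimal analysis of
# violation-of-expectation (VoE) looking-time data. The values below are
# invented for the sketch.
import numpy as np
from scipy import stats

# Each infant contributes one looking time (seconds) for a consistent
# (expected) test event and one for an inconsistent (unexpected) event.
consistent = np.array([8.2, 6.5, 9.1, 7.4, 5.8, 10.0, 7.9, 6.3])
inconsistent = np.array([11.4, 9.0, 12.3, 8.8, 7.5, 13.1, 10.2, 8.9])

# VoE logic: longer looking at the inconsistent event is taken as evidence
# that the event violated the infant's expectation.
t, p = stats.ttest_rel(inconsistent, consistent)
print(f"Mean difference = {np.mean(inconsistent - consistent):.2f} s, "
      f"t({len(consistent) - 1}) = {t:.2f}, p = {p:.3f}")
```

In practice, looking times are often log-transformed and modeled across multiple test trials with mixed-effects approaches; the paired comparison above is only the simplest possible case.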

18.3 Investigating Help versus Harm

18.3.1 Infants Differentiate between Helping and Harming

To our knowledge, the first examination of infants' understanding of acts within the harm domain was performed by Premack and Premack (1997), who

investigated whether infants understand and differentiate between positively and negatively valenced social interactions. Specifically, 12-month-olds watched two cartoon circles engaging in an interaction that was either positive (one circle helping the other by either caressing it or helping it achieve a goal) or negative (one circle harmed the other by either hitting it or hindering its attempt to achieve a goal). Critically, interactions of opposing valence were designed to be more physically similar than were interactions of the same valence (for example, helping was more similar to hindering than to caressing), so that infants could attend either to the valence of the interactions or to their physical similarities. After the infants were habituated to one of these interactions, they were shown a different interaction (hitting) that was inconsistent with the originally shown action either in valence (e.g., infants who saw caressing saw hitting) or inconsistent physical properties (e.g., infants who saw hindering saw hitting). The authors reasoned that infants would look longer at whichever type of inconsistency was more meaningful to them. Looking-time analyses revealed that infants who saw valence-inconsistent acts looked significantly longer than did infants who saw physically inconsistent acts, suggesting that infants categorized the interactions they viewed during habituation in terms of their valence rather than their physical properties. These results were interpreted as evidence that infants are sensitive to some morally relevant interactions by the end of their first year and that they extracted valence information from the observed interactions. Section 18.3.2 will discuss additional research exploring different expectations that infants may hold about helping and hindering.

18.3.2 Do Infants Hold General Expectations about Helping and Harming?

Adults uphold rules and standards for how people tend to interact socially with others, which raises the question: Do infants also hold baseline expectations about how individuals tend to behave and treat each other? Do they, for example, generally expect helpfulness? A number of studies have provided insight into this question (e.g., Hamlin, 2013b, 2014; Jin & Baillargeon, 2017; Lee et al., 2015; Premack & Premack, 1997) by presenting infants with puppet shows or videos featuring helping and hindering acts and comparing infants' attention following each. Across studies, results have consistently shown that infants looked about equally following both acts, suggesting that infants do not hold baseline expectations that others will help. Do these results suggest that infants hold no expectations about how individuals are likely to behave and interact? Not necessarily. Notably, the studies just described have consistently included entirely unknown characters, providing no information about either the relationships among the characters or the characters themselves (their past behaviors, their needs, etc.). In Section 18.3.2.1, we review work that has examined whether these factors influence infants' expectations.

18.3.2.1 Contextual Influences on Infants' Expectations

Adults' assessments of moral acts are often informed by contextual factors, such as an individual's need: Helping an elderly person cross the street is likely to be judged as more morally praiseworthy than helping an able-bodied adult. Research suggests that infants may also consider contextual information when forming expectations for how people are likely to direct their helping behavior. For example, Köster and colleagues (2016) showed that 9- to 18-month-olds expected a third-party helper to assist a character whose goal was obstructed over a character who could attain their goal without assistance, suggesting infants' helping expectations take into consideration target individuals' neediness. Notably, although a wide age range was used for this study, the effects were consistent across age. Similarly, a recent study found that when presented with a scenario where an infant is in distress, 4-month-old infants were surprised (i.e., looked longer) when an adult ignored rather than assisted the infant (Jin et al., 2018). This difference in looking was not due to infants being surprised by a mere lack of action by the adult: When the infant was laughing instead of crying, infants did not show different looking patterns between an approaching or ignoring adult. This suggests that infants' expectations were based on the neediness of the crying baby. Together, these studies provide evidence that infants as young as 4 months old expect helping behaviors to be directed toward those in need.

In addition to individuals' neediness, infants' helping expectations also seem to consider the group memberships of both helpers and recipients of help. For instance, Jin and Baillargeon (2017) examined baseline expectations for helping in 17-month-olds by showing scenarios in which an adult either ignored or gave an out-of-reach object to another actor. As in previous work (Hamlin, 2013b, 2014; Lee et al., 2015; Premack & Premack, 1997), infants' looking times suggested that they held no general expectations that unknown others would help each other. They then examined whether group membership would influence infants' expectations by having helpers and recipients either declare that they were in the same group ("I'm a bem!" "I'm a bem too!") or in different groups ("I'm a bem!" "I'm a tig!"). Infants' attention suggested that they expected someone to help an in-group member but held no expectations for how an adult would treat an out-group member. A related study recently showed that infants as young as 9 months of age expect an agent to intervene and help an in-group member, even at the expense of harming an out-group member (Pun et al., 2021). Together, these results suggest that infants may expect certain individuals to help each other: those in the same group.

18.3.2.2 Expectations for How Individuals Respond to Helping and Harming

These results suggest that infants hold relatively specific expectations for helping that specify who will get helped (e.g., someone in need or in one's group). That said, even without general expectations for helping, one would nevertheless expect individuals who were previously helped versus previously

harmed to subsequently behave differently from each other; research suggests that infants hold these expectations. For instance, Hamlin and colleagues (2007) presented 10-month-olds with shows in which a protagonist repeatedly attempts and fails to climb up a steep hill. Infants then saw on alternating trials either a helper assist the protagonist by pushing them up or a hinderer interfere by pushing the protagonist down. After repeated viewings, infants were shown new shows in which the protagonist approached either the helper or the hinderer. Infants looked significantly longer when the protagonist approached the hinderer, suggesting that they expected a recipient of helping and hindering to prefer its helper. In a related study, 12-month-olds preferentially looked toward the helper prior to the protagonist's approach, suggesting that they anticipated that the protagonist would move toward the helper and not the hinderer (Fawcett & Liszkowski, 2012; see also Kuhlmeier et al., 2003). Together, these results suggest infants expect that recipients of helping and hindering will subsequently prefer their helpers to their hinderers.

Expectations that recipients will approach their helpers could indicate that infants expect helper-approach behavior, hinderer-avoidance behavior, or both. To disambiguate these possibilities, Chae and Song (2018) conducted two experiments examining 6- and 10-month-olds' expectations for whether a protagonist would be more likely to approach 1) a helper versus a neutral character and 2) a hinderer versus a neutral character. Results showed that whereas infants looked reliably longer when the protagonist approached a hinderer versus a neutral character, suggestive of an expectation for hinderer-avoidance, they looked equally when the protagonist approached the helper versus the neutral character, which suggests no corresponding expectation for helper-approach. These patterns show that an expectation that others will selectively avoid harmful individuals may emerge by 6 months of age, while an expectation of selectively approaching helpers may develop later (see Hamlin et al., 2007 for null results in all neutral character contrasts; see Chae & Song, 2018 for discussion). These results are consistent with work demonstrating a "negativity bias" in both children and adults, whereby individuals selectively attend to and learn from negative over positive stimuli (for review, see Vaish et al., 2008; see also Baumeister et al., 2001; Hamlin et al., 2010).

18.3.2.3 Infants' Expectations Consider Intent

Adults, particularly in the West (Curtin et al., 2020), place great emphasis on intent when judging others for their morally relevant acts (Cushman, 2008; Malle, 1999; Mikhail, 2007; Young et al., 2007). For example, a failure to bring about a positive outcome despite positive intentions (e.g., trying but failing to put out a house fire) is likely to be judged as morally superior to intentionally bringing about a negative outcome (e.g., setting someone's house on fire), even though both scenarios share identical (negative) endings. Further, a failed attempt at preventing a negative outcome is typically regarded as superior to

a failed attempt to cause a negative outcome (e.g., unsuccessful arson attempts), even though the latter is associated with better results.

On what basis do infants generate expectations for approach behavior? An additional study using the hill paradigm examined whether infants' expectations for who the protagonist will approach are based on valenced intent (to help or hinder the protagonist's goal to climb) versus valenced outcome (whether or not the protagonist achieves its goal; Lee et al., 2015). In this study, 16-month-olds watched versions of the hill paradigm where a helper attempted but failed to help the protagonist get to the hilltop, and a hinderer attempted and succeeded in preventing the protagonist's goal. Despite both events resulting in the same negative outcome (the protagonist never reaching the hilltop), infants nevertheless looked longer (i.e., were surprised) when the protagonist approached the successful hinderer, suggesting that they expected the protagonist to favor the character with better intentions. Notably, younger (12-month-old) infants failed to expect helper-approach when the outcomes were both negative; however, they did expect the protagonist to approach the helper when only the helping and hindering attempts were shown (all events stopped prior to an outcome occurring), suggesting that younger infants may be sensitive to intent but relatively more susceptible to the influence of salient outcomes than are older infants. Together, these results suggest that intent is a primary factor driving infants' expectations about how recipients of moral actions will respond, at least in the second year.

18.3.2.4 Infants' Expectations for How Third-Party Observers Respond to Helping and Harming

Humans often evaluate others even as third-party observers who are not personally affected by the acts themselves. Do infants also expect mere observers of helping and harming to evaluate and differentially react to helpers and harmers? To examine this question, Kanakogi and colleagues (2017) showed 6-month-old infants scenarios where a bystanding observer watched as a cartoon character (aggressor) repeatedly hit another character (victim). Afterward, infants were shown videos in which the observer either helped or hit both the aggressor and the victim, and their looking times were measured. Results showed that infants looked longer when the observer helped the aggressor versus helped the victim, and they looked longer when the observer harmed the victim versus harmed the aggressor. These patterns suggest that infants expect neutral third parties to selectively direct prosocial behavior toward prosocial individuals but antisocial behavior toward antisocial individuals by just 6 months of age. A related study with 15-month-olds explored whether these expectations were based solely on infants' own evaluation of the aggressor and the victim (e.g., their own sense that aggressors should be harmed and not helped) or a representation of how the observer, specifically, might respond after witnessing the aggressive acts. Suggesting that infants' expectations are about the observer in particular, 15-month-olds expected a third party to avoid positively interacting with a

previously antisocial other only if the third party had actually witnessed the antisocial act (Choi & Luo, 2015).

18.3.3 Infants' Evaluations of Helpers and Harmers

In the preceding sections, we reviewed evidence suggestive that infants differentiate between helping and hindering actions, do not generally expect helping but do hold distinct expectations for which individuals will receive help, expect recipients of help and harm to selectively avoid their harmers, and expect third-party observers of help and harm to selectively direct their own prosocial and antisocial acts toward helpers and harmers. However, moral judgments involve more than just understanding valenced social interactions and how they affect others; importantly, they also include an evaluative component whereby some actions are viewed as good and others bad. Indeed, returning to the theory of cooperation, it is insufficient to simply understand that a person cheated when they failed to contribute to cooperative activities: Individuals must also negatively evaluate cheaters and subsequently avoid them. In Sections 18.3.3.1 to 18.3.3.4, we will review research investigating how infants evaluate helpful and harmful agents.

18.3.3.1 Infants Prefer Helpers over Hinderers

In 2007, Hamlin and colleagues conducted the first study to examine infants' preferences for prosocial versus antisocial characters. After 6- and 10-month-olds were shown a live puppet show version of the hill scenario discussed in Section 18.3.2.2, infants themselves were presented with the helper and hinderer characters and asked to choose between them. Infants' social preference was identified as the first character they reached for and touched, and results showed that both 6- and 10-month-olds significantly reached for the helper over the hinderer. In addition, when infants watched scenarios where the helper and hinderer were paired with a neutral character that did not interact with the protagonist, they preferred the helper over the neutral character and the neutral character over the hinderer. This was the first evidence suggesting that by 6 months of age, infants positively evaluate characters who help and negatively evaluate those who harm, even when they are merely third-party observers who were not personally affected by the characters' acts.

Building on these results, Hamlin and colleagues (2010) presented the same paradigm to 3-month-olds to investigate when helper preferences first emerge. Since 3-month-olds lack the motor capacity to reach for objects, their evaluations were measured using a preferential looking paradigm during which the helper and hinderer were held in front of infants' faces, and their looking behavior was subsequently coded from video recordings. This younger population showed a similar pattern to older infants, spending significantly more time looking at the helper than at the hinderer. Interestingly, when tested in the neutral conditions, 3-month-olds responded somewhat differently from older infants:

3-month-olds reliably preferred a neutral character to a hinderer, suggestive that they negatively evaluated the hinderer; but they did not prefer a helper to a neutral character, suggestive that they did not positively evaluate the helper. Together, this series of studies shows that the capacity to understand helpful and unhelpful acts and to prefer helpful versus unhelpful individuals may emerge within the first few months of life and that negative evaluations may emerge before positive evaluations do. Could infants have acquired these capacities through experience? Clearly, even infants in their first 3 months have ample experiences of being both helped and hindered. Perhaps infants generalize the positive and negative feelings they experience in these situations to the third-party helping and hindering behaviors they observed and to the individuals who performed them. Although we do not deny this possibility, we note that this experience-based learning would need to be quite impressive. For one, infants would have to recognize and evaluate acts of helping and hindering with which they have no first-hand experience. Further, the individuals most likely to help infants in their daily lives are also those most likely to hinder them (e.g., parents); as such, there is likely no consistent differentiation between helpful and unhelpful individuals in infants’ everyday lives. Therefore, while early social experiences may support 3-month-olds’ sociomoral evaluations, it remains unclear how that process works.

18.3.3.2 Truly Social, or Something Else?

That said, showing infants just one instance of helping and hindering behavior makes it difficult to draw broad conclusions as to why infants choose helpers. Indeed, some researchers have raised concerns about results using the hill paradigm, noting that rather than basing their preferences on the sociomoral context of the puppet shows, infants' preferences may have been driven by low-level physical differences between the helping and hindering events. For instance, Scarf and colleagues (2012) noted that Hamlin et al.'s (2007) protagonist puppet performed a celebratory "bounce" at the hilltop after being helped, whereas no bouncing occurred after hindering. They conducted a series of studies suggesting that infants do not prefer helpers after all but instead prefer any character associated with bouncing. These results called into question the claim that infants engage in sociomoral evaluation at all.

In response, Hamlin and colleagues (2012) noted that the stimuli used by Scarf and colleagues (2012) differed from the original in at least one critical way: Whereas Hamlin and colleagues' protagonist gazed upward toward the top of the hill during its failed attempts, Scarf and colleagues' protagonist's eyes were unfixed, causing it to gaze down the hill. Hamlin and colleagues argued that the protagonist's downward gaze may have interfered with infants' ability to infer that the protagonist's goal was to climb the hill, rendering the acts of the helper and hinderer uninterpretable. To examine this possibility, in a series of studies Hamlin (2015) manipulated both the protagonist's eye gaze (fixed upward versus unfixed) and whether or not the protagonist bounced

at the top of the hill, and found that infants favored the helper as long as the protagonist’s eye gaze was consistent with the goal of hill-climbing, regardless of whether or not it bounced. These results suggest that infants’ preferences were based on the prosocial and antisocial nature of the helper’s and hinderer’s acts; that is, facilitating versus blocking the protagonist’s unfulfilled goal. Studies using conceptually similar – but physically distinct – paradigms provide further support for the claim that infants’ evaluations are based on social aspects of the stimuli rather than low-level physical properties. For example, in the box scenario (Hamlin & Wynn, 2011), a protagonist puppet repeatedly attempts and fails to open a box containing an attractive toy. The helper helps the protagonist open the box and obtain the toy, whereas the hinderer slams the box shut. In the ball show (Hamlin & Wynn, 2011), a protagonist puppet plays with and subsequently loses control of a ball, which is alternately caught by a helper and a hinderer. The helper returns the ball to the protagonist, whereas the hinderer runs away with it. Studies using these scenarios have revealed that infants as young as 3 months visually preferred the helper after watching the ball scenario, and 5-month-olds selectively reached for the helper over the hinderer following both types of shows. Additional studies have examined similar scenarios that involve physical aggression and protection. For example, 10-month-old infants selectively reach for characters who were victims of aggressive acts over the aggressors themselves (Kanakogi et al., 2013), and infants as young as 6 months preferred a character who intervened to protect a victim from aggression over one who did not (Kanakogi et al., 2017). Results from several of the paradigms we have discussed have been replicated by independent laboratories (Loheide-Niesmann et al., 2021; Scola et al., 2015), though other attempts at replication have been unsuccessful (Salvadori et al., 2015; Scarf et al., 2012; Schlingloff et al., 2020; to be discussed in Section 18.5.2.1). Taken together, these studies suggest that infants positively evaluate helpful others across multiple physically distinct interactions. Yet, if infants’ evaluations are truly social in nature, they should be selective to situations involving social interactions, as opposed to anytime a particular physical act takes place. For example, kicking a person is typically morally forbidden, whereas kicking a ball is not. To examine whether infants’ evaluations are similarly constrained, other groups of infants have viewed inanimate control versions of the aforementioned shows, in which “helpers” and “hinderers” direct their actions toward inanimate objects (i.e., an eyeless red ball with no self-propelled motion; a mechanical claw). In these nonsocial conditions, infants have shown no reliable preferences for the “helpers,” suggesting that choices in the social conditions reflect more than preferences for physical aspects of the displays (Hamlin et al., 2007, 2010; Hamlin & Wynn, 2011). This interpretation is bolstered by a study in which the very same physical acts were directed at social versus nonsocial recipients: 10-month-olds preferred an adult who comforted another human and pushed an inanimate object over one who pushed another human and comforted an object (Buon et al., 2014).

These results suggest that infants only form positive and negative evaluations when the recipients of helpful and harmful actions are social agents worthy of being helped and hindered and support a social interpretation of infants’ preference for helpers. That said, none of these findings informs the question of whether infants’ evaluations are anything close to moral. For instance, infants could hold a preference for agents that help for purely egotistic reasons: Perhaps infants prefer helpful agents because they reason that helpers will be more likely to help infants themselves, rather than because they think helping is a good thing (or hindering is a bad thing) to do more generally. Of course, moral judgments in older children and adults are informed by a multitude of factors, and it is imperative to examine the extent to which infants’ evaluations are or are not also informed by these factors. In the following sections, we will review research that investigates some of the variables known to influence adults’ moral judgments and examine the extent to which they also influence infants’ social preferences.

18.3.3.3 Infants' Evaluations Consider Intent

In Section 18.3.2.3, we discussed evidence showing that infants consider an agent's intent when forming expectations about agents' behavior in morally relevant scenarios (Lee et al., 2015). Other studies reveal a similar pattern of results when it comes to infants' evaluations of helpers and hinderers. In a collection of studies, Hamlin (2013b) showed 8-month-olds the aforementioned box scenario featuring both successful helpers and hinderers (those who attempt to help/hinder and succeed) and failed helpers and hinderers (those who attempt to help/hinder and fail to do so). Across several studies in which infants choose between fully crossed pairs of successful and failed helpers and hinderers, infants consistently preferred puppets with more positive intentions, irrespective of the outcomes they caused, including when intention valence and outcome valence were pitted against each other. Notably, when puppets demonstrated the same intent but were associated with different outcomes, infants did not distinguish between them. These findings suggest that by 8 months of age, infants' social evaluations are based on intent and not outcome. That said, 5-month-olds in the same studies consistently chose randomly anytime one of the agents demonstrated a failed attempt, suggesting that younger infants' preferences do not yet privilege intent (nor, importantly, outcome).

Other studies have explored the role of intent in a different context: intentional versus accidental acts. Consider the scenario in which a person is asked to donate money at the checkout of a grocery store. Most people would judge a person who intentionally donates as "nicer" than one who donated because they pushed the "yes" instead of the "no" button by accident. In line with this example, Woo and colleagues (2017) demonstrated that, by 10 months of age, infants may share the same intuitions: They preferred a puppet who intentionally helped a protagonist over one who helped by accident, but they preferred a

puppet who accidentally harmed a protagonist over one who harmed intentionally. Similarly, Kanakogi and colleagues (2017) found that 10-month-olds preferred those who intentionally intervened in bullying over those who intervened accidentally. These lines of work provide additional evidence that by 8–10 months of age, infants' social evaluations seem to be sensitive to mental states.

In order to accurately evaluate an agent's intent, one often needs to consider what they know. For example, consider Tom, a child who is choosing a gift for his friend Billy. Tom decides to get Billy an action figure, only to find out that Billy hates action figures and is actually a fan of baseball cards. While this makes for an awkward gift, most people would place little blame on Tom for his unsuccessful gift giving, since most kids like action figures and Tom did not possess the knowledge to do better. Similarly, had Tom bought Billy baseball cards but a malicious store clerk swapped out the cards for fake ones without his knowing, Tom could not be blamed. Several studies now suggest that infants and toddlers may also consider actors' knowledge when interpreting and evaluating morally relevant actions (for review, see Baillargeon et al., 2014; for alternative explanations, see Heyes, 2014; Perner & Ruffman, 2005). In one study, 10-month-olds were shown a protagonist puppet that repeatedly chose one toy over another, thereby demonstrating a preference, as two additional puppets observed. Afterward, obstacles were introduced such that the protagonist no longer had access to either toy. On alternating trials, the two puppets took turns removing an obstacle, thereby allowing access to one of the toys. Here, infants selectively reached for the puppet that provided access to the protagonist's preferred toy, suggesting that they positively evaluated the puppet that provided the preferred toy (Hamlin, Ullman, et al., 2013). Did infants merely prefer the puppet whose actions led to a positive outcome for the protagonist? Additional conditions suggest not: If the two helper puppets were not present during the preference demonstration and so could not have known which object was preferred, infants chose between the two puppets randomly (Hamlin, Ullman, et al., 2013). These results suggest that infants consider agents' knowledge states when evaluating their intent, providing further support to the claim that infants' social evaluations are based on intent.

18.3.3.4 Contextual Influences on Infants' Evaluations

In addition to accounting for intent, sociomoral evaluations are also contextual. Indeed, the same fully intentional, apparently antisocial act can be judged either negatively or positively, depending on the context. To illustrate, consider the 2016 case of the "kangaroo puncher," where a man punched a kangaroo in the face. Although typically such a violent act against an animal would likely be deemed blameworthy, in this case the man allegedly punched the kangaroo to protect his dog from harm. To serve the ultimate goal of sustaining successful

cooperative systems, social evaluations must take into consideration the why behind prosocial and antisocial acts. Specifically, cooperative theories have suggested that intentional antisocial acts that punish noncooperators are critical for the success of a cooperative system, as they serve as deterrents against future cheaters (Boyd & Richerson, 1992; O'Gorman et al., 2009). Under this framework, antisocial actors should be evaluated positively if their actions are directed toward a "bad" individual who deserves punishment.

To examine whether infants consider the context in which morally relevant acts occur, researchers have shown infants scenarios in which normally antisocial acts (e.g., hindering someone's goal) may be positively evaluated. Infants were first shown puppet shows similar to ones mentioned in Section 18.3.3.2 in which a protagonist received prosocial or antisocial acts from two helper and hinderer puppets, respectively. Depending on the condition, one of the puppets was then chosen to be the new-protagonist, and infants watched ball scenarios in which the new-protagonist was helped by a new-helper and hindered by a new-hinderer. Critically, the new-helper and new-hinderer in these scenarios were third-party puppets that had no cues of affiliation with the protagonist from the first show. Across multiple studies, 19-, 8-, and even 4.5-month-olds preferred the new-helper if the new-protagonist had previously behaved prosocially (therefore deserved reward) but preferred the new-hinderer if the new-protagonist was previously antisocial (therefore deserved punishment; Hamlin, 2014; Hamlin et al., 2011). Importantly, control conditions demonstrated that infants' preferences were not a product of simple valence-matching (i.e., preferring the new-hinderer because the recipient of the action was previously associated with a negative outcome) but considered who had done what to whom: When the new-protagonist had instead been the victim of an antisocial act and therefore did not deserve punishment, infants preferred the new-helper over the new-hinderer. These results demonstrate that infants' evaluations are sensitive to context and are consistent with the possibility that infants positively evaluate "rewarders" of prosocial behavior and "punishers" of bad behavior.

Of course, it remains to be seen on what basis infants positively evaluated appropriate punishers in these studies. One possibility is that infants simply like it when bad things happen to bad actors and so prefer puppets who harm bad actors for any reason. In contrast, adults presumably positively evaluate punishers because they see their acts as informed responses to others' misdeeds that, importantly, do not suggest a likelihood to engage in antisocial action more generally. Indeed, since the new-helper and new-hinderer puppets in these studies were not present to observe the new-protagonist's previous acts, infants may not have perceived them as rewarding and punishing in any intentional way. Indeed, contextual influences sometimes lead infants to evaluate helpers and hinderers in morally "incorrect" ways. For example, 9- and 14-month-old infants prefer puppets that harmed another puppet that merely did not share the infant's food preference (Hamlin, Mahajan, et al., 2013). Nevertheless, these results suggest that infants' action evaluations can differ depending on context,

supporting capacities for positively evaluating appropriate punishment and punishers.

18.4 Investigating Fairness: Equality versus Equity

Adults' moral sense also includes notions of distributive justice, including equality and equity (Deutsch, 1975). The principle of equality maintains that, all else equal or unknown, resources should be equally distributed and any unequal distributions are unfair. By contrast, the principle of equity allows for exceptions where unequal distributions may be fair in particular contexts (e.g., unequal effort or deservingness, different levels of recipient neediness, etc.). Using methodologies similar to those described throughout Section 18.3, researchers have explored infants' sensitivity to these two principles; the following sections will review this work.

18.4.1 Infants Hold Fairness Expectations

18.4.1.1 Equality: Infants Expect Equal Distribution of Resources

In contrast to work suggestive that infants lack a baseline expectation that individuals will help others, research suggests that infants do expect resources to be distributed equally. For example, after viewing events where a human actor distributed four graham crackers between two human recipients, 15-month-olds looked reliably longer at an unequal (3:1) distribution than an equal one (2:2) (Schmidt & Sommerville, 2011). Critically, when distributions were made to inanimate objects rather than humans, 15-month-olds no longer showed increased attention to unequal distributions, suggestive that infants' differential attention is not based solely on physical differences between events (for similar results, see Sloane et al., 2012; Sommerville et al., 2013). Subsequent studies using this paradigm suggest that infants are sensitive to equality by 12 months of age (Ziv & Sommerville, 2017).

Twelve-month-olds' sensitivity to fairness has been shown to be related to other factors, such as whether an infant has siblings at home (Ziv & Sommerville, 2017) and whether they are likely to share generously themselves (Schmidt & Sommerville, 2011; Sommerville et al., 2013). Is infants' sensitivity to fairness the result of these experiences? Recent studies showing success in younger infants when simpler distributions are utilized suggest that whereas siblings and personal sharing experience may give infants an advantage with understanding complex distributive acts, they are unlikely to be solely responsible for infants' sensitivity to equality. For instance, Buyukozer Dawkins and colleagues (2019) found that both 4- and 9-month-olds looked longer at simple unequal versus equal distributions (2:0 versus 1:1), and that results did not depend on whether infants had an older sibling. Since 4-month-olds also lack the motor ability to engage in sharing behavior themselves, these findings

suggest that sharing experience is likely not the sole factor underlying infants’ baseline expectation of equality. Together, these studies suggest that experience may facilitate the development of fairness expectations by helping infants to expand upon a preexisting understanding of fairness.

18.4.1.2 Equity: Infants' Expectations Are Contextual

In addition to basic equality, adults assess fairness on the basis of equity, where what is fair depends on contextual information such as how much work one has contributed (Austin, 1980; Leventhal & Michaels, 1969) or how much one needs (Lamm & Schwinger, 1980). Perhaps unsurprisingly, research suggests that infants develop a sensitivity to equity considerably later than their sensitivity to equality. For example, after watching two experimenters perform a joint task for rewards, 21-month-olds expected an observing distributor to equally distribute rewards only if both experimenters had contributed equally to the task and not when one experimenter worked more (Sloane et al., 2012). In a related study, 17-month-olds watched puppets perform a joint activity and then divide a pile of resources among themselves. Corroborating previous findings, infants expected the puppets to divide resources equally only if both puppets had contributed equally (Wang & Henderson, 2018). Currently, 17 months is the youngest age for which any sensitivity to equity has been demonstrated, versus just 4 months for equality.

Infants' fairness expectations are also driven by other contextual factors such as group status. For example, Bian and colleagues (2018) showed that 19-month-olds held a baseline expectation that resources should be distributed equally among animal puppets when there are enough resources to go around. However, when resources were scarce, infants instead looked least at (i.e., were least surprised by) events where the distributor favored a puppet of its own species (an in-group member). These results suggest that when equal distributions are impossible, infants expect in-group favoritism. Together, these findings provide evidence that infants, like adults, may expect unequal distributions in certain contexts.

18.4.1.3 Do Infants Expect Individuals to Respond to Fairness Violations?

Similar to their expectations of helping and hindering, infants appear to hold expectations for how observers of fair and unfair distributions are likely to respond; that said, to date research investigating this topic has yielded inconsistent results. For instance, it has been shown both that 10-month-olds looked longer at events in which an unfair versus a fair distributor was rewarded and at events in which an unfair versus a fair distributor was punished (Meristo & Surian, 2013, 2014). Further, Geraci and Surian (2011) demonstrated that 16-month-olds (but not 10-month-olds) looked longer at events in which a third-party observer approached a fair versus an unfair distributor. In conclusion, while some findings suggest that infants may hold expectations about how

observers of fair/unfair events will behave, there is a clear need for future research to clarify these conflicting findings.

18.4.2 Infants Evaluate Fair and Unfair Distributors as Third-Party Observers

In Section 18.4.1, we reviewed research suggesting that infants are sensitive to distributive justice and seem to hold expectations of fairness informed by the principles of equality and equity. They also appear to hold some expectation about how observers of fair and unfair distributions would act toward the distributors, although findings to date in this area are inconclusive. That said, as mentioned in earlier sections of this chapter, morality necessarily involves an evaluative process. In Section 18.4.2.1, we review evidence that infants also generate evaluations of individuals they have seen distributing resources fairly versus unfairly.

18.4.2.1 Infants Prefer Fair versus Unfair Distributors

To date, several studies have demonstrated that infants evaluate others on the basis of fairness. For example, Geraci and Surian (2011) found that 16-month-olds (though not 10-month-olds) selectively reached for a fair (equal) versus an unfair (unequal) distributor; other studies have shown the same results using the preferential approach (Burns & Sommerville, 2014; Lucca et al., 2018). Notably, these results have been shown whether distributors are humans or animated simple agents, suggestive that they are robust. Together, these results suggest that shortly after their first birthday, infants evaluate fair and unfair individuals and prefer fair ones. Do these results suggest that infants evaluate individuals based on fairness considerably later than based on helping and harming? Possibly. On the other hand, since relatively less work has explored infants' evaluations in the fairness domain than in the harm domain, it remains possible that a preference for fair over unfair distributors is present earlier in infancy, from the time infants show sensitivity to equal versus unequal distributions at all (4 months; Buyukozer Dawkins et al., 2019). Future research should probe the earliest emergence of a preference for fair distributors.

18.5 The Current State

18.5.1 What Is the Nature of Infants' Responses to the Sociomoral World?

The preceding sections reviewed research suggesting that infants: 1) detect violations in both the harm and fairness domain, 2) form expectations about how observers of such violations will subsequently behave, and 3) use

information about these violations to guide their own social behaviors, suggestive of evaluation. But how should we interpret infants’ responses? This chapter is framed around the question of whether or not infants possess a “moral core,” defined as capacities for understanding and evaluating morally relevant acts in ways predicted by evolutionary theories of cooperation. Although we believe that the research reviewed herein provides positive evidence that infants do possess such core capacities, their underlying nature is far from clear. One possible interpretation of these results is that infants’ responses reveal a sense that some actions are generally (and genuinely) good versus bad; that is, perhaps infants’ responses reflect some impartial moral sense. Alternatively, infants may be motivated entirely by self-interest, and their responses stem from strictly social inferences about whether helpful/fair and harmful/unfair individuals will be more likely to provide benefits versus costs to infants themselves in the future. As noted at the start of the chapter, convincingly distinguishing between these possibilities may be impossible. In the following sections, we review existing research that attempts to elucidate the underlying nature of infants’ responses to the sociomoral world.

18.5.1.1 Who Is "Good"?

Do infants and toddlers make genuine good versus bad evaluations? Research asking infants and toddlers to identify the target of evaluative labels is suggestive that they might. In a recent study, 30-month-old verbal toddlers were asked to "pick the good one" after watching either a helping/hindering or a fair/unfair distribution puppet show (Franchin et al., 2019). Toddlers reliably selected the previously helpful puppet as "good"; interestingly, they did not do so for the fair puppet. Notably, when simply asked to "pick one," toddlers did select the fair puppet, suggesting that they positively evaluated the fair puppet, and yet did not identify the fair puppet as good. In a related study, 13- and 15-month-olds were shown fair/unfair distributions and then saw the two distributors' faces on two separate monitors accompanied by vocal stimuli of either praise ("Good job! She's a good girl") or admonishment/blame ("Bad job! She's a bad girl") (DesChamps et al., 2016). Results showed that 13-month-olds displayed selective looking behavior depending on the accompanying audio: During the admonishment trials, infants looked significantly longer at the unfair distributor; and during the praise trials, infants looked numerically (albeit not significantly) longer at the fair distributor. These findings suggest that infants may associate admonishment/blame with unfair actions and are somewhat less likely to associate praise with fair actions. However, while 15-month-olds in the study also differentiated between the two distributors, they showed the opposite pattern, instead looking significantly longer at the fair distributor during the admonishment trials. Together, this line of work suggests that older infants and toddlers may see helpers/fair and hinderers/unfair individuals as "good" and "bad," respectively, but clearly more work is needed to determine when and why they do so. Further, because infants in these studies are necessarily old

Core Moral Responses in Infants and Toddlers

enough to understand evaluative terms, these results cannot speak to the nature of younger infants’ responses.

18.5.1.2 Do Evaluations Support Generalization?

If infants' evaluations are based on the inferred "goodness" and "badness" of an action or a character, then they should expect these traits to hold across domains (e.g., someone who harms will probably also act unfairly). Indeed, at least older infants appear to make relatively broad inferences about antisocial characters, expecting those who behaved badly in one situation to do so again in another. For instance, after 14-month-olds watched a character repeatedly help or hinder another agent, their looking times suggested they expected the helper to subsequently distribute resources fairly between two novel agents, but they held no such expectation for the hinderer (Surian et al., 2018). Another study found that by 25 months of age toddlers expected a hinderer to subsequently distribute resources unfairly, as opposed to fairly (Ting & Baillargeon, 2021). Together, these studies suggest that infants may view those who perform one kind of antisocial act as more likely to perform another. These studies could be taken to indicate that infants view antisocial individuals as generally bad; however, they are also compatible with the hypothesis that infants simply track who harms others in order to avoid being harmed themselves.

18.5.1.3 Are Infants Motivated by Self-Interest?

Are infants simply motivated by self-interest? Some research suggests that infants' evaluations are not entirely driven by it. Building on the finding that 12-month-olds will selectively reach for more versus fewer resources (e.g., cookies; Feigenson et al., 2004), Tasimi and Wynn (2016) investigated whether infants would still show a preference to interact with a helpful character if they had to sacrifice resources to do so. Infants chose between a helper and a hinderer in the usual way, except that the hinderer offered two cookies and the helper only offered one; here, infants still preferred the helper. However, when the discrepancy in resources increased (the helper offered one cookie, the hinderer eight), infants no longer displayed a preference for either character. These results suggest that infants are willing to give up resources to selectively engage with prosocial others, but less so if the cost is significant. Thus, their evaluations are resistant to self-interest to some (perhaps small) degree.

18.5.1.4 Do Evaluations Support Reward and Punishment?

Another way to probe the nature of early sociomoral evaluations is to examine third-party reward and punishment. Arguably, infants themselves do not directly benefit from allocating rewards and punishments to third parties; thus, any such decision may reflect a more objective judgment than do their own social choices. In one experiment, 20-month-old toddlers first watched puppet shows in which two puppets either helped a protagonist obtain a toy placed on a high shelf or hindered him by taking the toy away (Van de Vondervoort et al., 2018). Toddlers subsequently rewarded the helper puppet, giving it the toy it requested, more often than the hinderer puppet. Similarly, in a different study, 19- to 24-month-olds preferentially gave treats to previously helpful puppets and took treats away from hinderers (Hamlin et al., 2011). Finally, as discussed in sections above, although younger infants cannot yet administer reward and punishment themselves, research has found that both 4.5-month-olds (Hamlin, 2014) and 8-month-olds (Hamlin et al., 2011) prefer to interact with puppets that helped a previously prosocial puppet, as well as with puppets that hindered a previously antisocial puppet. Infants show a similar pattern of rewarding "good-doers" in the fairness domain. In a recent study, Ziv and colleagues (2021) first trained 16-month-old infants to use a touchscreen device that allowed them to elicit different video responses: For example, touching the left side of the screen played a video clip in which the actor shown on screen was rewarded with more cookies, while touching the right side caused the actor to lose cookies. Infants then watched videos of two actors distributing food either fairly or unfairly. After the distributions, infants were shown the actors' faces on the screen and given a chance to reward or punish the actors via the touchscreen. Results showed that infants rewarded the fair distributor at a higher rate (i.e., touched the reward side more often) than they did the unfair distributor. Altogether, these findings suggest that infants and toddlers may view prosocial behavior as deserving of reward and antisocial behavior as deserving of punishment. These concepts are crucial for sustaining large-scale cooperative societies and may reflect a sense that prosocial and antisocial acts are "good" and "bad," respectively.

18.5.2 Challenges and Current Efforts

While research on infants' moral core has made great strides in recent years, it would be remiss not to address issues currently facing the field. Most notably, recent failed attempts to replicate landmark studies (e.g., Salvadori et al., 2015) have called into question the replicability of findings upon which our theories rest. In addition, the field of psychology as a whole has increasingly questioned whether research conducted on "WEIRD" (Western, educated, industrialized, rich, democratic) populations generalizes to more diverse populations (Arnett, 2008). In the concluding sections, we discuss challenges currently facing infant morality research, current efforts to address these challenges, and future directions for the field.

18.5.2.1 Failed Replication Attempts

In light of the ongoing "replication crisis" in psychology (Fidler & Wilcox, 2021; Ioannidis, 2005), recent years have seen increased efforts to replicate seminal work in developmental research. For example, there have been several direct and conceptual attempts to replicate Hamlin and colleagues' (2007) and Hamlin and Wynn's (2011) seminal findings that infants prefer helpers over hinderers, and these attempts have failed to reproduce the original results (Salvadori et al., 2015; Scarf et al., 2012; Schlingloff et al., 2020). Of these, Scarf and colleagues' (2012) failure to replicate has been attributed to methodological differences (see Hamlin et al., 2012). However, the more recent attempts by Salvadori and colleagues (2015) received detailed procedural guidance from the original lead author and nevertheless failed to find a preference for helpers over hinderers. Besides minor methodological differences that may persist despite detailed instructions from the original authors, what could have caused these failed replication attempts? A recent meta-analysis of infants' evaluations of prosocial and antisocial agents suggests that while studies consistently find a preference for prosocial characters, the effect size observed in published research may be inflated: On average, across the meta-analysis, two out of three infants preferred the helper over the hinderer (Margoni & Surian, 2018). In addition, studies with larger sample sizes tended to find smaller effects, suggesting that the true effect may be smaller than those reported in small-scale studies (most common sample size = 16 infants). Therefore, failures to replicate may be due to a lack of statistical power to detect a smaller-than-predicted effect size.
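To make the power concern concrete, the short calculation below (our illustrative sketch, not an analysis reported in the studies cited) asks how often a study of 16 infants would detect an above-chance helper preference if the true preference rate were the meta-analytic two in three; the one-sided exact binomial test and the .05 alpha level are assumptions chosen for the example.

    # Illustrative power sketch (assumed design: one-sided exact binomial test, alpha = .05).
    # If infants truly preferred the helper at the meta-analytic rate of roughly 2/3
    # (Margoni & Surian, 2018), how often would a 16-infant study reach significance?
    from scipy.stats import binom

    n, p_true, p_null, alpha = 16, 2 / 3, 0.5, 0.05

    # Smallest number of helper-choosers that is significant under the null (p = .5).
    k_crit = next(k for k in range(n + 1) if binom.sf(k - 1, n, p_null) <= alpha)

    # Power: probability of reaching that criterion when the true rate is 2/3.
    power = binom.sf(k_crit - 1, n, p_true)
    print(f"criterion: {k_crit}/{n} infants; power = {power:.2f}")  # 12/16; power about 0.34

On these assumptions, only about one in three such studies would reach significance even though the underlying preference is real, which is consistent with a literature containing a mix of successful and failed replications.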

18.5.2.2 The WEIRD Problem in Developmental Research

As psychologists have become more aware of cultural differences and have questioned whether research results obtained in WEIRD societies generalize to more diverse populations (Arnett, 2008; Henrich et al., 2010), developmental researchers have done the same (Amir & McAuliffe, 2020; Nielsen & Haun, 2016; Nielsen et al., 2017). Traditional research methods involve small-scale studies conducted by individual researchers in specific demographic regions. This research model greatly limits the generalizability of findings, which may in turn help explain failed replication attempts. Furthermore, infant research is difficult and financially costly to conduct, placing further constraints on which laboratories are capable of conducting infant studies.

18.5.2.3 Current Efforts: The ManyBabies Project

In an attempt to address both the replication issue and the lack of diversity in participant data, new models of large-scale collaborative research have emerged. Specifically, the ManyBabies project unites dozens of laboratories in large-scale replications of seminal findings in developmental psychology (Frank et al., 2017). These projects provide numerous benefits: (1) They pool data from several labs to reach sample sizes otherwise unattainable by any individual researcher, thereby allowing for the detection of smaller effects and increasing the chance of successful replications. (2) They allow laboratories in different demographic regions to participate in areas of research in which they are not specialized. (3) They increase the cultural diversity of participants, both increasing the overall generalizability of research findings and allowing for the detection of cultural differences. Of the numerous projects, ManyBabies4 is a large-scale replication of Hamlin and colleagues' (2007) study (Lucca et al., in press). We hope that this project will shed light on the strength of infants' preference for helpers over hinderers, as well as on any potential cross-cultural differences in the development of these preferences.

18.6 Summary and Outstanding Questions

Evolutionary theorists hypothesize that large-scale cooperative systems persisted through our evolutionary history because humans developed means of detecting and avoiding cheaters (Henrich & Henrich, 2007; Joyce, 2007; Katz, 2000; Nowak, 2006). These means manifest as tendencies to establish social rules and to evaluate others based on their compliance, which serve as a foundation for the human moral system. Other perspectives view infants as amoral or immoral from birth, maintaining that morality is a product of experience, socialization, and improvements in cognitive capacities (e.g., Bandura, 1977; Eisenberg, 1986; Grusec et al., 2014; Kohlberg, 1969; Piaget, 1932; Turiel, 1983). In this chapter, we have reviewed research suggesting that within the first two years of development, infants already understand and hold expectations about prosocial and antisocial acts (helping versus harming and fairness versus unfairness) and evaluate social others who have performed these acts. Some of these abilities emerge early in development, at ages at which infants have had little social experience, which suggests that experience and socialization are unlikely to be solely responsible for the development of humans' moral sense. These lines of work provide support for the evolutionary account as well as for the existence of the hypothesized moral core. That said, the existence of a moral core is far from mutually exclusive with an ontogenetic perspective: As we noted at the outset of the chapter, the research presented herein should not be taken as evidence that infants possess an unchanging, fully developed moral sense at birth. Indeed, throughout the chapter we have highlighted findings across a broad age range to show that, while some aspects of the moral core appear to be present from early in life (as young as 3 months of age), change occurs even within the infancy period. Further, although we have underscored several ways in which infants' evaluations parallel those of someone morally mature (for instance, considering context and intent), we have also highlighted ways in which they differ significantly, such as when infants forgo preferences for prosocial individuals if an antisocial individual provides more resources (Tasimi & Wynn, 2016), and when infants prefer puppets that hinder those who simply do not share the infant's own food preference (Hamlin, Mahajan, et al., 2013). These types of decisions, analogous to taking bribes from a corrupt politician or approving of harmful acts for trivial reasons, would hardly be considered morally praiseworthy by adults. Thus, there are clear gaps between the sensitivity to morally relevant acts observed in infants and full-fledged morality in adults. To what extent does the moral core constrain the development of morality, and how do external factors influence its development? Research examining the role of family and parental variations may help answer this question. For example, Ziv and Sommerville (2017) showed that infants with siblings were more sensitive to violations of fairness norms than those without, suggesting that variations in family dynamics influence the development of fairness expectations. In the harm domain, studies have shown that parental reports of justice sensitivity are related to infants' tendencies to prefer prosocial over antisocial characters (Cowell & Decety, 2015). In addition, both Japanese and American infants were more likely to demonstrate a preference for prosocial characters if their mothers made socially evaluative comments during the puppet show, although the causal relationship remains unclear (Shimizu et al., 2018). In contrast, infants' evaluations of punishers (i.e., puppets that took toys from hinderers) do not seem to be influenced by whether or not the infants have an older sibling (Hamlin, 2014). While these studies do not yet provide sufficient evidence to warrant broad claims about moral development, together they show how moral development might involve an interplay between an existing core and socialization. Future research should further explore other sources of variation in infants' development in these domains, particularly across cultures where socialization processes and familial practices differ significantly. Finally, though we have argued that the early emergence of the proposed moral core supports an evolutionary perspective by which at least some features of a rudimentary moral sense are universal and unlearned, those skeptical of this possibility might argue that even the responses we see by 3 months of age are learned via experience. The existing literature does not offer a clear answer to this debate, and given infants' limitations in many other domains (vision, color perception, memory), it seems unlikely that one could reliably measure helper preferences before this age even if they existed. Given these constraints posed by infant subjects and current methodology, it also remains difficult to distinguish clearly whether infants' judgments are moral or social in nature. As researchers develop more advanced methodologies, we hope that future work, especially studies that better characterize infants' morally relevant experiences in their first few months and how variation in that experience impacts infants' responses, will shed light on these important questions. The study of infants' moral capacities has made remarkable progress over recent decades, yet many outstanding questions remain. Exactly how different is an infant's moral core from an adult's moral sense? How do cross-cultural differences in morality interact with the core over development? To what extent can large-scale, collaborative research practices address the lack of generalizability and reconcile failed replications of seminal work? As this exciting field advances, we look forward to future research that seeks to answer these questions while continuing to probe the existence (or lack thereof) of the posited moral core.


References Amir, D., & McAuliffe, K. (2020). Cross-cultural, developmental psychology: Integrating approaches and key insights. Evolution and Human Behavior, 41(5), 430–444. Arnett, J. J. (2008). The neglected 95%: Why American psychology needs to become less American. The American Psychologist, 63(7), 602–614. Aslin, R. N. (2007). What’s in a look? Developmental Science, 10(1), 48–53. Austin, W. (1980). Friendship and fairness: Effects of type of relationship and task performance on choice of distribution rules. Personality and Social Psychology Bulletin, 6(3), 402–408. Baillargeon, R., Setoh, P., Sloane, S., Jin, K., & Bian, L. (2014). Infant social cognition: Psychological and sociomoral reasoning. In M. S. Gazzaniga & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 7–14). Boston Review. Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215. Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370. Bian, L., Sloane, S., & Baillargeon, R. (2018). Infants expect ingroup support to override fairness when resources are limited. Proceedings of the National Academy of Sciences, 115(11), 2705–2710. Bloom, P. (2013). Just babies: The origins of good and evil. Bodley Head. Boyd, R., & Richerson, P. J. (1992). Punishment allows the evolution of cooperation (or anything else) in sizable groups. Ethology and Sociobiology, 13(3), 171–195. Brown, D. E. (1991). Human universals. Temple University Press. Buon, M., Jacob, P., Margules, S., Brunet, I., Dutat, M., Cabrol, D., & Dupoux, E. (2014). Friend or foe? Early social evaluation of human interactions. PLoS ONE, 9(2), Article e88612. Burns, M., & Sommerville, J. (2014). “I pick you”: The impact of fairness and race on infants’ selection of social partners. Frontiers in Psychology, 5, Article 93. Buyukozer Dawkins, M., Sloane, S., & Baillargeon, R. (2019). Do infants in the first year of life expect equal resource allocations? Frontiers in Psychology, 10, Article 116. Casler, K., Terziyan, T., & Greene, K. (2009). Toddlers view artifact function normatively. Cognitive Development, 24(3), 240–247. Chae, J. J. K., & Song, H. (2018). Negativity bias in infants’ expectations about agents’ dispositions. British Journal of Developmental Psychology, 36(4), 620–633. Choi, Y., & Luo, Y. (2015). 13-month-olds’ understanding of social interactions. Psychological Science, 26(3), 274–283. Cowell, J. M., & Decety, J. (2015). Precursors to morality in development as a complex interplay between neural, socioenvironmental, and behavioral facets. Proceedings of the National Academy of Sciences, 112(41), 12657–12662. Curry, O. S., Jones Chesters, M., & Van Lissa, C. J. (2019). Mapping morality with a compass: Testing the theory of ‘morality-as-cooperation’ with a new questionnaire. Journal of Research in Personality, 78, 106–124. Curtin, C. M., Barrett, H. C., Bolyanatz, A., Crittenden, A. N., Fessler, D. M. T., Fitzpatrick, S., Gurven, M., Kanovsky, M., Kushnick, G., Laurence, S., Pisor, A., Scelza, B., Stich, S., von Rueden, C., & Henrich, J. (2020). Kinship

intensity and the use of mental states in moral judgment across societies. Evolution and Human Behavior, 41(5), 415–429. Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, 108(2), 353–380. Damon, W. (1977). The social world of the child. Jossey-Bass. DesChamps, T. D., Eason, A. E., & Sommerville, J. A. (2016). Infants associate praise and admonishment with fair and unfair individuals. Infancy: The Official Journal of the International Society on Infant Studies, 21(4), 478–504. Deutsch, M. (1975). Equity, equality, and need: What determines which value will be used as the basis of distributive justice? Journal of Social Issues, 31(3), 137–149. Eisenberg, N. (1986). Altruistic emotion, cognition, and behavior. Lawrence Erlbaum Associates. Fantz, R. L. (1964). Visual experience in infants: Decreased attention to familiar patterns relative to novel ones. Science, 146(3644), 668–670. Fawcett, C., & Liszkowski, U. (2012). Infants anticipate others’ social preferences. Infant and Child Development, 21(3), 239–249. Feigenson, L., Dehaene, S., & Spelke, E. (2004). Core systems of number. Trends in Cognitive Sciences, 8(7), 307–314. Fidler, F., & Wilcox, J. (2021). Reproducibility of scientific results. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2021 ed.). https://plato .stanford.edu/archives/sum2021/entries/scientific-reproducibility/ Franchin, L., Savazzi, F., Neira-Gutierrez, I. C., & Surian, L. (2019). Toddlers map the word ‘good’ to helping agents, but not to fair distributors. Journal of Child Language, 46(1), 98–110. Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Hamlin, J. K., Hannon, E. E., Kline, M., Levelt, C., Lew-Williams, C., Nazzi, T., Panneton, R., Rabagliati, H., Soderstrom, M., Sullivan, J., Waxman, S., & Yurovsky, D. (2017). A collaborative approach to infant research: Promoting reproducibility, best practices, and theory-building. Infancy, 22(4), 421–435. Geraci, A., & Surian, L. (2011). The developmental roots of fairness: Infants’ reactions to equal and unequal distributions of resources. Developmental Science, 14(5), 1012–1020. Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101–124. Grusec, J. E., Chaparro, M. P., Johnston, M., & Sherman, A. (2014). The development of moral behavior from a socialization perspective. In M. Killen & J. G. Smetana (Eds.), Handbook of moral development (2nd ed., pp. 113–134). Psychology Press. Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20(1), 98–116. Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65(4), 613–628. Hamlin, J. K. (2013a). Moral judgment and action in preverbal infants and toddlers: Evidence for an innate moral core. Current Directions in Psychological Science, 22(3), 186–193.

Hamlin, J. K. (2013b). Failed attempts to help and harm: Intention versus outcome in preverbal infants' social evaluations. Cognition, 128(3), 451–474. Hamlin, J. K. (2014). Context-dependent social evaluation in 4.5-month-old human infants: The role of domain-general versus domain-specific processes in the development of social evaluation. Frontiers in Psychology, 5, Article 614. Hamlin, J. K. (2015). The case for social evaluation in preverbal infants: Gazing toward one's goal drives infants' preferences for Helpers over Hinderers in the hill paradigm. Frontiers in Psychology, 5, Article 1563. Hamlin, J. K., Mahajan, N., Liberman, Z., & Wynn, K. (2013). Not like me = bad: Infants prefer those who harm dissimilar others. Psychological Science, 24(4), 589–594. Hamlin, J. K., Ullman, T., Tenenbaum, J., Goodman, N., & Baker, C. (2013). The mentalistic basis of core social cognition: Experiments in preverbal infants and a computational model. Developmental Science, 16(2), 209–226. Hamlin, J. K., & Wynn, K. (2011). Young infants prefer prosocial to antisocial others. Cognitive Development, 26(1), 30–39. Hamlin, J. K., Wynn, K., & Bloom, P. (2007). Social evaluation by preverbal infants. Nature, 450(7169), 557–559. Hamlin, J. K., Wynn, K., & Bloom, P. (2010). 3-month-olds show a negativity bias in their social evaluations. Developmental Science, 13(6), 923–929. Hamlin, J. K., Wynn, K., & Bloom, P. (2012). Reply to Scarf et al.: Nuanced social evaluation: Association doesn't compute. Proceedings of the National Academy of Sciences, 109(22), E1427–E1427. Hamlin, J. K., Wynn, K., Bloom, P., & Mahajan, N. (2011). How infants and toddlers react to antisocial others. Proceedings of the National Academy of Sciences, 108(50), 19931–19936. Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. Henrich, J., & Henrich, N. (2007). Why humans cooperate: A cultural and evolutionary explanation. Oxford University Press. Heyes, C. (2014). False belief in infancy: A fresh look. Developmental Science, 17(5), 647–659. Hrdy, S. B. (2011). Mothers and others: The evolutionary origins of mutual understanding. Harvard University Press. Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), Article e124. Jin, K., & Baillargeon, R. (2017). Infants possess an abstract expectation of ingroup support. Proceedings of the National Academy of Sciences, 114(31), 8199–8204. Jin, K.-S., Houston, J. L., Baillargeon, R., Groh, A. M., & Roisman, G. I. (2018). Young infants expect an unfamiliar adult to comfort a crying baby: Evidence from a standard violation-of-expectation task and a novel infant-triggered-video task. Cognitive Psychology, 102, 1–20. Josephs, M., Kushnir, T., Gräfenhain, M., & Rakoczy, H. (2016). Children protest moral and conventional violations more when they believe actions are freely chosen. Journal of Experimental Child Psychology, 141, 247–255. Joyce, R. (2007). The evolution of morality. MIT Press. Kanakogi, Y., Inoue, Y., Matsuda, G., Butler, D., Hiraki, K., & Myowa-Yamakoshi, M. (2017). Preverbal infants affirm third-party interventions that protect victims from aggressors. Nature Human Behaviour, 1(2), 1–7.

Kanakogi, Y., Okumura, Y., Inoue, Y., Kitazaki, M., & Itakura, S. (2013). Rudimentary sympathy in preverbal infants: Preference for others in distress. PLoS ONE, 8(6), Article e65292. Katz, L. D. (2000). Evolutionary origins of morality: Cross-disciplinary perspectives. Imprint Academic. Kidd, C., Piantadosi, S. T., & Aslin, R. N. (2012). The Goldilocks effect: Human infants allocate attention to visual sequences that are neither too simple nor too complex. PLoS ONE, 7(5), Article e36399. Killen, M., & Smetana, J. G. (2014). Handbook of moral development. Psychology Press. Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. Rand McNally. Kominsky, J. F., Lucca, K., Thomas, A. J., Frank, M. C., & Hamlin, J. K. (2022). Simplicity and validity in infant research. Cognitive Development, 63, Article 101213. Köster, M., Ohmer, X., Nguyen, T. D., & Kärtner, J. (2016). Infants understand others' needs. Psychological Science, 27(4), 542–548. Kuhlmeier, V., Wynn, K., & Bloom, P. (2003). Attribution of dispositional states by 12-month-olds. Psychological Science, 14(5), 402–408. Lamm, H., & Schwinger, T. (1980). Norms concerning distributive justice: Are needs taken into consideration in allocation decisions? Social Psychology Quarterly, 43(4), 425–429. Lee, Y., Yun, J. E., Kim, E. Y., & Song, H. (2015). The development of infants' sensitivity to behavioral intentions when inferring others' social preferences. PLoS ONE, 10(9), Article e0135588. Leventhal, G. S., & Michaels, J. W. (1969). Extending the equity model: Perception of inputs and allocation of reward as a function of duration and quantity of performance. Journal of Personality and Social Psychology, 12(4), 303–309. Loheide-Niesmann, L., de Lijster, J., Hall, R., van Bakel, H., & Cima, M. (2021). Toddlers' preference for prosocial versus antisocial agents: No associations with empathy or attachment security. Social Development, 30(2), 410–427. Lucca, K., Yuen, F., Wang, Y., Alessandroni, N., Allison, O., Alvarez, M., Axelsson, E. L., Baumer, J., Baumgartner, H. A., Bertels, J., Bhavsar, M., Byers-Heinlein, K., Capelier-Mourguy, A., Chijiiwa, H., Chin, C. S. S., Christner, N., Cirelli, L. K., Corbit, J., Daum, M. M., . . . Hamlin, J. K. (in press). Infants' social evaluation of helpers and hinderers: A large-scale, multi-lab, coordinated replication study. Developmental Science. Lucca, K., Pospisil, J., & Sommerville, J. A. (2018). Fairness informs social decision making in infancy. PLoS ONE, 13(2), Article e0192848. Malle, B. F. (1999). How people explain behavior: A new theoretical framework. Personality and Social Psychology Review, 3(1), 23–48. Margoni, F., & Surian, L. (2018). Infants' evaluation of prosocial and antisocial agents: A meta-analysis. Developmental Psychology, 54(8), 1445–1455. Meristo, M., & Surian, L. (2013). Do infants detect indirect reciprocity? Cognition, 129(1), 102–113. Meristo, M., & Surian, L. (2014). Infants distinguish antisocial actions directed towards fair and unfair agents. PLoS ONE, 9(10), Article e110553. Mikhail, J. (2007). Universal moral grammar: Theory, evidence and the future. Trends in Cognitive Sciences, 11(4), 143–152.

Nielsen, M., & Haun, D. (2016). Why developmental psychology is incomplete without comparative and cross-cultural perspectives. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 371(1686), Article 20150071. Nielsen, M., Haun, D., Kärtner, J., & Legare, C. H. (2017). The persistent sampling bias in developmental psychology: A call to action. Journal of Experimental Child Psychology, 162, 31–38. Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314(5805), 1560–1563. O’Gorman, R., Henrich, J., & Van Vugt, M. (2009). Constraining free riding in public goods games: Designated solitary punishers can sustain human cooperation. Proceedings of the Royal Society B: Biological Sciences, 276(1655), 323–329. Perner, J., & Ruffman, T. (2005). Infants’ insight into the mind: How deep? Science, 308 (5719), 214–216. Piaget, J. (1932). The moral judgment of the child. Harcourt, Brace. Premack, D., & Premack, A. J. (1994). Moral belief: Form versus content. In L. A. Hirschfeld & S. A. Gelman (Eds.), Mapping the mind: Domain specificity in cognition and culture (pp. 149–168). Cambridge University Press. Premack, D., & Premack, A. J. (1997). Infants attribute value to the goal-directed actions of self-propelled objects. Journal of Cognitive Neuroscience, 9(6), 848–856. Pun, A., Birch, S. A. J., & Baron, A. S. (2021). The power of allies: Infants’ expectations of social obligations during intergroup conflict. Cognition, 211, Article 104630. Rakoczy, H., Warneken, F., & Tomasello, M. (2008). The sources of normativity: Young children’s awareness of the normative structure of games. Developmental Psychology, 44(3), 875–881. Salvadori, E., Blazsekova, T., Volein, A., Karap, Z., Tatone, D., Mascaro, O., & Csibra, G. (2015). Probing the strength of infants’ preference for helpers over hinderers: Two replication attempts of Hamlin and Wynn (2011). PLoS ONE, 10(11), Article e0140570. Scarf, D., Imuta, K., Colombo, M., & Hayne, H. (2012). Social evaluation or simple association? Simple associations may explain moral reasoning in infants. PLoS ONE, 7(8), Article e42698. Schlingloff, L., Csibra, G., & Tatone, D. (2020). Do 15-month-old infants prefer helpers? A replication of Hamlin et al. (2007). Royal Society Open Science, 7(4), Article 191795. Schmidt, M. F. H., & Sommerville, J. A. (2011). Fairness expectations and altruistic sharing in 15-month-old human infants. PLoS ONE, 6(10), Article e23223. Scola, C., Holvoet, C., Arciszewski, T., & Picard, D. (2015). Further evidence for infants’ preference for prosocial over antisocial behaviors. Infancy, 20(6), 684–692. Shimizu, Y., Senzaki, S., & Uleman, J. S. (2018). The influence of maternal socialization on infants’ social evaluation in two cultures. Infancy, 23(5), 748–766. Sloane, S., Baillargeon, R., & Premack, D. (2012). Do infants have a sense of fairness? Psychological Science, 23(2), 196–204. Sommerville, J. A., Schmidt, M. F. H., Yun, J., & Burns, M. (2013). The development of fairness expectations and prosocial behavior in the second year of life. Infancy, 18(1), 40–66.

Spelke, E. S., & Kinzler, K. D. (2007). Core knowledge. Developmental Science, 10(1), 89–96. Surian, L., Ueno, M., Itakura, S., & Meristo, M. (2018). Do infants attribute moral traits? Fourteen-month-olds’ expectations of fairness are affected by agents’ antisocial actions. Frontiers in Psychology, 9, Article 1649. Tafreshi, D., Thompson, J. J., & Racine, T. P. (2014). An analysis of the conceptual foundations of the infant preferential looking paradigm. Human Development, 57(4), 222–240. Tasimi, A., & Wynn, K. (2016). Costly rejection of wrongdoers by infants and children. Cognition, 151, 76–79. Ting, F., & Baillargeon, R. (2021). Toddlers draw broad negative inferences from wrongdoers’ moral violations. Proceedings of the National Academy of Sciences, 118(39), Article e2109045118. Turiel, E. (1983). The development of social knowledge: Morality and convention. Cambridge University Press. Vaish, A., Grossmann, T., & Woodward, A. (2008). Not all emotions are created equal: The negativity bias in social-emotional development. Psychological Bulletin, 134(3), 383–403. Van de Vondervoort, J. W., Aknin, L. B., Kushnir, T., Slevinsky, J., & Hamlin, J. K. (2018). Selectivity in toddlers’ behavioral and emotional reactions to prosocial and antisocial others. Developmental Psychology, 54(1), 1–14. Wang, Y., & Henderson, A. (2018). Just rewards: 17-month-old infants expect agents to take resources according to the principles of distributive justice. Journal of Experimental Child Psychology, 172, 25–40. Woo, B. M., Steckler, C. M., Le, D. T., & Hamlin, J. K. (2017). Social evaluation of intentional, truly accidental, and negligently accidental helpers and harmers by 10-month-old infants. Cognition, 168, 154–163. Wynn, K., & Bloom, P. (2013). The moral baby. Routledge Handbooks Online. Young, L., Cushman, F., Hauser, M., & Saxe, R. (2007). The neural basis of the interaction between theory of mind and moral judgment. Proceedings of the National Academy of Sciences, 104(20), 8235–8240. Ziv, T., & Sommerville, J. A. (2017). Developmental differences in infants’ fairness expectations from 6 to 15 months of age. Child Development, 88(6), 1930–1951. Ziv, T., Whiteman, J. D., & Sommerville, J. A. (2021). Toddlers’ interventions toward fair and unfair individuals. Cognition, 214, Article 104781.


19 An Integrative Approach to Moral Development During Adolescence

Abigail A. Baird and Margaret M. Matthews

B: Nothing’s ever simple anymore. I’m constantly trying to work it out. Who to love, or hate . . . who to trust. It’s just like the more I know, the more confused I get. G: I believe that’s called growing up. B: I’d like to stop, then. Okay? G: I know the feeling. B: Does it ever get easy? G: You mean life? B: Yeah. Does it get easy? G: What do you want me to say? B: Lie to me. G: Yes, it’s terribly simple. The good guys are always stalwart and true. The bad guys are easily distinguished by their pointy horns or black hats. And, uh, we always defeat them and save the day. No one ever dies and everybody lives happily ever after. B: Liar. (Whedon, 1997) These are the words of Buffy Summers, a seemingly ditzy cheerleader who lives in a fictional universe where there are vampires, demons, and forces of darkness. As fate would have it, Buffy is the “Slayer” of her generation, a single girl granted the superpowers needed to fight evil and save the world. Every Slayer is given a “Watcher,” in Buffy’s case, Giles, the school librarian. The Watcher’s sworn duty is to help the Slayer grow into a role she did not ask for and cannot refuse. Buffy is highly ambivalent about her newfound responsibility, as she is often torn between balancing the social expectations of high school with decisions of who, what, and when to “slay.” In the relatively simple exchange above, Buffy exemplifies the adolescent mind in striking detail. It is easy to soothe a young child with simple truths about right and wrong, but it is not as simple with the onset of puberty and adolescence. It is almost a cruel joke that the adolescent mind can reason about complex moral dilemmas and yet often lacks the experience to formulate and execute an effective plan to resolve them. While it is often hard to navigate, moving through the developmental stage of adolescence is the only way to 462

Moral Development during Adolescence

acquire a mature moral sense. Over neurodevelopmental time and with adequate lived experience, humans become adept at behaving in accordance with the moral standards of the world around them. Morality can be described as an intricate system of beliefs, values, and ideas that ultimately influences how an individual distinguishes between right and wrong and acts upon these judgments (Ellemers et al., 2019; Haidt, 2008; Kalsoom et al., 2012). Our evolved morality is thought to be one of the things that truly sets us apart from other species. Given the importance of morality in maintaining human life, it is remarkable that humans do not come hardwired with (i.e., are not born with) a moral sense. Decades of science and philosophy have demonstrated that we are born with a capacity to acquire morality shaped by the individual’s contextually constrained lived experience (A. Dahl & Killen, 2018; Narvaez & Lapsley, 2014). The heterogeneity observed in moral development is at once compelling and nebulous. Like many human phenomena, morality has a developmental course shaped by a highly idiosyncratic interaction of nature and nurture (Baird, 2007). At present, the nature of this interaction, in practical terms, prevents the construction of a singular model that accurately captures how each human acquires a mature moral sense. Thankfully, these constraints have done little to discourage religious scholars (R. M. Thomas, 1997), philosophers (Heidegger, 1927/1992; Vygotsky, 1978), and scientists (Gilligan, 1982; Kohlberg, 1969; Piaget, 1932/1965). In this chapter, we acknowledge this complexity while aiming to add to the discourse on the developmental processes by which one acquires a mature moral sense. One way we can begin to parse the complexity of this topic is to examine the role that relevant individual differences play in shaping the course of moral development. While acknowledging that enormous individual differences exist, many of which are beyond the scope of this chapter, we will suggest that the neuropsychological changes during adolescence set the stage for the development of a mature moral sense. This chapter endeavors to highlight the importance of temperament, gender, relationships (both early attachments and peer influence), and lived experience (as opposed to lessons passed on through spoken or written word) during adolescence. We will also argue that integrating intense visceral emotion with social cognition, which takes place in adolescence, is essential for fully developed moral reasoning. In addition to the developmental path itself, consideration of how individuals interact with their specific environments – the actions they take (often influenced by their temperament, attachments, and gender) and lessons they learn from the people and places around them (often influenced by their attachments and lived experience) – is critical to advancing our understanding of moral development (Piaget, 1932/1965; Waite-Stupiansky, 2017).

19.1 Adolescence

Adolescence is the period of life between puberty and adulthood. Puberty refers to a constellation of physical changes that make an individual capable of sexual reproduction. While estimates vary, pubertal onset generally
occurs between 10 and 12 years of age for girls and between 13 and 15 years of age for boys. Once a child is of reproductive age, they have entered adolescence but are still far from adulthood. Adolescence describes this transitional time in which the individual undergoes significant physiological, social, emotional, and cognitive changes that, over the years, enable them to become an adult member of society (Blakemore, 2008). Adolescence is the last developmental stage before an individual enters the adult world, where they will spend most of their life. Adolescence is the social, cultural, and emotional manifestation of puberty. Importantly, adolescence is highly variable depending on the country, era, socioeconomic milieu, and everything related to the variety of contexts (both within and surrounding the individual) where puberty occurs. More succinctly, puberty is the hardware and adolescence is the software for creating a psycho-socially healthy adult. In comparison to puberty, the functionality and importance of adolescence are poorly understood. There are likely a few reasons for this disparity, but the most important of them is likely the enormous heterogeneity characteristic of adolescence relative to puberty (which is more or less homogeneous). Beyond its central role in transforming children into reproductively viable adults, adolescence is a sensitive period for sociocultural processing (Blakemore & Mills, 2014). Unprecedented gains in abstract thinking that follow the neural maturation of puberty enable adolescents to engage in a profoundly revised form of perspective taking in which they are able to simultaneously consider multiple unique personal points of view, as well as the reasons for said points of view while engaging in social interactions (Spenser et al., 2020). Of course, these are complex social behaviors that do not emerge all at once and require a fair amount of practice to acquire. This acquisition is likely part of what gives rise to some of the “growing pains” often associated with adolescence (Nelson et al., 2014). What young adolescents have yet to realize, however, is that the skills they are acquiring during this time are the result of a critically important learning process. Driven by brain development, social context, and lived experience, this period in development provides adolescents with the skills needed to engage in adult levels of moral reasoning and moral behavior. The nature of moral development requires a period like adolescence in order to integrate an individual’s idiosyncratic constellation of behaviors with those of larger society. This requires a great deal of experience and practice, making adolescence the ideal time for scientists to observe how humans come to have a moral sense. In the same way that toddlers go through a sensitive period for learning a language (Newport et al., 2001), adolescents experience a sensitive time during which, due to the reappearance of increased neural plasticity in their rapidly developing brains (Takesian & Hensch, 2013) and the accumulation of increasingly independent experiences, they are able to learn from their social worlds at a speed not seen at any other point in development (Blakemore & Mills, 2014). During the adolescent years, individuals are able to make optimal use of the most relevant information from the social context around them. For most adolescents, this will be a product of the community and culture

in which they reside, intertwined with influence from the contemporary culture within their peer group. In sum, moral development during adolescence is the result of the emergence and integration of biopsychosocial factors, including neuronal maturation, changes in cognition, and a seismic shift from a parent-centered to a peer-centered social world. It is also the only (albeit highly heterogeneous) developmental path that enables the emergence of mature moral reasoning. Few would argue with the idea that we place different behavioral expectations on children and adults; at the same time, it is difficult to define what is expected of individuals who are in between these developmental stages, namely adolescents. Adolescence is the socio-cognitive dress rehearsal for adulthood, and as such, its central function is to enable the transition from the relatively insular and predictable world of the child to the world of the adult, where relationships, agency, and responsibility form the bedrock of culture and society. Piaget described the unprecedented gains in abstract thinking that follow the neural maturation of puberty. His concept of "formal operations" describes the idea that adolescents develop the ability to think abstractly, reason logically, and use deductive reasoning for the first time in their lives (Piaget, 1932/1965). Formal operations also gives rise to metacognition, a cognitive process that provides essential scaffolding for the development of moral reasoning. Metacognition involves, among other things, thinking about one's own thinking processes, which in turn enables self-reflection. While critically important, the emergence of this ability is just the beginning of a learning process, driven by lived experience, that will equip adolescents with the skills needed to engage in adult levels of moral reasoning and moral behavior. The means by which adolescents come to integrate their own beliefs with those of the people around them (at both proximal and distal levels) is precisely what prepares an adolescent to enter the adult social world with a mature moral sense. Understanding the beliefs of others requires the ability to understand not only the thoughts but also the emotions of oneself and others. While Piaget's early theories focused on cognitive development, it is worth noting that the emergence of formal operations also enables individuals to think about their own emotions, as well as the emotions of others. In addition to Piagetian theory, the notion of being able to think about one's own emotions as well as those of others has also been supported by work on the development of self-conscious emotions. Self-conscious emotions, such as guilt, pride, and jealousy, are a set of complex emotions that, while detectable in young children, only become fully evident relatively late in development and require specific reciprocal interactions between cognitive, emotional, and social processes for their elicitation (Lewis, 2007). Primary emotions such as joy, sadness, fear, and anger appear earlier in development and have received considerable attention in developmental research. Self-conscious emotions, which emerge later, have received considerably less empirical attention. There are likely many reasons for this. One reason may be that self-conscious emotions cannot be described solely by

examining a particular set of facial movements (as some researchers have described primary emotions); they require the observation of both bodily action and facial cues (Darwin, 1872). The elicitation of self-conscious emotions involves elaborate cognitive processes that are driven by the awareness of a unique self within a social context. Any thorough account of moral development requires consideration of the role of cognition and emotion. Importantly, when considering cognition and emotion in the context of morality, the integrated nature of these processes should be acknowledged. Constructivists have asserted that among adults, emotion and cognition are, in fact, not separate processes (Barrett & Satpute, 2019). As accurate as the constructivist approach may be, within psychology, many researchers have maintained that cognition and emotion are, at the very least, phenomenologically distinct (Duncan & Barrett, 2007). Traditionally, research on the role of emotion in moral development has focused on empathy and other vicarious emotional responses (Eisenberg et al., 1994; Eisenberg & Shell, 1986; Hoffman, 1991). Given its clear importance and complexity, it is not surprising that there are a number of ways to conceptualize and operationalize human empathy (Batson, 2009). Keeping in mind that empathy is most likely a constellation of mental and behavioral processes, it is worth highlighting the aspects that inform morality. The emotional aspects of empathy are directly relevant to moral development and are thought to involve experiencing the same feelings as another individual (Decety & Cowell, 2014), what Batson (2009) calls "coming to feel as another person feels." This may sound like a relatively simple thing to do, and it may be easy for a mature human, but how one becomes capable of being emotionally aroused by, and attuned to, the emotions of others undoubtedly results from a confluence of developmental processes. A number of researchers have noted that empathy involves cognitive, motivational, and emotional processes (Decety & Cowell, 2014) and, further, that the development of empathy closely parallels the development of more general cognitive skills (Fabes et al., 1999). As a result of adolescence, thoughts and behaviors become more fully integrated with emotions that, in their most evolved and mature state, produce socially and morally appropriate behavior more efficiently than they did in childhood (Baird, 2007). The synthesis of emotional (often visceral) and cognitive information is critical to successful maturation. Initially, evidence of this synthesis is reflected in the involvement of cognition in second-order representations. This involvement takes the form of reasoning about the connection between bodily states and their environmental (either internal or external) correlates. Awareness of one's visceral sense is the next important step toward adult-like moral reasoning. For example, the adolescent learns to associate "a sinking feeling" in their stomach with the anticipation of parents' potential feelings of displeasure and/or disappointment regarding the adolescent's behavior. In simple terms, this means that engaging in abstract reasoning about a potential future event elicits a bodily response. While young children may describe feeling "bad" (some may even describe a stomach ache) when

parents reprimand their behavior in real time, most young children do not experience somatic changes in anticipation of their parents’ potential feelings; rather, they worry about “getting in trouble” for engaging in the behavior that violates their parents’ rules. During adolescence, the concern that was associated with potential punishment for “breaking the rules” as a young child is replaced with a visceral sense of guilt (often in the form of discomfort in the gut). The ability to integrate emotion and bodily sensation with cognition is a critical step toward moral maturation. This phenomenon has also been described as the somatic marker hypothesis by Damasio and his colleagues. The emergence of somatic markers enables adolescents to avoid committing transgressions because they become capable of anticipating how they might feel following a variety of events (Damasio, 1994). The child who hesitates before breaking one of their parent’s rules would most likely possess the physiology of a fear-conditioned animal: increased amygdala arousal, heightened vigilance, amplified heart rate, and muscle tension – everything required for the “fight or flight” response. In contrast, an adolescent who hesitates before breaking one of their parent’s rules would most likely possess the physiology of someone who is about to ingest something that previously gave them food poisoning, or more specifically, increased activity in the anterior cingulate and insula accompanied by some form of gastric distress. At this point, “feeling bad” has taken on a new meaning rooted in the cognitive and emotional processes of the individual. Guilt has now been transformed from “I am afraid of what will happen (when my parent finds out)” to “I feel absolutely sick about what happened.” In Piagetian or Kohlbergian terms, heteronomous moral behavior, which describes behavior based on external rules, authority figures, and consequences, rather than internal principles (Kohlberg, 1969; Piaget, 1932/1965), helps young children learn the larger, more general moral standards, the pervasive ideals that allow children to navigate the social expectations of their caretakers. This learning most likely relies on neural hardware (e.g., the amygdala) that develops early and supports processes like fear conditioning. By having the necessary brain structures in place, the young child is able to quickly learn the most important, ubiquitous moral rules from caregivers and other authority figures. As development continues, heteronomous moral reasoning continues to enable more mature individuals to rely on the “fundamentals” of morality when they encounter social contexts with uncertain moral standards. For example, teens can rely on early conditioning to exhibit “good manners” when visiting a friend’s home. Put simply, you do not need to know everything about a friend’s parents to know that showing some deference to them and saying “please” and “thank you” are very likely good ideas. In this relatively low-stakes example, heteronomous moral reasoning provides guidance for the adolescent’s behavior. During adolescence, individuals continue to rely on heteronomous moral reasoning; however, the maturation and integration of many aspects of their cognitive and emotional systems support the emergence of autonomous moral reasoning.

Autonomous moral reasoning describes the process of making moral decisions based on one’s own principles, values, and critical thinking, rather than on external authorities (e.g., parents), societal norms, or awareness of what is legal. In this framework, individuals independently evaluate what is right or wrong, using reason and personal moral judgment rather than relying on the conditioned responses imparted by people and experiences of their childhood. It is more complicated to acquire and comply with one’s own internal standards of right and wrong and successfully integrate those with environmental expectations. Undoubtedly, this is why the neural structures that contribute to autonomous moral reasoning (e.g., the insula and anterior cingulate cortex) produce such intense somatic experiences. Historically, researchers have attempted to parse the cognitive, social, and emotional development that takes place during this time, some going so far as to suggest that developmental processes oppose one another (Somerville et al., 2010; Steinberg, 2005). It is far more likely, however, that adolescence is a time of critical integration of these processes (Crone & Dahl, 2012; Kilford et al., 2016). With regard to moral development, the integration of neural and behavioral maturation with personal experience during the adolescent years is essential for the emergence of a mature moral sense.

19.2 Brain Development

The developmental changes in cognition seen during adolescence largely result from the maturation of working memory, selective attention, error detection, and inhibition, all of which have been shown to improve with maturational changes in brain structure and function. Perhaps the most consistently reported finding associated with adolescent brain development is substantial refinement in the transmission capacity, speed, and coordination of connections throughout the cortex, especially within the prefrontal cortex (Giedd et al., 1999; Sowell et al., 1999). The prefrontal cortex is of paramount interest in human development because of its well-understood role in the integration of cognitive, social, and emotional processes in adulthood. The converging evidence of prolonged development of the prefrontal cortex throughout childhood and adolescence highlights the synergy between brain development and behavioral development (Chugani et al., 1987; Diamond, 1988; Huttenlocher, 1979; Luna et al., 2013). During adolescence, the rapidly developing brain adopts a "use it or lose it" process that is driven by lived experience. This means that neural connections that support recurrent behaviors are retained and fortified. It follows that a response to a particular event in the environment will be potentiated by repeated exposure and subsequent strengthening of the relation between that event and the generation of the appropriate response (Takesian & Hensch, 2013). The delayed maturation of the prefrontal cortex, together with its growing connectivity to other brain regions, allows an individual to acquire lived experience

and thereby adapt to the particular demands of their unique environment (R. Dahl & Suleiman, 2017). One specific frontal region within which increases in connectivity and coordination have been observed during adolescence is the anterior cingulate cortex, an area known for its prominent role in the mediation and control of emotional, attentional, motivational, social, and cognitive behaviors (Vogt et al., 1992). A significant positive relationship between age and total anterior cingulate volume has been well documented (Casey et al., 1997). It is thought that this relationship may reflect improved cortical-cortical and cortical-subcortical coordination (Hwang et al., 2010; Hwang et al., 2016). The projections from both cortical and subcortical regions to the anterior cingulate observed in adults are known to support improvements in the coordination and regulation of cognitive and emotional processes (Galván, 2013; Luna et al., 2013). A critical question with regard to human development has been the exact developmental course of these projections. Maturation of the dorsal anterior cingulate cortex has been consistently related to self-control and behavioral inhibition (Luna et al., 2015), and it appears that the right anterior cingulate plays a specific executive role in the integration of autonomic responses with behavioral effort (Critchley et al., 2001). Furthermore, activity within the dorsal portion of the anterior cingulate cortex has been shown to play a crucial role in autonomic control and the conscious interpretation of somatic state, indicating that it may be an important center for the creation of second-order representations of body state. Second-order representations result from the integration of first-order sensory information (from a number of cortical and subcortical brain regions) with cognitive and contextual information. Although the notion that an individual's perception and subsequent interpretation of their bodily states determines their emotional experience is an idea that dates back to the work of William James (1897/2014), understanding the concomitant social neuroscience better informs how we understand the developmental course of second-order representations. Second-order representations highlight the role that socio-emotional experience can play in decision making. The somatic marker hypothesis suggests that external or internal stimuli initiate a bodily state that becomes associated with pleasurable or aversive somatic states, such as feeling nauseated in response to the thought or memory of committing a significant moral transgression (Damasio, 1994). According to Damasio's theory, optimal decision making is not simply the result of rational calculation of gains and losses but rather is driven by the pleasant or aversive emotional reactions to prior outcomes of choices. He posits that, in essence, rational choice is guided by emotional reactions that bias decision making. Over time, somatic markers help to reduce the complexity of decision making by providing a "gut feeling" that does not require effortful cognition (Hinson et al., 2002). Somatic markers guide a person's behavior by promoting actions that result in pleasurable embodied feelings and inhibiting actions that result in aversive feelings. For example, when a choice is followed by an aversive outcome, an

469

470

   .           .     

embodied response becomes associated with that choice. Once the pairing is sufficiently well established, specific situations and/or cognitive states evoke the concomitant bodily sensation before a choice is made. Ideally, the somatic anticipation of an unpleasant outcome prevents socially undesirable behavior and encourages prosocial interactions. Thus, a somatic marker of “good” and “bad” options improves the probability of ecologically, socially, and emotionally adaptive decision making. Given the complexity of this process, it is not surprising that a network of brain regions has been consistently described as critical for the initial representation and the reenactment of somatic markers. The regions believed to be primarily responsible for the initial (or primary) induction of somatic markers include the amygdala, a subcortical structure that matures early and helps the brain detect novel stimuli and initiate a response (Bechara et al., 2003; Damasio, 1994), and the anterior portion of the cingulate cortex (Poppa & Bechara, 2018; Sobhani & Bechara, 2011), a cortical region that helps direct attention with regard to emotional stimuli. Once these regions create a primary representation of the aversive or pleasant outcome, other brain regions are left to create bodily states, called secondary inducers (Damasio, 1995). Secondary inducers require the participation of association areas, including a ventromedial portion of the prefrontal cortex and the insula (located deep within the lateral sulcus) (Bechara, 2001; Poppa & Bechara, 2018). These regions rely on lived experiences and ideas that we will return to in later sections of this chapter. Additionally, all of the brain regions we have reviewed as being involved with the somatic marker hypothesis have also been consistently associated with both cognitive and emotional processes, suggesting that brain regions most capable of integrating the two (and likely other processes) are central to the production of adaptive prosocial behavior. In this view, activity within the brain regions that contribute to the second-order bodily experience, specifically the insular cortex, signals the intensity of the somatic state. If the activity within the insula is associated with aversive somatic markers, a relative increase in activity during a decision-making situation would likely indicate a potentially aversive outcome and help guide the individual to avoid the selection of an undesirable outcome (Paulus et al., 2003; Zinchenko & Arsalidou, 2018). Understanding the social neuroscience of emotional responses is critical to understanding how individuals acquire a moral sense. Somatic markers work by using the rapid, automatic (emotional) responses to experiences that are generated by the primary inducers, and over time, with experience, secondary inducers are able to (cognitively) recreate these primary (emotional) somatic states that inform decision making. Damasio (1995) also included an interesting insight regarding the idea that primary inducers, especially the amygdala, may have attributes that contribute to an individual’s emotional reactivity to the world around them. This idea is particularly relevant to developmental psychology, as it offers a neurobiological basis for something that has long been understood to play a foundational role in moral development: human temperament.
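To make the logic of somatic-marker-guided choice concrete, the sketch below is a deliberately minimal toy model, not an implementation drawn from Damasio or Bechara: each option accumulates a “marker” value from the affective outcomes it has produced, and those values then bias later choices. The option labels, payoffs, learning rate, temperature, and softmax choice rule are all illustrative assumptions.

```python
# Illustrative toy model only (not a model proposed in this chapter or in the
# somatic marker literature): options accumulate "somatic marker" values from
# affective outcomes, and those values bias subsequent choices.
import math
import random

random.seed(1)

def affective_outcome(option):
    # Hypothetical payoffs: "risky" pays well but occasionally hurts badly;
    # "safe" pays modestly and rarely hurts.
    if option == "risky":
        return 10 if random.random() < 0.5 else -12
    return 5 if random.random() < 0.8 else -2

markers = {"risky": 0.0, "safe": 0.0}   # learned affective value per option
LEARNING_RATE = 0.2
TEMPERATURE = 1.0                       # lower values let markers dominate choice

def choose():
    # Pleasant markers promote an option; aversive markers inhibit it
    # (the "gut feeling" biasing the decision before deliberation).
    weights = {option: math.exp(value / TEMPERATURE) for option, value in markers.items()}
    r = random.uniform(0, sum(weights.values()))
    for option, weight in weights.items():
        r -= weight
        if r <= 0:
            return option
    return option

for _ in range(200):
    option = choose()
    feeling = affective_outcome(option)  # stand-in for the embodied reaction
    # Fold the new affective outcome into the stored marker (incremental update).
    markers[option] += LEARNING_RATE * (feeling - markers[option])

print(markers)  # the risky option typically ends up with an aversive marker
```

Run repeatedly, the aversive marker attached to the risky option comes to suppress its selection even though no explicit cost-benefit calculation is performed, which is the spirit of the “gut feeling” described above.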

19.3 Temperament One of the most elegant facets of human development is that we are born with biological predispositions that, among other things, form the foundations of personality. Importantly, these predispositions are influenced throughout the life course by experiences, environments, and cultures. These predispositions, best described as temperament, are a critical part of understanding how one acquires a moral sense. Human temperament corresponds to individual differences in the way that children, from birth, display unique emotions, activity levels, and attention in response to nearly identical stimuli or situations regardless of context (A. Thomas & Chess, 1977). While there has been ample evidence that most aspects of temperament are the outcroppings of our biological bedrock, both in terms of genetic heritability and brain function (Schwartz et al., 2012), it is also important to note that lived experience often has a significant influence on the way in which temperament is displayed (Tsui et al., 2017). Temperament itself does not change, but as individuals mature, they learn about their temperament and how it is best displayed in a social world. While there are innumerable aspects of human temperament, one of the most well-studied is the manner in which individuals respond to unfamiliar or unexpected events. Jerome Kagan’s classic work examined children whose temperamental styles were described as “low reactive” and “high reactive” (Kagan, 1994). Behavioral and psychophysiological studies have consistently shown that high-reactive infants display relatively higher levels of motor activity and crying at four months old to unfamiliar or unexpected events, whereas low-reactive infants showed no increase in either motor activity or crying under identical conditions (Kagan, 1997). Kagan and colleagues followed this infant cohort through to adulthood and found that these fundamental differences in the level of measurable distress (either physiologically, behaviorally, or sometimes both) in response to unfamiliar or unexpected events did not change. Children were reevaluated at 4, 5, 7, 11, 15, and 18 years old, and at every evaluative point, those labeled high reactive as infants consistently displayed more signs of uncertainty in the form of caution, inhibition of spontaneity, or avoidance of unfamiliar events. High reactives were less likely to smile and engage in small talk with unfamiliar experimenters, were consistently less social with unfamiliar peers who were their own age, and at older ages reported more worry over having to cope with unforeseen events over which they would have no control, such as a plane crash or getting lost in the subway (Kagan, 2018). In a series of complementary studies, the amygdala (the brain’s alarm) of high reactives has been shown to be more responsive to novel events (Schwartz et al., 2010). In addition to differences in the excitability of the amygdala, temperamental differences have been shown to correlate with differences in the structure of the prefrontal cortex (Schwartz et al., 2010). These findings are not surprising given that the specific areas of the prefrontal cortex are known to be more responsible for regulating impulsive and hedonic behavior,

while the other portions are known to contribute to integrating a number of aspects of processing, regulation, and decision making when emotional stimuli are involved, especially those that are self-relevant in function (Goldin et al., 2008). Given the aforementioned differences in prefrontal structure and function (i.e., behavioral regulation), it follows that low reactives may be more prone to impulsive and hedonistic behavior that could violate moral standards. It may also be the case that they are simply less fearful of consequences in general (Stifter et al., 2009). High reactives have brain differences that heighten their sensitivity to threats, such as a more reactive amygdala, which also makes them more likely to adhere to moral standards by closely regulating their behavior to avoid punishment or disapproval. While specific temperaments do not lead to precise predictions about moral behavior, it is critical to understand that infants come with different temperaments, and these differences are deeply rooted in their neurophysiology. Jerome Kagan has referred to the “long shadow of temperament” because there are likely no human behaviors, especially those essential for survival, that are not influenced by temperament (Kagan & Snidman, 2004). In addition to the clear implications that temperamental differences may have for morality, given the relatively hardwired nature of temperament, it is hard to imagine that its influence is not enmeshed with other developmental processes. Keeping in mind that moral norms are social constructs, it seems important to consider the role attachment plays in casting the long shadow described by Kagan.

19.4 Attachment All humans are perpetually dependent upon the synchronous exchange of emotion with others for stability. Early life, however, represents an important period during which emotional input from caregivers has a profound influence on development (Zeifman, 2019). As originally conceptualized by John Bowlby, attachment behaviors were evidence of a biologically based behavioral system that promotes the viability of the infant through the formation of a bond, based on physical proximity, with its mother (Bowlby, 1969). Infant attachment represents a constellation of behavioral-physiological interactions that begin prenatally and continue to evolve postnatally (Insel, 1997). In this view, from the first day of life, mother and infant participate in a vital exchange of signals that will have crucial implications for neurodevelopment and, therefore, for the nature and capacities of the adult that the infant will become. From their earliest encounter, the caregiver participates in the regulation of the homeostatic state and the neurodevelopment of the infant whom they are caring for (Ciaunica et al., 2021; Kraemer, 1992). Neurophysiologically, human attachment is believed to rely heavily on the hormone oxytocin and the neural structures with which it interacts (Carter, 2017; Strathearn, 2011). Research has demonstrated the importance of oxytocin in complex forms of social memory, such as offspring recognition and

pair-bonding (Campbell, 2010; Lin & Hsu, 2018), and levels of oxytocin have been shown to increase steadily following birth, with a peak in early childhood (Rokicki et al., 2022). Social memory is believed to rely heavily on how sensitive an individual’s limbic system is to oxytocin (Campbell, 2010; Lin & Hsu, 2018), more specifically, the amygdala (Gamer et al., 2010). Although this suggests an important, albeit complex, relationship between temperament and attachment (Groh et al., 2017), it does seem to be the case that the neurobiology of temperament exerts influence on early attachment through a multitude of other processes that are highly individualized and are only now in the early stages of being studied and understood (Oliveira & Fearon, 2019). What is clear is that there is an important interplay between the infant’s and caregiver’s neurobiology through which attachment facilitates the infant’s learning about the world around them. Fariborz et al. (1996) review evidence supporting the hypothesis that human infants are equipped with a functional memory system at birth. This early memory system is largely implicit and is ideal for forming memories that do not require volitional attention or linguistic understanding. This type of memory also encompasses experiences and “automatic” behaviors that are typically outside of the individual’s awareness (e.g., conditioning). The implicit memory system is able to preserve information that is not available for conscious recollection or awareness yet is still able to create enduring and observable changes in behavior (Milner et al., 1968; Reber, 2013). The underlying patterns and regularities in the attachment relationship are thus detected, extracted, encoded, and stored. In this manner, the growing infant collects knowledge regarding what relationships are like, how they are conducted, and by which “rules” relational behavior is regulated. However, because the operative memory system is implicit, knowledge is acquired in the form of generalizations and rules extracted from experience, and it later operates to influence behavior in an unthinking, reflexive manner. In a process that may be somewhat analogous to the learning of motor skills, people proceed in later life to enact attachments in accordance with the rules or prototypes they have extracted based on prior experience (e.g., bicycle riding). Simply, the memories of the earliest attachment create an enduring neural structure that exerts a significant and lifelong influence on most, if not all, of an individual’s relationships. Given the extent to which humans learn about morality within a social context, considering how attachment may promote or suppress prosocial behavior is critical to understanding how mature morality emerges. The successful development of a human infant relies critically on its ability to create internal representations of how relationships are formed. The infant’s gradual internalization of the affectively driven communication with their caretaker both creates and continually refines the structure and function of neurobehavioral systems underlying human attachment (Fariborz et al., 1996). The relationship between emotion and implicit memory is a reciprocal one, in which the character and development of emotion are as much influenced by memory processes as these processes are influenced by emotion

(Alexander et al., 2010). In summary, the initial attachments with caregivers create the structures humans need to care for one another and to understand and respond to the wants and needs of other people, capacities that form the foundation of a moral sense. To illustrate the infant–caregiver bond, consider the analogy of clapping. It takes two hands to clap, and though a lack of sound indicates no clapping, it is not always obvious which hand is not cooperating. Traditional models of attachment disruption focus on a disordered or inadequate attachment figure (Jones et al., 2015), but this is simply one side of the equation. Modern attachment theorists have concluded that sociability is a natural consequence of a beneficial infant–parent attachment and, further, that the quality of being securely attached is preserved from infancy to later childhood. It is important to note here that the use of “secure” here is not meant to imply a category of attachment but rather the quality of attachment. There has been considerable empirical evidence that the categorization of attachment relationships into types may be misleading. More parsimonious models have suggested that variation in patterns of attachment is largely continuous rather than categorical, which makes much more sense when thinking about it as a trait capable of reflecting individual differences such as differences in temperament (Fraley et al., 2015). A large body of literature has consistently shown that disruption of an individual’s attachment to their primary caregivers can be detrimental and, in some cases, even lethal (Bowlby, 1969; Ward et al., 2000). Some studies have focused on how disordered attachment reliably impairs adolescents’ ability to empathize with others (Boele et al., 2019) and significantly increases the potential for antisocial and immoral behavior (Hoeve et al., 2012). However, relatively little is known about the precise combination of attachment-related deficits that so deeply disrupts some individuals yet seems to leave others relatively intact (Fonagy & Bateman, 2016). It is not unreasonable to speculate that a more thorough understanding of the role that disordered attachment plays in antisocial behavior would vastly improve our understanding of how attachment patterns influence morality across the lifespan. While there is little doubt that early attachments predict the qualities of relationships in later life, when it comes to the emergence of a socially constructed moral sense, attachment to one’s age-matched peers may be as predictive as early attachments, if not more so.
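Returning to the implicit extraction of relational regularities described earlier in this section, the sketch below is a deliberately simple illustration rather than a model from the attachment literature: repeated caregiver interactions are tallied without any explicit rule being stored, and the resulting frequencies later bias expectations in a reflexive way. The situation labels, response labels, and example history are hypothetical.

```python
# Purely illustrative sketch (not a model from the attachment literature):
# relational "rules" are extracted implicitly as frequency counts over repeated
# caregiver interactions and then guide expectations without deliberation.
from collections import Counter, defaultdict

interaction_history = [
    # Hypothetical (situation, caregiver response) pairs accumulated in infancy.
    ("distress", "comforted"), ("distress", "comforted"), ("distress", "ignored"),
    ("bid_for_play", "engaged"), ("bid_for_play", "engaged"),
    ("distress", "comforted"), ("bid_for_play", "ignored"),
]

# Implicit encoding: no explicit rule is represented, only tallies of what happened.
working_model = defaultdict(Counter)
for situation, response in interaction_history:
    working_model[situation][response] += 1

def expected_response(situation):
    # The stored regularities later bias expectations reflexively: the most
    # frequent past outcome is treated as "what relationships are like."
    counts = working_model[situation]
    return counts.most_common(1)[0][0] if counts else "unknown"

print(expected_response("distress"))      # -> "comforted"
print(expected_response("bid_for_play"))  # -> "engaged"
```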

19.5 The Importance of Peers Across developmental time, adolescents’ self-concepts change as they realize that they are unique individuals. However, being an individual does not necessarily reveal much about your specific identity or how you fit into the different domains of your life. Adolescents seek validation and approval from numerous groups of people in their lives (e.g., parents, other family members,

adults, friends, and classmates). All of these people contribute to an adolescent’s sense of identity, but all have different values and expectations (Harter et al., 1998). The juxtaposition of these different social influences and pressures may explain why adolescents use different self-descriptions in different social contexts (Harter et al., 1998; Tanti et al., 2011). When children are young, parents dictate what is considered appropriate behavior. Parents encourage and enforce prosocial behavior and act as the authority that oversees the development of a child’s morality. In many ways, parents can be thought of as external prefrontal cortices for their children, helping to interpret environmental demands and construct and execute appropriate responses. Given the behavioral consequences of having an immature prefrontal cortex, parents assume a number of frontal functions by instructing their children in the absence of their own abstract reasoning and behavioral regulation. Parents attempt to maintain control of where and with whom a child associates in order to minimize behavioral transgressions. They also provide real-time feedback that enables children to modify their behavior. Adolescence forever changes the interaction between parents and their children. As children mature, they acquire new social skills that necessitate a renegotiation of their relationships with family members and peers. Despite the cognitive development that takes center stage during puberty, it is important to keep in mind that the evolutionary (and in many cultures vestigial) purpose of puberty is to make individuals capable of contributing to human reproduction. As teenagers develop a functionally reproductive adult body, they also develop an increased interest in sex and things related to sex. So, while the body is preparing for this, the mind is trying to line up the combination of behaviors that will gain individual access to potential mates, who are likely to be closer in age than their parents. As a result, adolescents focus more of their energy on peer groups. Within peer groups, teens learn how to talk, walk, and act around each other (L. J. Walker et al., 2000). Research has suggested that it is not wholly accurate to refer to this time as a switch “from” parents “to” peers. It is more precise to describe this time as one where the parent’s role changes and peers figure more prominently in the social lives of adolescents. Theorists such as Piaget (1932/1965) and Kohlberg (1969) have argued that parents have a minimal and nonspecific role in their adolescents’ moral development primarily because of their position of unilateral authority. Both theorists have expanded on this to say that in adolescence, the role of the parents and family, while not unimportant, pales in comparison with the critical contributions of peers to moral development. Developmental psychologists have asserted that owing to both their equal developmental status and the reciprocal nature of their relationships, peers provide the necessary scaffolding for moral development. More recently, however, research has demonstrated that the development of moral reasoning over a four-year period was predicted by the nature of both parents’ and peers’ interactions in moral discussions (L. J. Walker et al., 2013). While relationships with both parents and peers contributed to moral

reasoning, Walker and colleagues also demonstrated that each type of relationship influenced moral development differentially. The authors reported that parents who focused on dispensing information in lecture-style communication with their children or adolescents were far less effective than those who used a Socratic style that elicited interaction and was aimed at the development of mutual understanding (L. J. Walker et al., 2000; L. J. Walker & Taylor, 1991). In peer interactions, low-level conflict or “interference,” as termed by Walker and colleagues, was the most effective means of facilitating moral development. This interaction style is nearly antithetical to that found to be successful in parental relationships and appears to be the result of different perceptions of personal power within different types of relationships. It has been suggested that the egalitarian nature of most peer-to-peer relationships enables conflict to be productive in terms of moral development (see L. J. Walker et al., 2000, for a review). Regardless of the context, the key to moral development was the individual’s use of their own social, cognitive, and emotional skills to elicit and assimilate the knowledge of others with whom they have relationships (and therefore are able to engage in the emotional reciprocity that comes with attachment). Conversely, situations in which information was imparted in a monological lecture style (more typically associated with parental advice) were associated with very slow rates of moral development (L. J. Walker et al., 2000). This suggests that in family settings where parental influence is imparted through relatively secure attachments, the developing adolescent likely benefits from an attachment figure with significantly more lived experience. Additionally, because parents are most frequently operating at a higher stage of moral reasoning than their children, they are able to offer a consistent base from which their children can learn. Although the knowledge necessary to navigate specific moral dilemmas is most effectively supplied by peers, overarching moral concepts are best imparted by those with higher-level moral reasoning (Killen & Cooley, 2013; L. J. Walker & Taylor, 1991). Just as language is best taught under conditions of shared attention, so too is morality. When the adolescent initiates the interaction, it is an indicator to the parent or caretaker that there is an opportunity to impart adult-level moral principles. There is also a way in which social conflict and its resolution facilitate the development of moral reasoning. There is, however, a clear contextual distinction between parents and peers with regard to the efficacy of moral development through social conflict. In effect, parents cannot do it; only peers can. The finding that conflict within a peer context facilitates moral development seems at first glance counterintuitive but can be interpreted as consistent with the Piagetian view that egalitarian relationships among friends permit less constrained expressions of conflict than do asymmetrical child–parent relationships (Turiel, 2013). It makes adaptive sense, then, that adolescents spend more time with their peers than they do with their parents (Brown, 2004; Brown & Bakken, 2011). During the limited time teens spend away from their friends, if they are not conversing on the phone or computer, they are most likely thinking about their peer groups. Based on the empirical and theoretical data

on the importance of social self-perceptions during the teenage years (Jacobs et al., 2004; Preckel et al., 2013), it is not surprising that teenagers place great importance on their friendships. Adolescents turn to peer groups for emotional support and perceive group approval as an indication of social acceptability (Brown et al., 1993; Masten et al., 2012). Peers have coercive power, providing reinforcement for socially approved behaviors and punishment for noncompliance with group standards (French & Raven, 1959; Kiesner et al., 2002; Savin-Williams & Berndt, 1990). This power is wielded through discreet, subtle means of approval and reward or disapproval, teasing, and rejection. Within groups of friends, this type of social feedback is probably healthy and constructive because it helps adolescents develop mature social skills (e.g., empathy, perspective taking, and good listening skills). This could mean peers inspire teammates to work harder in practice, encourage friends to study harder, and challenge others to try out a new activity. Conversely, it could mean that adolescents pressure their friends into sneaking out, drinking alcohol, and other potentially dangerous risk-taking behaviors (Chein et al., 2011; Patterson et al., 2000). Ironically, the literature seems to most consistently report the inconsistent effects of peer influence on adolescent behavior (Dumas et al., 2012). It is also important to note that the overwhelming majority of research on this topic has been focused on the antisocial impact of peer influence, alongside the assumption that “peer pressure” is something to be ubiquitously resisted (Steinberg & Monahan, 2007). It is only very recently that researchers have begun to develop the means by which to examine the positive effects of peer influence (McConchie et al., 2022). While a more thorough examination of this topic is merited, it is beyond the scope of the current chapter. That said, there can be little doubt that the research we have described underscores how and why acknowledging the significance and influence of peers is paramount to understanding adolescent moral development.

19.6 Gender From early on in development, one of the few ubiquitous, recognizable characteristics a parent can use to make inferences about their child is biological sex based on external genitalia. Although the labels “male” and “female” do not inherently reveal any information about disposition, preferences, or identity, society has expectations from the moment of birth. It is thus not surprising that adults interact with infants differently depending on perceived biological sex (Clearfield & Nelson, 2006; Fausto-Sterling et al., 2015), and a large body of research points to differential socialization throughout the developmental course (Leaper & Farkas, 2015). Young children do not possess secondary sex characteristics, such as facial hair or full breasts, meant to signal their biological sex to others. These traits do not emerge until adolescence simply because they are physical manifestations of the increases in sex hormones that accompany puberty. These observable

differences evolved because they enable people of reproductive age to recognize each other with greater ease and speed (Barber, 1995; Darwin, 1871; Gluckman & Hanson, 2006). At present, males still show more “traditionally male” and females “traditionally female” behavior, but there are increasing numbers of individuals who show a mix of the two (Gilligan, 1982; Muuss, 1988). Normative ideas about how boys and girls should feel and behave, and the common assumption that gender maps cleanly onto biological definitions of sex, mean that certain behaviors are reinforced based on gender. For example, boys are more likely to be rewarded or ignored for physically aggressive behaviors (due to their association with masculinity), while girls are more likely to be scolded for that same behavior and receive reinforcement for masking their anger, building relationships, and focusing on the needs of others (Zahn-Waxler & Polanichka, 2004). Children are active agents in these patterns of socialization: Once they develop an awareness of “self” as a boy or girl, typically between 18 and 24 months of age, they begin actively seeking information about how they should behave and why, integrating this knowledge into gender schemas (Halim, 2016; Martin & Ruble, 2010). These schemas expand and grow more sophisticated with age and experience and are used to inform one’s self-concept and evaluate one’s surroundings (Tobin et al., 2010). Based on the prevalence of gender-based socialization patterns, as humans learn right from wrong, we might expect to see gender-based differences in moral behavior. Certainly, there are well-known gender differences in many of the behaviors central to developing a moral sense. Notably, greater reward sensitivity (Cross et al., 2013) and impulsivity (Cross et al., 2011) in males contribute to a greater propensity for risk taking (D. M. Walker et al., 2017), whereas relative risk-aversion in females is linked to elevated sensitivity to punishment and fear of negative evaluation (Ding et al., 2017; Villanueva-Moya & Expósito, 2021) as well as greater behavioral self-regulation (Wanless et al., 2013). Though prosocial behavior increases from early childhood to mid-adolescence for both groups (Eisenberg & Fabes, 1998; Van der Graaff et al., 2018), females consistently score higher on measures of empathy, perspective taking, and prosocial behavior (Fabes et al., 1999; Van der Graaff et al., 2018). Furthermore, females have been shown to pay greater attention to details and may be more responsive to contextual variables (Meyers-Levy & Loken, 2015). Yet efforts to understand the relationship between gender and moral development have historically been unclear at best. For decades, the field of moral psychology was dominated by Kohlberg’s (1969) stage-based theory, which was developed using all-male samples and has been criticized for emphasizing masculine conceptions of morality. Kohlberg’s model is often called “justice ethics” because it is based on understanding and following basic “rules of the game” – namely, the fixed, universal principles of justice, equality, and human rights. However, when his former graduate student Carol Gilligan systematically interviewed women making major life decisions, she found that they were more concerned with outcomes that reflected caring rather than relying exclusively on

what the rules allowed (Gilligan, 1982). As a response to Kohlberg’s model, she then offered the theory of “care ethics.” Whereas the justice orientation demands one act as an impartial judge to make the morally best decision, the care orientation requires attention to context and the needs of others enmeshed in each moral decision, the best choice being that which best sustains existing human connections. Indeed, studies from social and behavioral psychology corroborate the notion that women are more inclined to stress interpersonal relationships and take responsibility for others’ well-being (Meyers-Levy & Loken, 2015; Shih & Auerbach, 2010). Gilligan, however, while adding critically important data to the field, did not manage to empirically validate her primary sex-based hypotheses (see Gilligan, 1982, for a review of this work). Further, researchers who examined how college students responded to dilemmas from both Kohlberg’s and Gilligan’s measures found that neither sex differences nor self-reported gender characteristics predicted individuals’ responses (Friedman et al., 1987). While the findings of Friedman and colleagues are important, it is hard to know how the use of college students as research participants may have influenced social constructs about both moral reasoning and gender itself. More to the point, there remains a sizable gap in the empirical literature directly addressing the complex interaction of sex and gender with regard to morality. As such, a thorough and consistent understanding of gender differences in moral development has remained elusive, and we believe an incomplete conceptualization of gender as an individual difference variable is at the root of this conundrum. It is clear that in addition to being more thoroughly researched, gender should be treated as multidimensional and flexible rather than as a static, dichotomous variable. A great deal of diversity evidently exists in how gender is expressed, experienced, and understood, and empirical findings that challenge the binary have led psychologists to push for a replacement that stresses multiplicity and fluidity and allows for the possibility that gender is irrelevant to a sense of self (Hyde et al., 2019). For example, some young children never exhibit normative gender rigidity, preferring other-gender-typed activities and social groups and/or identifying with the other gender (Halim, 2016). In some cases, sociocultural factors may explain low levels of gender typing; for example, children raised by parents with nontraditional gender schemas tend to have more flexible gender attitudes and self-concepts (Tenenbaum & Leaper, 2002). Yet in other children, biological factors appear to have greater influence, as demonstrated by the link between high levels of prenatal androgen exposure and a greater tendency toward other-gender-typed interests, behavior, and identity in girls (Hines et al., 2015). Importantly, different dimensions of gender typing do not necessarily correlate with each other (Halim et al., 2013; Tobin et al., 2010), and the stability of such individual differences over time remains controversial (Martin & Ruble, 2010). If anything is clear, it is that the ratio of biological to social-context-based influences on gender typing can vary greatly not only between individuals but also within individuals over time. If we are to responsibly seek out models of moral development, the effects

of both neurobiological factors associated with sex and the sociocultural influences of gender must be prominently considered.

19.7 The Importance of Experience During adolescence, experiential learning increases dramatically, likely driven by significant increases in sensation and novelty seeking, as well as heightened interest in personal and social rewards (Steinberg et al., 2018). In terms of the quality of lived experience, adolescents are more independent (or at least more frequently unsupervised) than children and have access to situations that children do not; as a result, adolescents have far more opportunities for acquiring moral lessons through lived experience. In order to build a useful/functional collection of life experiences, adolescents need to engage in first-hand experience or have attachments that are close enough to produce shared memories for events (Berger et al., 2020). The ultimate function of adolescence is to provide individuals with enough experience to make prosocial choices when they are faced with real-time scenarios. Learning about their unique social worlds requires an active process, such as described in Vygotskian constructivism (Vasileva & Balyasnikova, 2019). Lived experience also allows various emotional and cognitive processes to mature by becoming increasingly coordinated. As a result of the developmental features of the adolescent brain that make risk seem so exciting, most adolescents are predisposed to actively seek out the (often novel) experience necessary for collecting the successes and failures that will contribute to healthy decision making as adults (Glicksohn et al., 2018; Moffitt, 1993). It is no small endeavor to acquire an “experience library” that you can automatically draw on to help guide and regulate your behavior in real time. The ability to understand the demands of a situation and properly call to mind and body one’s previous experience to inform one’s next move is essential to adult decision making (Damasio, 1994; Poppa & Bechara, 2018). In fact, throughout our lifetimes, we all continue to learn our strengths and weaknesses, successes and failures, and rights and wrongs through our lived experiences. What is important to note here is that an adolescent’s brain and behavior uniquely situate them to seek out opportunities for practice. Specifically, their reduced inhibition (Constantinidis & Luna, 2019; Perino et al., 2016), the drive to obtain potential social rewards (Foulkes & Blakemore, 2018), and the desire for peer approval (Somerville, 2013) work together to give teens the “gut” to successfully learn about the moral standards of their worlds. Stated simply, adolescents need a wide range of experiences from which to draw. Younger adolescents need a relatively greater number of firsthand experiences from which to learn. As adolescents mature (and become better at understanding and empathizing with other’s experience), they are more likely to glean moral lessons vicariously by watching others (Topping, 2005). With more experiences to draw upon, teens are likely to make better decisions.

Accordingly, not all the “unreasonable” things teens do are mistakes; some of them are learning opportunities. For example, some teens show an increase in levels of immoral behavior, such as shoplifting or trespassing (Lantz & Knapp, 2024; Moffitt, 1993), but the consequences of these decisions tend to leave a formative impression. As detailed by Moffitt, most adolescents do not continue these behaviors into adulthood. In spite of understanding that adolescents are not fully mature, it is often difficult for adults to understand adolescent reasoning in cases like this because they have developed an automatic and implicit reflex, a product of a lifetime of experience, that tells them to avoid immoral, dangerous, and socially prohibited actions (e.g., shoplifting). The implicit reflex or “gut feeling” that enables adults to intuit that something is not safe physically, morally, or both is the result of coordinated activity in a constellation of brain regions that create somatic markers (Damasio, 1994; Poppa & Bechara, 2018). A critical function of adolescents’ prolonged immaturity is that it enables them to learn about the complexities of the adult social and moral world. During this time, adolescents are still accumulating enough lived experience to acquire a mature “gut feeling” that will enable them to join adult society. These lived experiences will eventually become amalgamated, and their heuristics carefully and implicitly cataloged to make adolescents capable of real-time, socially appropriate thinking and behavior (Baird, 2007; Damasio, 1994). This rather long and arduous process is essential for acquiring a moral sense because, as we have discussed, moral standards are relative and must be internalized by individuals over time (Kagan, 2018). Given this, it follows that the unparalleled speed and extent of brain maturation seen during adolescence are not only driving the behavioral need for experience but also being fundamentally shaped by the outcome of these experiences (Berardi et al., 2015; Larsen & Luna, 2018; McEwen, 2012).

19.8 Final Thoughts In this chapter we have highlighted and reviewed some of the individual differences that may help researchers refine their models of how morality develops. We focused the chapter on adolescence specifically because it is not only a period of tremendous growth but also the last period of development before an individual is fully mature. In terms of understanding how we become moral beings, comparing children to adults may be less informative than other comparisons due to the enormous disparity in social and moral expectations. By examining the changes that take place during adolescence we have a much better chance of elucidating how mature moral behavior emerges. While a great deal of work remains to be done, there is clearly a need for better interdisciplinary consensus on how individual differences should be considered with regard to models of moral development.

Considering the direct and interactive influence of temperament, attachment, peer relationships, and gender on how life is experienced is a critical step toward operationalizing ecologically valid models of moral development. It has been consistently demonstrated that individual differences in behavior are not only products of the brain and experience but also act reciprocally as critical contributors to human development and the subsequent selection/creation of individual experience (Kandler et al., 2021). Given that humans are born with a capacity for moral development, and moral behavior is essential for the survival of our species, the complexity of studying moral development should not discourage researchers from creating more thoughtful and thorough models of how moral behavior develops. For example, future work in this arena might consider Urie Bronfenbrenner’s ecological systems theory (Bronfenbrenner, 1994). Bronfenbrenner created a framework for understanding human development and the multiple influences that shape it. His theory is composed of several nested levels or “systems,” which move from micro to macro. Bronfenbrenner’s ecological systems theory emphasizes that development is not solely the result of an individual’s innate characteristics but is also influenced by the different systems and contexts in which the individual exists. Each system interacts with and influences the other systems, creating a complex interaction of factors that shape an individual’s development. A similar model of systems focused on moral development would undoubtedly include, among many others, the individual differences described in this chapter. There can be no doubt that advancing our understanding of moral development will require multiple approaches that inform one another while also pursuing areas of empirical convergence. As daunting as this endeavor may be, understanding how morality develops has the potential to reveal what is truly unique about being human.

Acknowledgments Excerpt from Buffy the Vampire Slayer © 1997 reproduced courtesy of 20th Television. Written by Joss Whedon. All rights reserved. This work was supported in part by a Faculty Research Grant from the Phebe H. Beadle Fund at Vassar College. The authors would like to acknowledge Debra M. Zeifman and Alexandra Aquilina-Piscitello for their thoughtful insights during the revision of this manuscript.

References Alexander, K. W., O’Hara, K. D., Bortfeld, H. V., Anderson, S. J., Newton, E. K., & Kraft, R. H. (2010). Memory for emotional experiences in the context of attachment and social interaction style. Cognitive Development, 25(4), 325–338.

Baird, A. A. (2007). Adolescent moral reasoning: The integration of emotion and cognition. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 3. The neuroscience of morality: Emotion, brain disorders, and development (pp. 323–342). MIT Press. Barber, N. (1995). The evolutionary psychology of physical attractiveness: Sexual selection and human morphology. Ethology and Sociobiology, 16(5), 395–424. Barrett, L. F., & Satpute, A. B. (2019). Historical pitfalls and new directions in the neuroscience of emotion. Neuroscience Letters, 693, 9–18. Batson, C. D. (2009). These things called empathy: Eight related but distinct phenomena. In J. Decety & W. Ickes (Eds.), The social neuroscience of empathy (pp. 3–15). Boston Review. Bechara, A. (2001). Neurobiology of decision-making: Risk and reward. Seminars in Clinical Neuropsychiatry, 6(3), 205–216. Bechara, A., Damasio, H., & Damasio, A. R. (2003). Role of the amygdala in decisionmaking. Annals of the New York Academy of Sciences, 985(1), 356–369. Berardi, N., Sale, A., & Maffei, L. (2015). Brain structural and functional development: Genetics and experience. Developmental Medicine and Child Neurology, 57(Suppl. 2), 4–9. Berger, C., Deutsch, N., Cuadros, O., Franco, E., Rojas, M., Roux, G., & Sánchez, F. (2020). Adolescent peer processes in extracurricular activities: Identifying developmental opportunities. Children and Youth Services Review, 118, Article 105457. Blakemore, S.-J. (2008). The social brain in adolescence. Nature Reviews Neuroscience, 9(4), 267–277. Blakemore, S.-J., & Mills, K. L. (2014). Is adolescence a sensitive period for sociocultural processing? Annual Review of Psychology, 65(1), 187–207. Boele, S., Van der Graaff, J., de Wied, M., Van der Valk, I. E., Crocetti, E., & Branje, S. (2019). Linking parent–child and peer relationship quality to empathy in adolescence: A multilevel meta-analysis. Journal of Youth and Adolescence, 48(6), 1033–1055. Bowlby, J. (1969). Attachment and loss: Vol. 1. Attachment (2nd ed.). Basic Books. Bronfenbrenner, U. (1994). Ecological models of human development. International Encyclopedia of Education, 3(2), 37–43. Brown, B. B. (2004). Adolescents’ relationships with peers. In R. M. Lerner & L. Steinberg (Eds.), Handbook of adolescent psychology (2nd ed., pp. 363–394). John Wiley & Sons, Inc. Brown, B. B., & Bakken, J. P. (2011). Parenting and peer relationships: Reinvigorating research on family–peer linkages in adolescence. Journal of Research on Adolescence, 21(1), 153–165. Brown, B. B., Mounts, N., Lamborn, S. D., & Steinberg, L. (1993). Parenting practices and peer group affiliation in adolescence. Child Development, 64(2), 467–482. Campbell, A. (2010). Oxytocin and human social behavior. Personality and Social Psychology Review, 14(3), 281–295. Carter, C. S. (2017). The oxytocin-vasopressin pathway in the context of love and fear. Frontiers in Endocrinology, 8, Article 356. Casey, B. J., Trainor, R., Giedd, J., Vauss, Y., Vaituzis, C. K., Hamburger, S., Kozuch, P., & Rapoport, J. L. (1997). The role of the anterior cingulate in automatic and controlled processes: A developmental neuroanatomical study.

Developmental Psychobiology: The Journal of the International Society for Developmental Psychobiology, 30(1), 61–69. Chein, J., Albert, D., O’Brien, L., Uckert, K., & Steinberg, L. (2011). Peers increase adolescent risk taking by enhancing activity in the brain’s reward circuitry. Developmental Science, 14(2), 1–10. Chugani, H. T., Phelps, M. E., & Mazziotta, J. C. (1987). Positron emission tomography study of human brain functional development. Annals of Neurology, 22(4), 487–497. Ciaunica, A., Constant, A., Preissl, H., & Fotopoulou, K. (2021). The first prior: From co-embodiment to co-homeostasis in early life. Consciousness and Cognition, 91, Article 103117. Clearfield, M. W., & Nelson, N. M. (2006). Sex differences in mothers’ speech and play behavior with 6-, 9-, and 14-month-old infants. Sex Roles, 54(1), 127–137. Constantinidis, C., & Luna, B. (2019). Neural substrates of inhibitory control maturation in adolescence. Trends in Neurosciences, 42(9), 604–616. Critchley, H. D., Mathias, C. J., & Dolan, R. J. (2001). Neural activity in the human brain relating to uncertainty and arousal during anticipation. Neuron, 29(2), 537–545. Crone, E. A., & Dahl, R. E. (2012). Understanding adolescence as a period of social– affective engagement and goal flexibility. Nature Reviews Neuroscience, 13(9), 636–650. Cross, C. P., Copping, L. T., & Campbell, A. (2011). Sex differences in impulsivity: A meta-analysis. Psychological Bulletin, 137(1), 97–130. Cross, C. P., Cyrenne, D.-L. M., & Brown, G. R. (2013). Sex differences in sensationseeking: A meta-analysis. Scientific Reports, 3(1), Article 2486. Dahl, A., & Killen, M. (2018). Moral reasoning: Theory and research in developmental science. In The Stevens’ handbook of experimental psychology and cognitive neuroscience (Vol. 4, pp. 323–353). Wiley. Dahl, R., & Suleiman, A. (2017). Adolescent brain development: Windows of opportunity. In The adolescent brain: A second window of opportunity. A compendium (pp. 21–28). UNICEF. Damasio, A. (1994). Descartes’ error: Emotion, rationality and the human brain. Putnam. Damasio, A. R. (1995). Review: Toward a neurobiology of emotion and feeling: Operational concepts and hypotheses. The Neuroscientist, 1(1), 19–25. Darwin, C. (1871). The descent of man and selection in relation to sex. The Modern Library. Darwin, C. (1872). On the expression of the emotions in man and animals. John Murray. Decety, J., & Cowell, J. M. (2014). The complex relation between morality and empathy. Trends in Cognitive Sciences, 18(7), 337–339. Diamond, A. (1988). Abilities and neural mechanisms underlying AB performance. Child Development, 59(2), 523–527. Ding, Y., Wang, E., Zou, Y., Song, Y., Xiao, X., Huang, W., & Li, Y. (2017). Gender differences in reward and punishment for monetary and social feedback in children: An ERP study. PLoS ONE, 12(3), Article e0174100. Dumas, T. M., Ellis, W. E., & Wolfe, D. A. (2012). Identity development as a buffer of adolescent risk behaviors in the context of peer group pressure and control. Journal of Adolescence, 35(4), 917–927.

Duncan, S., & Barrett, L. F. (2007). Affect is a form of cognition: A neurobiological analysis. Cognition and Emotion, 21(6), 1184–1211. Eisenberg, N., & Fabes, R. A. (1998). Prosocial development. In N. Eisenberg & W. Damon (Eds.), Handbook of child psychology: Social, emotional, and personality development (5th ed., Vol. 3, pp. 701–778). John Wiley & Sons, Inc. Eisenberg, N., Fabes, R. A., Murphy, B., Karbon, M., Maszk, P., Smith, M., O’Boyle, C., & Suh, K. (1994). The relations of emotionality and regulation to dispositional and situational empathy-related responding. Journal of Personality and Social Psychology, 66(4), 776–797. Eisenberg, N., & Shell, R. (1986). Prosocial moral judgment and behavior in children: The mediating role of cost. Personality and Social Psychology Bulletin, 12(4), 426–433. Ellemers, N., van der Toorn, J., Paunov, Y., & van Leeuwen, T. (2019). The psychology of morality: A review and analysis of empirical studies published from 1940 through 2017. Personality and Social Psychology Review, 23(4), 332–366. Fabes, R. A., Carlo, G., Kupanoff, K., & Laible, D. (1999). Early adolescence and prosocial/moral behavior I: The role of individual processes. Journal of Early Adolescence, 19(1), 5–16. Fariborz, A., Thomas, L., Richard, L., Alan, L., Gordon, B., Teresa, M., & Schiff, E. Z. (1996). Affect, attachment, memory: Contributions toward psychobiologic integration. Psychiatry, 59(3), 213–239. Fausto-Sterling, A., Crews, D., Sung, J., García-Coll, C., & Seifer, R. (2015). Multimodal sex-related differences in infant and in infant-directed maternal behaviors during months three through twelve of development. Developmental Psychology, 51(10), 1351–1366. Fonagy, P., & Bateman, A. W. (2016). Adversity, attachment, and mentalizing. Comprehensive Psychiatry, 64, 59–66. Foulkes, L., & Blakemore, S.-J. (2018). Studying individual differences in human adolescent brain development. Nature Neuroscience, 21(3), 315–323. Fraley, R. C., Hudson, N. W., Heffernan, M. E., & Segal, N. (2015). Are adult attachment styles categorical or dimensional? A taxometric analysis of general and relationship-specific attachment orientations. Journal of Personality and Social Psychology, 109(2), 354–368. French, J. R. P., Jr., & Raven, B. (1959). The bases of social power. In D. Cartwright (Ed.), Studies in social power (pp. 150–167). Institute for Social Research. Friedman, W. J., Robinson, A. B., & Friedman, B. L. (1987). Sex differences in moral judgments? A test of Gilligan’s theory. Psychology of Women Quarterly, 11(1), 37–46. Galván, A. (2013). The teenage brain: Sensitivity to rewards. Current Directions in Psychological Science, 22(2), 88–93. Gamer, M., Zurowski, B., & Büchel, C. (2010). Different amygdala subregions mediate valence-related and attentional effects of oxytocin in humans. Proceedings of the National Academy of Sciences, 107(20), 9400–9405. Giedd, J. N., Blumenthal, J., Jeffries, N. O., Castellanos, F. X., Liu, H., Zijdenbos, A., Paus, T., Evans, A. C., & Rapoport, J. L. (1999). Brain development during childhood and adolescence: A longitudinal MRI study. Nature Neuroscience, 2(10), 861–863.

Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Harvard University Press. Glicksohn, J., Naor-Ziv, R., & Leshem, R. (2018). Sensation seeking and risk-taking. In M. M. Martel (Ed.), Developmental pathways to disruptive, impulse-control and conduct disorders (pp. 183–208). Elsevier. Gluckman, P. D., & Hanson, M. A. (2006). Changing times: The evolution of puberty. Molecular and Cellular Endocrinology, 254, 26–31. Goldin, P. R., McRae, K., Ramel, W., & Gross, J. J. (2008). The neural bases of emotion regulation: Reappraisal and suppression of negative emotion. Biological Psychiatry, 63(6), 577–586. Groh, A. M., Narayan, A. J., Bakermans-Kranenburg, M. J., Roisman, G. I., Vaughn, B. E., Fearon, R. M. P., & van IJzendoorn, M. H. (2017). Attachment and temperament in the early life course: A meta-analytic review. Child Development, 88(3), 770–795. Haidt, J. (2008). Morality. Perspectives on Psychological Science, 3(1), 65–72. Halim, M. L. D. (2016). Princesses and superheroes: Social-cognitive influences on early gender rigidity. Child Development Perspectives, 10(3), 155–160. Halim, M. L., Ruble, D., Tamis-LeMonda, C., & Shrout, P. E. (2013). Rigidity in gender-typed behaviors in early childhood: A longitudinal study of ethnic minority children. Child Development, 84(4), 1269–1284. Harter, S., Waters, P., & Whitesell, N. R. (1998). Relational self-worth: Differences in perceived worth as a person across interpersonal contexts among adolescents. Child Development, 69(3), 756–766. Heidegger, M. (1992). Being and time. Blackwell. (Original work published 1927) Hines, M., Constantinescu, M., & Spencer, D. (2015). Early androgen exposure and human gender development. Biology of Sex Differences, 6(1), Article 3. Hinson, J. M., Jameson, T. L., & Whitney, P. (2002). Somatic markers, working memory, and decision making. Cognitive, Affective, & Behavioral Neuroscience, 2(4), 341–353. Hoeve, M., Stams, G. J. J. M., van der Put, C. E., Dubas, J. S., van der Laan, P. H., & Gerris, J. R. M. (2012). A meta-analysis of attachment to parents and delinquency. Journal of Abnormal Child Psychology, 40(5), 771–785. Hoffman, M. L. (1991). Is empathy altruistic? Psychological Inquiry, 2(2), 131–133. Huttenlocher, P. R. (1979). Synaptic density in human frontal cortex-developmental changes and effects of aging. Brain Research, 163(2), 195–205. Hwang, K., Ghuman, A. S., Manoach, D. S., Jones, S. R., & Luna, B. (2016). Frontal preparatory neural oscillations associated with cognitive control: A developmental study comparing young adults and adolescents. NeuroImage, 136, 139–148. Hwang, K., Velanova, K., & Luna, B. (2010). Strengthening of top-down frontal cognitive control networks underlying the development of inhibitory control: A functional magnetic resonance imaging effective connectivity study. Journal of Neuroscience, 30(46), 15535–15545. Hyde, J. S., Bigler, R. S., Joel, D., Tate, C. C., & van Anders, S. M. (2019). The future of sex and gender in psychology: Five challenges to the gender binary. American Psychologist, 74(2), 171–193. Insel, T. R. (1997). A neurobiological basis of social attachment. American Journal of Psychiatry, 154(6), 726–735. Jacobs, J. E., Vernon, M. K., & Eccles, J. S. (2004). Relations between social selfperceptions, time use, and prosocial or problem behaviors during adolescence. Journal of Adolescent Research, 19(1), 45–62.

James, W. (2014). The dilemma of determinism. In The will to believe and other essays in popular philosophy (Reprint, pp. 145–183). Cambridge University Press. (Originally published in 1897) Jones, J. D., Cassidy, J., & Shaver, P. R. (2015). Parents’ self-reported attachment styles: A review of links with parenting behaviors, emotions, and cognitions. Personality and Social Psychology Review, 19(1), 44–76. Kagan, J. (1994). Galen’s prophecy: Temperament in human nature. Basic Books. Kagan, J. (1997). Temperament and the reactions to unfamiliarity. Child Development, 68(1), 139–143. Kagan, J. (2018). Perspectives on two temperamental biases. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 373, Article 20170158. Kagan, J., & Snidman, N. (2004). The long shadow of temperament. Harvard University Press. Kalsoom, F., Behlol, M. G., Kayani, M. M., & Kaini, A. (2012). The moral reasoning of adolescent boys and girls in the light of Gilligan’s theory. International Education Studies, 5(3), 15–23. Kandler, C., Bratko, D., Butkovi´c, A., Hlupi´c, T. V., Tybur, J. M., Wesseldijk, L. W., de Vries, R. E., Jean, P., & Lewis, G. J. (2021). How genetic and environmental variance in personality traits shift across the life span: Evidence from a crossnational twin study. Journal of Personality and Social Psychology, 121(5), 1079–1094. Kiesner, J., Cadinu, M., Poulin, F., & Bucci, M. (2002). Group identification in early adolescence: Its relation with peer adjustment and its moderator effect on peer influence. Child Development, 73(1), 196–208. Kilford, E. J., Garrett, E., & Blakemore, S. J. (2016). The development of social cognition in adolescence: An integrated perspective. Neuroscience & Biobehavioral Reviews, 70, 106–120. Killen, M., & Cooley, S. (2013). Morality, exclusion, and prejudice. In M. Killen & J. G. Smetana (Eds.), Handbook of moral development (pp. 340–360). Psychology Press. Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. Goslin (Ed.), Handbook of socialization theory and research (pp. 347–480). Rand McNally. Kraemer, G. W. (1992). A psychobiological theory of attachment. Behavioral and Brain Sciences, 15(3), 493–511. Larsen, B., & Luna, B. (2018). Adolescence as a neurobiological critical period for the development of higher-order cognition. Neuroscience & Biobehavioral Reviews, 94, 179–195. Lantz, B., & Knapp, K. G. (2024). Trends in juvenile offending: What you need to know. Council on Criminal Justice. https://counciloncj.org/trends-in-juvenileoffending-what-you-need-to-know/ Leaper, C., & Farkas, T. (2015). The socialization of gender during childhood and adolescence. In J. E. Grusec & P. D. Hastings (Eds.), Handbook of socialization: Theory and research (2nd ed., pp. 541–565). The Guilford Press. Lewis, M. (2007). Self-conscious emotional development. In J. L. Tracy, R. W. Robins, & J. P. Tangney (Eds.), The self-conscious emotions: Theory and research (pp. 134–149). The Guilford Press.

487

488

   .           .     

Lin, Y.-T., & Hsu, K.-S. (2018). Oxytocin receptor signaling in the hippocampus: Role in regulating neuronal excitability, network oscillatory activity, synaptic plasticity and social memory. Progress in Neurobiology, 171, 1–14. Luna, B., Marek, S., Larsen, B., Tervo-Clemmens, B., & Chahal, R. (2015). An integrative model of the maturation of cognitive control. Annual Review of Neuroscience, 38, 151–170. Luna, B., Paulsen, D .J., Padmanabhan, A., & Geier, C. (2013). Cognitive control and motivation. Current Directions in Psychological Science, 22(2), 94–100. Martin, C. L., & Ruble, D. N. (2010). Patterns of gender development. Annual Review of Psychology, 61, 353–381. Masten, C. L., Telzer, E. H., Fuligni, A. J., Lieberman, M. D., & Eisenberger, N. I. (2012). Time spent with friends in adolescence relates to less neural sensitivity to later peer rejection. Social Cognitive and Affective Neuroscience, 7(1), 106–114. McConchie, J., Hite, B. J., Blackard, M. B., & Cheung, R. C. M. (2022). With a little help from my friends: Development and validation of the positive peer influence inventory. Applied Developmental Science, 26(1), 74–93. McEwen, B. S. (2012). Brain on stress: How the social environment gets under the skin. Proceedings of the National Academy of Sciences, 109(Suppl. 2), 17180–17185. Meyers-Levy, J., & Loken, B. (2015). Revisiting gender differences: What we know and what lies ahead. Journal of Consumer Psychology, 25(1), 129–149. Milner, B., Corkin, S., & Teuber, H. L. (1968). Further analysis of the hippocampal amnesic syndrome: 14-year follow-up study of H. M. Neuropsychologia, 6(3), 215–234. Moffitt, T. E. (1993). Adolescence-limited and life-course-persistent antisocial behavior: A developmental taxonomy. Psychological Review, 100(4), 674–701. Muuss, R. E. (1988). Carol Gilligan’s theory of sex differences in the development of moral reasoning during adolescence. Adolescence, 23(89), 229–243. Narvaez, D., & Lapsley, D. (2014). Becoming a moral person – moral development and moral character education as a result of social interactions. In M. Christen, C. van Schaik, J. Fischer, M. Huppenbauer, & C. Tanner (Eds.), Empirically informed ethics: Morality between facts and norms (Vol. 32, pp. 227–238). Springer International Publishing. Nelson, E. E., Lau, J. Y. F., & Jarcho, J. M. (2014). Growing pains and pleasures: How emotional learning guides development. Trends in Cognitive Sciences, 18(2), 99–108. Newport, E. L., Bavelier, D., & Neville, H. J. (2001). Critical thinking about critical periods: Perspectives on a critical period for language acquisition. In E. Dupoux (Ed.), Language, brain, and cognitive development: Essays in honor of Jacques Mehler (pp. 481–502). MIT Press. Oliveira, P., & Fearon, P. (2019). The biological bases of attachment. Adoption & Fostering, 43(3), 274–293. Patterson, G. R., Dishion, T. J., & Yoerger, K. (2000). Adolescent growth in new forms of problem behavior: Macro-and micro-peer dynamics. Prevention Science, 1(1), 3–13. Paulus, M. P., Rogalsky, C., Simmons, A., Feinstein, J. S., & Stein, M. B. (2003). Increased activation in the right insula during risk-taking decision making is related to harm avoidance and neuroticism. Neuroimage, 19(4), 1439–1448.

Moral Development during Adolescence

Perino, M. T., Miernicki, M. E., & Telzer, E. H. (2016). Letting the good times roll: Adolescence as a period of reduced inhibition to appetitive social cues. Social Cognitive and Affective Neuroscience, 11(11), 1762–1771. Piaget, J. (1965). The moral judgment of the child. The Free Press. (Original work published 1932) Poppa, T., & Bechara, A. (2018). The somatic marker hypothesis: Revisiting the role of the ‘body-loop’ in decision-making. Current Opinion in Behavioral Sciences, 19, 61–66. Preckel, F., Niepel, C., Schneider, M., & Brunner, M. (2013). Self-concept in adolescence: A longitudinal study on reciprocal effects of self-perceptions in academic and social domains. Journal of Adolescence, 36(6), 1165–1175. Reber, P. J. (2013). The neural basis of implicit learning and memory: A review of neuropsychological and neuroimaging research. Neuropsychologia, 51(10), 2026–2042. Rokicki, J., Kaufmann, T., De Lange, A. M. G., van der Meer, D., Bahrami, S., Sartorius, A. M., Haukvik, U. K., Steen, N. E., Schwarz, E., Stein, D. J., Nærland, T., Andreassen, A. O., Westlye, L. T., & Quintana, D. S. (2022). Oxytocin receptor expression patterns in the human brain across development. Neuropsychopharmacology, 47(8), 1550–1560. Savin-Williams, R. C., & Berndt, T. J. (1990). Friendship and peer relations. In S. S. Feldman & G. R. Elliott (Eds.), At the threshold: The developing adolescent (pp. 277–307). Harvard University Press. Schwartz, C. E., Kunwar, P. S., Greve, D. N., Kagan, J., Snidman, N. C., & Bloch, R. B. (2012). A phenotype of early infancy predicts reactivity of the amygdala in male adults. Molecular Psychiatry, 17(10), 1042–1050. Schwartz, C. E., Kunwar, P. S., Greve, D. N., Moran, L. R., Viner, J. C., Covino, J. M., Kagan, J., Stewart, S. E., Snidman, N. C., Vangel, M. G., & Wallace, S. R. (2010). Structural differences in adult orbital and ventromedial prefrontal cortex predicted by infant temperament at 4 months of age. Archives of General Psychiatry, 67(1), 78–84. Shih, J. H., & Auerbach, R. P. (2010). Gender and stress generation: An examination of interpersonal predictors. International Journal of Cognitive Therapy, 3(4), 332–344. Sobhani, M., & Bechara, A. (2011). A somatic marker perspective of immoral and corrupt behavior. Social Neuroscience, 6(5–6), 640–652. Somerville, L. H. (2013). The teenage brain: Sensitivity to social evaluation. Current Directions in Psychological Science, 22(2), 121–127. Somerville, L. H., Jones, R. M., & Casey, B. J. (2010). A time of change: Behavioral and neural correlates of adolescent sensitivity to appetitive and aversive environmental cues. Brain and Cognition, 72(1), 124–133. Sowell, E. R., Thompson, P. M., Holmes, C. J., Batth, R., Jernigan, T. L., & Toga, A. W. (1999). Localizing age-related changes in brain structure between childhood and adolescence using statistical parametric mapping. Neuroimage, 9(6), 587–597. Spenser, K., Bull, R., Betts, L., & Winder, B. (2020). Underpinning prosociality: Age related performance in theory of mind, empathic understanding, and moral reasoning. Cognitive Development, 56, Article 100928. Steinberg, L. (2005). Cognitive and affective development in adolescence. Trends in Cognitive Sciences, 9(2), 69–74.

489

490

   .           .     

Steinberg, L., & Monahan, K. C. (2007). Age differences in resistance to peer influence. Developmental Psychology, 43(6), 1531–1543. Steinberg, L., Icenogle, G., Shulman, E. P., Breiner, K., Chein, J., Bacchini, D., Chang, L., Chaudhary, N., Giunta, L. D., Dodge, K. A., Fanti, K. A., Lansford, J. E., Malone, P. S., Oburu, P., Pastorelli, C., Skinner, A. T., Sorbring, E., Tapanya, S., Tirado, L. M. U., . . . Takash, H. M. S. (2018). Around the world, adolescence is a time of heightened sensation seeking and immature self-regulation. Developmental Science, 21(2), Article e12532. Stifter, C. A., Cipriano, E., Conway, A., & Kelleher, R. (2009). Temperament and the development of conscience: The moderating role of effortful control. Social Development, 18(2), 353–374. Strathearn, L. (2011). Maternal neglect: Oxytocin, dopamine and the neurobiology of attachment: Maternal neglect: Oxytocin, dopamine and attachment. Journal of Neuroendocrinology, 23(11), 1054–1065. Takesian, A. E., & Hensch, T. K. (2013). Balancing plasticity/stability across brain development. Progress in Brain Research, 207, 3–34. Tanti, C., Stukas, A. A., Halloran, M. J., & Foddy, M. (2011). Social identity change: Shifts in social identity during adolescence. Journal of Adolescence, 34(3), 555–567. Tenenbaum, H. R., & Leaper, C. (2002). Are parents’ gender schemas related to their children’s gender-related cognitions? A meta-analysis. Developmental Psychology, 38(4), 615–630. Thomas, A., & Chess, S. (1977). Temperament and development. Brunner/Mazel. Thomas, R. M. (1997). Moral development theories – secular and religious: A comparative study (No. 68). Greenwood Publishing Group. Tobin, D. D., Menon, M., Menon, M., Spatta, B. C., Hodges, E. V. E., & Perry, D. G. (2010). The intrapsychics of gender: A model of self-socialization. Psychological Review, 117(2), 601–622. Topping, K. J. (2005). Trends in peer learning. Educational Psychology, 25(6), 631–645. Tsui, T. Y. L., Lahat, A., & Schmidt, L. A. (2017). Linking temperamental shyness and social anxiety in childhood and adolescence: Moderating influences of sex and age. Child Psychiatry & Human Development, 48(5), 778–785. Turiel, E. (2013). Morality: Epistemology, development, and social opposition. In M. Killen & J. G. Smetana (Eds.), Handbook of moral development (pp. 3–22). Psychology Press. Van der Graaff, J., Carlo, G., Crocetti, E., Koot, H. M., & Branje, S. (2018). Prosocial behavior in adolescence: Gender differences in development and links with empathy. Journal of Youth and Adolescence, 47(5), 1086–1099. Vasileva, O., & Balyasnikova, N. (2019). (Re) Introducing Vygotsky’s thought: From historical overview to contemporary psychology. Frontiers in Psychology, 10, Article 1515. Villanueva-Moya, L., & Expósito, F. (2021). Gender differences in decision-making: The effects of gender stereotype threat moderated by sensitivity to punishment and fear of negative evaluation. Journal of Behavioral Decision Making, 34(5), 706–717. Vogt, B. A., Finch, D. M., & Olson, C. R. (1992). Functional heterogeneity in cingulate cortex: The anterior executive and posterior evaluative regions. Cerebral Cortex, 2(6), 435–443.

Moral Development during Adolescence

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press. Waite-Stupiansky, S. (2017). Jean Piaget’s constructivist theory of learning. In L. E. Cohen & S. Waite-Stupiansky (Eds.), Theories of early childhood education: Developmental, behaviorist, and critical (pp. 3–17). Routledge. Walker, D. M., Bell, M. R., Flores, C., Gulley, J. M., Willing, J., & Paul, M. J. (2017). Adolescence and reward: Making sense of neural and behavioral changes amid the chaos. Journal of Neuroscience, 37(45), 10855–10866. Walker, L. J., Hennig, K. H., & Krettenauer, T. (2000). Parent and peer contexts for children’s moral reasoning development. Child Development, 71(4), 1033–1048. Walker, L. J., & Taylor, J. H. (1991). Family interactions and the development of moral reasoning. Child Development, 62(2), 264–283. Wanless, S. B., McClelland, M. M., Lan, X., Son, S.-H., Cameron, C. E., Morrison, F. J., Chen, F.-M., Chen, J.-L., Li, S., Lee, K., & Sung, M. (2013). Gender differences in behavioral regulation in four societies: The United States, Taiwan, South Korea, and China. Early Childhood Research Quarterly, 28(3), 621–633. Ward, M. J., Lee, S. S., & Lipper, E. G. (2000). Failure-to-thrive is associated with disorganized infant–mother attachment and unresolved maternal attachment. Infant Mental Health Journal, 21(6), 428–442. Whedon, J. (Writer & Director). (1997, November 3). Lie to me (Season 2, Episode 7) [TV series episode]. In G. Berman, S. Gallin, D. Greenwalt, F. Rubel Kuzui, K. Kuzui, & J. Whedon (Executive Producers), G. Davies (Producer), Buffy the Vampire Slayer. Mutant Enemy; Kuzui Enterprises; Sandollar Television; 20th Century Fox Television Productions. Wright, H. F., Barker, R. G., Nall, J., & Schoggen, P. (1951). Toward a psychological ecology of the classroom. Journal of Educational Research, 45(3), 187–200. Zahn-Waxler, C., & Polanichka, N. (2004). All things interpersonal: Socialization and female aggression. In M. Putallaz & K. L. Bierman (Eds.), Aggression, antisocial behavior, and violence among girls: A developmental perspective. (pp. 48–68). Guilford Press. Zeifman, D. M. (2019). Attachment theory grows up: A developmental approach to pair bonds. Current Opinion in Psychology, 25, 139–143. Zinchenko, O., & Arsalidou, M. (2018). Brain responses to social norms: Meta-analyses of fMRI studies. Human Brain Mapping, 39(2), 955–970.

491

20 Morality in Culture: The Fate of Moral Absolutes in History

Richard A. Shweder, Jacob R. Hickman, and Les Beldo

20.1 Introduction

The study of morality in culture is the study of what members of different moral heritage traditions think and mean when they express their approval of a behavior and judge it to be right and good, or when they express their disapproval of a behavior and judge it to be wrong and bad – or even wicked, sinful, or evil. In recent decades there has been a renewal of interest among anthropologists and cultural psychologists in studying the moral foundations of diverse cultural traditions, in accounting for the authority and directive force of social customs, and in developing theories of everyday social interaction related to the expression of values and virtues and the definition of what it means to be a good person.

In one way or another, all contemporary theorists of morality in culture either embrace or critique Emile Durkheim's (1906/1953) contentious assertion that social consensus makes morality. This Durkheimian conflation of the moral and social orders asserts that it makes little sense to analyze morality separately or as distinct from socially agreed-upon norms. Similarly, in one way or another, all contemporary theorists of morality in culture either embrace or reject the implications of Friedrich Nietzsche's (1882/1974) provocative "God is dead" rebuke of everyday moral realism and what he takes to be the commonplace but erroneous experience of morals as objectively binding.

Moral realism is the view that there are moral truths beyond the particular social contexts in which they may play themselves out, and that those truths supply us with good reasons for our actions and reactions toward others. It is the view that those moral truths (for example, that "one should treat like cases alike and different cases differently" or that "one should protect the vulnerable who are in our charge") are experienced as objective precisely because they possess a normative authority that supersedes social consensus and personal desire. While our purpose in this chapter is not to stake out a particular metaphysical framework to ground moral truth, we develop a twofold argument: 1) that moral realism is endemic to human experience and thinking about what is moral, and 2) that (whatever one thinks of Nietzsche's false consciousness critique of moral realism) moral realism supplies us with the type of concepts that anthropologists and cultural psychologists need to make the customary behavior of "others" morally intelligible.

Concerning the ubiquity of moral realism in everyday life, the anthropologist Raymond Firth once described the moral realism of the Tikopia people of the Southwestern Pacific this way: “The spirits, just as men, respond to a norm of conduct of an external character. The moral law exists in the absolute, independent of the gods” (Firth, 1936/2013, p. 335). One need not be a theist or a devoted supporter of the particularities of the Tikopian way of life to recognize the sense of objectivity inherent in their experience of an external moral order definitive of what is good. Henry Sidgwick, in his highly influential nineteenth-century text in moral philosophy, The Methods of Ethics (1884), captures this sense of moral realism by reference to the key concept expressed in the English language with the word “ought.” Sidgwick argues that any expression of an attitude of approval of a social action that deserves to be called moral (in contrast, for example, to an expression of approval based only on personal liking or socially conditioned familiarity or feelings of pleasure) is “inseparably bound up with the conviction, implicit or explicit, that the conduct approved is ‘objectively’ right – i.e., that it cannot, without error, be disapproved by any other mind” (1884, p. 28). In this chapter we propose, define, and illustrate a moral realist approach to comparative research on morality, which we call “the study of the fate of moral absolutes in history.” The approach begins with the identification of universal existential issues, abstract values, and moral absolutes (basic rules of moral reason of the sort that cannot, without error, be disapproved of by any other mind). We argue that such an approach makes it possible to recognize and better understand the moral foundations of divergent ways of life. In other words, we suggest that regardless of one’s meta-ethical views, the moral realism of everyday life (the ubiquitous “native point of view” on morality) can supply us with a set of theoretical tools for defining the object of study for an anthropology of morality and for rendering the social norms of other cultural traditions morally intelligible to outsiders. In this chapter we posit the existence of abstract moral absolutes so as to make it possible to conduct comparative research that is not ethnocentric. What do we mean by “moral absolutes”? Moral absolutes are principles of moral reason that remain intuitively compelling regardless of historical or cultural context (for example, that “you ought to treat like cases alike and different cases differently”). Moral absolutes are not only abstract in form. They are also experienced as objective in the sense that their directive force does not depend on agreement or consensus. These moral absolutes seem to be intuitively available to the autonomous reason of all human beings. Moral absolutes also meet the universalization criterion identified by Kant: Whenever we act in a way that is moral, our concrete actions can be reasonably identified as a culture-specific, historically contingent instantiation of one or more abstract normative principles that are binding on everyone regardless of culture or history (for example, that “you ought to protect those who are vulnerable and in your charge” or that “you should come to the assistance of

someone who is in great distress if you are capable of helping them and doing so at no great cost to yourself”). Anyone who possesses the basic mental capacity to evaluate a truth-claim is likely to recognize that moral absolutes possess a genuine air of self-evident truthfulness. By way of contrast, the denial of a moral absolute (for example, an apparently sincere declaration that “you should treat like cases differently and different cases alike”) has the air of a joke or a satire, or else comes across as a failure of comprehension, or a sign of irrationality (Shweder, 1994). Typically, moral absolutes, properly described and comprehended, are instantly, widely, and intuitively recognized as absolutely true and objective (in the senses we have described), with no felt need for justification or debate. In other words, their truth status goes without saying, although, as noted, the truth status of moral absolutes is not dependent on our involvement with them or adherence to them. That is what makes them absolute and objective, rather than subjective, consensual, or discretionary. Because they are abstract, moral absolutes do not provide a useful guide for action unless and until they are expressed in specific, discretionary cultural and historical forms. While we suggest in this chapter that the identification of moral absolutes is an important step in the comparative study of different cultural moralities, that identification is hardly sufficient. Why? Because it is the fate of moral absolutes that they are underspecified abstract forms that are implemented locally, not universally. They are made manifest and gain substance in historically and culturally distinctive ways (Shweder, 1994). Moral absolutes may provide the ideational skeletal structure for the development of any moral tradition; yet they are shaped, interpreted, and applied in particular local historical and cultural contexts. As such, moral absolutes inevitably take on a provincial, parochial, sectarian, or local moral look, which is our aim in this chapter to describe. Before illustrating “the fate of moral absolutes in history” approach, we wish to make it clear, however, that the true metaphysical status of the property of “being true” (whether one speaks of mathematical truths, logical truths, or moral truths) is not something we are going to address. The actual ontological status of such nonphysical things has been debated without resolution by our most renowned metaphysicians. We do not rule out the possibility that the correct answer to that ontological question will remain mysterious to human minds. If you believe there is nothing more to reality than physical particles in fields of force, or if you believe it is merely consensus that makes it true that “if p implies q then not q implies not p,” or if you believe (as the famous anthropologist Ruth Benedict once did believe) that morality is just “a convenient label for socially approved habits” (1934, p. 73), then you are likely to resist the “fate of moral absolutes in history” approach to the comparative study of cultural moralities that we outline and illustrate in this chapter. One obvious line of opposition to our approach might be to contest the validity of the truth status claimed for the moral absolutes, while at the same time trying to convince oneself that morality is nothing more than social

consensus and that, for example, it is neither true nor false that one ought to treat like cases alike or protect the vulnerable who are in your charge. Other standpoints may refuse to see morality as anything other than an articulation of power, with many strains of social science adopting a supposedly meta-ethical stance on resisting these orders of power. Here we illustrate an approach to the comparative study of morality in culture that rejects all those conclusions.

20.2 The Fate of Moral Absolutes in History

We begin this discussion of the study of morality in culture with a necessary caveat. Many of the cultural communities studied by anthropologists and cultural psychologists are quite illiberal (e.g., Haidt & Graham, 2007; Jensen, 2008). Their members feel viscerally attached to their own historical ethical community grounded in its own identity-defining metaphysical beliefs and legends or origin stories.1 These are communities where members draw strong in-group versus out-group distinctions, do not believe that "there but for fortune goes you or goes I," do believe that gods, goddesses, and the spirits of the dead play a continuing part in human history, feel duty-bound to live up to the briefs for behavior distinctive of their social status within their community (which is often embedded in hierarchical interdependent patron–client relationships), and frequently discriminate in favor of or against other individuals on the basis of sex, age, kinship, ethnicity, and religion.

1 To be clear, the notion of "identity-defining metaphysical beliefs and legends or origin stories" is not limited to illiberal or tribal societies, and we would include the origin story of the Enlightenment and its mythic role in Western human rights discourse as another core example.

On the other hand, many of the cultural anthropologists and cultural psychologists who study those illiberal traditions are themselves quite liberal and tend to equate the moral domain with values such as individual autonomy, nondiscrimination, justice, equality, and freedom of choice. This is quite a challenge for the cultural anthropologist or cultural psychologist whose own moral thinking is steeped in a liberal ethics of autonomy, and who seeks to understand and fairly represent the way of life of a people whose moral thinking is rooted in an ethics of community and/or an ethics of divinity (cf. Shweder et al., 2003).

To answer this challenge, it is important to make oneself clear about one's object of study when investigating morality in culture. We approach cultural analysis as the interpretive study of the goals, values, and pictures of the world (including pictures of moral reality) made manifest or revealed in the customary behavior of the members of a social group and expressed not only in the stories they tell and the normative judgments they make, but also in their customary symbolic actions (Shweder et al., 1995). We advance a pluralistic conception of the moral domain grounded in a form of descriptive moral realism. We trace local substantiations and applications of abstract moral principles across cultures. By "morality" we mean perceived prescriptions for thought and action
believed to provide authoritative, binding, and objective (truth-bearing) answers to significant existential questions. Nevertheless, the abstract nature of what we have dubbed moral absolutes requires them to take on specific cultural and historical forms, which they inevitably do (Shweder, 1994). Adopting the language of moral realism, a moral prescription can be expressed in the following form: In a given situation (S), a person (P) ought to behave in such and such way (B) because it is the right thing to do, and it is the right thing to do because that behavior (B) by that person (P) in that situation (S) promotes some objective good (G) (for example, justice, loyalty, or sanctity) and makes manifest and concretely instantiates one of the rules of moral reason that meet Sidgwick’s test of undeniability (e.g., that “one should protect the vulnerable who are in one’s charge” [1884, p. 28]). One way to summarize this moral realist approach to the study of morality in culture is to define it as the investigation of the fate of moral absolutes in history. The anthropologist Clifford Geertz (2000, p. 72) once famously remarked that “relativism disables judgment, while absolutism removes judgment from history.” The intellectual disabling Geertz had in mind is obvious. If moral principles have no normative force independent of culture and history, then why should ways of life (including their social norms) different from our own ever be entitled to our admiration, respect, or even toleration? In the absence of any nonethnocentric rules of moral reason, what type of evaluative judgments could possibly survive the corrosive force of absolute moral skepticism? In the absence of any criteria (however abstract) for defining or exemplifying the moral domain, all value judgments simply become declarations of one’s subjective, socially conditioned, and ethnocentric likes and dislikes. If that were the case, one’s judgments of what is of value would be restricted to one’s own lights, which would illuminate very little beyond one’s own biography, culture, and history. The version of descriptive moral realism we describe in this chapter seeks to overcome the type of relativism Geertz criticized and found disabling. We propose a study of the way intuitively available abstract principles – which provide a frame for moral reasoning in everyday life – are given shape and substance locally, particularized, and made concrete, resulting in distinct and divergent moral traditions. Our approach maintains a pluralistic descriptive moral realist stance that strives to be free of presentism and ethnocentrism. We provide a framework for cultural anthropologists and cultural psychologists who see some interpretive value in a moral realist approach to the study of morality in culture. We provide an account of what a descriptive moral realist approach might look like, how it focuses on divergences in local cultural applications of moral absolutes, and how this approach relates to contemporary frameworks in psychology, anthropology, and related fields that also seek to understand the relationships between morality and culture. We examine variations in the way the peoples of the world make common moral principles manifest in their distinctive customary actions. Our pluralistic approach to the study of morality in culture is premised on the view that the illiberality of a

cultural custom is not necessarily a measure of its immorality. We argue that it is possible to identify some abstract, transcultural, and transhistorical moral principles that enable us to recognize the moral status and authority of the social norms and evaluative judgments in historical ethical traditions different from our own. We point to productive directions for the comparative study of moral experience where investigators seek to represent different heritage traditions (customs, folkways, normative habits) as group-based variations in the fate and selective application of moral absolutes across time and space.

20.2.1 Moral Absolutes

There are at least three types of claims that lie at the heart of our approach to the study of morality in culture.

First, there is a base set of universal existential issues that inevitably arise in social settings. These are existential questions of great significance that must be answered in one way or another, if one is to sustain a way of life or create any cultural tradition at all. Being existential questions, they cannot be escaped. Because they are unavoidable questions, they are universal. Here are some examples of universal existential issues: The question of personal boundaries and the nature of personhood (what is me and what is not me?); the question of sex identity (what is male and what is female?); the question of hierarchy (how should the burdens and benefits of life be distributed, and why?); or the question of in-group versus out-group differentiation (who is of my kind and who is not of my kind? Who can I trust and who can I not trust?). These unavoidable existential questions are answered in different ways in different cultural/ontological traditions, and how those questions are answered has significant implications for the ordering of moral goods.

Second, there exists a base set of self-evident, undeniably valid, and hence universally binding abstract rules or principles of moral reason – abstract enough to pass Sidgwick's test. At least in their abstract form (here quoting Sidgwick again) these moral rules or principles "cannot, without error, be disapproved by any other mind" (1884, p. 28). They are not merely the creations of either a second-person or a first-person point of view. Just as the mathematical intuition that "two parallel lines cannot enclose a square" immediately – and without the need for deliberation, argument, or thoughtful reflection – commands our respect as obviously true, so too do the abstract rules of moral reason. These are the kinds of third-person rules of reason that nineteenth-century and early twentieth-century philosophers called "moral intuitions." They possess something more than just that illusory air of self-evidence that is the commonplace atmosphere surrounding socially conditioned routines or habits. Unlike the many second-person declarations we assume to be self-evidently true simply because they are our declarations or because they are familiar or popular ways of doing things in our social world, these moral absolutes are not obviously contestable. For example, that one ought to give every person their due. That one ought to treat like cases alike and different cases differently. That one ought to impartially apply rules of general applicability. That one ought to respond to the urgent needs of others if the sacrifice or cost to oneself is negligible.

Third, there is the domain of objective values, goods, or principles. These goods, which are experienced as desire-worthy, inherently valuable "ends in themselves," can be classified based on their differential saliency across moral heritage communities (or across institutions and status relationships within a community; see Fiske, 1992). Examples include the values of respect and obedience to authority (expressed, for instance, in military command structures) versus the values of intellectual autonomy and free thought (more commonly expressed in the academy), or the values of choice, free association, and comparative shopping that govern marketplace relationships versus values such as loyalty and protection that govern kinship relationships. The plurality of these goods (consistent with what Robbins, 2012, calls "values") is often related to variations in other features of self, culture, and society (see Shweder et al., 2003).

Nevertheless, all of these somewhat abstract, noncontingent absolutes – the unavoidable existential questions, the undeniable rules of moral reason, and the goods that are inherently value-worthy ends in themselves – are given shape and substance, particularized and made concrete locally, not universally. Thus, the fate of moral absolutes in history is that these abstractions are implemented selectively and made manifest and instantiated in distinctly local, context-sensitive and contingent ways, resulting in different moral mentalities and various traditions of value. Making analytic use of abstract moral absolutes while also attending to the culturally specific ways in which they are made manifest in diverse social realities is at the heart of the descriptive pluralistic moral realism that grounds our approach to understanding morality in culture.

20.2.2 An Illustration of Everyday Moral Realism: A Brahman Widow Eats Fish

The nature of everyday moral realism can be illustrated by a concrete example from a moral judgment and reasoning interview typifying the moral thinking of both female and male Brahman residents in a Hindu heritage community in a temple town in India. The concrete local behavior in question concerns "a widow in your community who eats fish two or three times a week." Locally, within this particular Hindu heritage community, the widow's dietary behavior – eating fish two or three times a week – is viewed as a very serious moral transgression (the interview is based on collaborative research conducted 40 years ago in an Indian temple town in the state that was then named Orissa and is now named Odisha; see Shweder et al., 1987). The interview unfolds as follows:

Question: Is the widow's behavior wrong?
Answer: Yes, widows should not eat fish, meat, onions or garlic or any hot foods. They must restrict their diet to cool foods, rice, dhal, ghee, vegetables.

Question: How serious is the violation?
Answer: It is a very serious violation. She will suffer greatly if she eats fish.

Question: Is it a sin?
Answer: Yes, it is a great sin.

Question: What if no one knew this had been done? It was done in secret or privately. Would it be wrong then?
Answer: What difference does it make if it is done while alone? It is wrong. A widow's time should be spent seeking salvation, seeking to be reunited with the soul of her husband. Hot foods will distract her. They will stimulate her sexual appetite. She will lose her sanctity and behave like a prostitute.

Question: Would it be best if everyone in the world followed the rule that widows should not eat fish?
Answer: That would be best. A widow's devotion is to her deceased husband – who should be treated like a god. She will offend his spirit if she eats fish.

Question: In the United States widows eat fish all the time. Would the United States be a better place if widows stopped eating fish?
Answer: Definitely it would be a better place. Perhaps American widows would stop having sex and marrying other men.

Question: What if most people in India wanted to change the rule so that it would be all right for widows to eat fish? Would it be okay to change the rule?
Answer: No, it is wrong for a widow to eat fish. Hindu dharma [truth] forbids it.

Question: Do you think a widow who eats fish should be stopped or punished in some way?
Answer: She should be stopped, but the sin will live with her and she will suffer for it.

Now imagine how that interview would go if conducted with residents in European or North American academic communities where some of us live, and where the ethics of autonomy reigns supreme (Shweder et al., 1987, p. 44 provides one example). The interview would likely be a testament to the ethics of autonomy and to the notion that "everyone should be free to eat fish if they want to." There would be lots of talk about individual wants and preferences and the right of every individual to have the things they want. The respondent would probably invoke a moral concept that never appeared in hundreds of interviews with Hindu temple town residents; namely, in this instance, the idea of an inalienable, unalterable, unquestionable natural right to eat fish if you want to.

Recall the interview probe concerning the alterability of the rule? What if we asked a Hyde Park University of Chicago resident that question? “What if most people in the United States wanted to change the rule so that widows would be forbidden from eating fish? Would it be okay to change the rule?” Here is the way we imagine they might respond: “No, you can’t do that. It is not okay to impose an oppressive sexist patriarchal rule forbidding widows from eating fish. Secular liberal dharma – truth – forbids it. All people in the world have a right to eat fish if they want to.” Here, on the one hand, we are faced with the reasoning of a Hindu subject, whose arguments are rich in references to moral values or goods such as loyalty/ devotion and sanctity/purity, which are the moral absolutes privileged by an ethics of community and an ethics of divinity. On the other hand, we are faced with the reasoning of a secular liberal subject, whose arguments are rich in references to freedom, wants, and the right to have the things you want, which are moral absolutes in a liberal individual-focused ethics of autonomy. In the background, assumed but unstated in the interview, is a local ontological picture of what is real – a marriage tie between souls that persists after death – and a conception of the moral project and duties associated with the social status of widowhood in this particular ethical community. Those moral ideas, the duties and “oughts” of widowhood, include the symbolic display of one’s state of mourning, which is not viewed as something to be overcome or limited in time, a meditative focus on the soul of your departed husband, and an ascetic lifestyle. You renounce this-worldly pleasures (fancy adornments such as colorful saris, jewelry, or makeup). You adopt a bland diet free of tasty, spicy, or rich foods. You withdraw from all festive activities or celebrations, including weddings. You spend your time seeking penance through various spiritual activities, such as reading sacred texts or going on pilgrimage. That project of atonement is related to the local karmic belief that nature is just, that you reap what you sow in life, that the death of your husband prior to your own death signals accumulated and inherited personal moral debts that need to be erased for the sake of your own mortal fate and fate after death. Which is why a significant life-stage question for these Hindu widows concerns their own prospects for salvation and for being reunited with the soul of their husband. And which is why this custom complex of Brahmanical widowhood involves asceticism, penance, and renunciation and conveys various messages. There is the message to the soul of your dead husband that “life in this world can’t be a source of pleasure without you and my aim is to be reunited with you as soon as our just world allows.” There is the message to oneself that “by denying myself the pleasures of life and dedicating myself to spiritual activities I will burn off my karmic debts and be reunited with my husband.” For a moral realist the following question then arises: Is the reasoning of the two subjects – the devout Hindu in a temple town in India and the secular liberal in a university community in Europe or the United States – different but equal or different but unequal, and why? Investigators who have examined the relationship between morality and culture continue to offer alternative answers to that

question about how to interpret this type of striking diversity in moral judgments, both within and across cultural groups.

20.3 From Developmentalism to Pluralism: Psychologists Making Sense of Morality across Cultures

Faced with the challenge of interpreting and hence evaluating the diversity of moral mentalities and moral traditions, for example, liberal versus illiberal, it is possible for a moral realist to react in either of two ways – offering either a developmental interpretation or a pluralistic interpretation. The developmental interpretation (and its evaluation) is premised on a picture of moral reality in which different moral understandings are ranked as more advanced or less advanced in relationship to a single correct model or ideal for moral thinking, and all deviations from it are examples of error, ignorance, confusion, or bias. This developmental mode of interpretation and evaluation has been prominent in the liberal progressive Enlightenment tradition (and is increasingly in evidence among those cultural anthropologists who see it as their aim to liberate the individuals who are members of a cultural group from the constraints of inherited tradition and the burdens of ancestry). Under this approach, persons and social groups can be judged and scaled according to their proximity to a single defined objective standard for evaluating judgments of true versus false, right versus wrong, and good versus bad. As the political and moral philosopher Isaiah Berlin has noted, for many of the historically influential progressive European Enlightenment thinkers "there is only universal civilization, of which now one nation, now another, represents the richest flowering" (Berlin, 1997, p. 255). According to that developmental interpretation and evaluation of group heritage traditions, there is one ideal endpoint definitive of progressive cultural development, and it can be used to rank the degree of development or underdevelopment of different mentalities and ways of life.

Lawrence Kohlberg's framework is both illustrative of this developmentalist approach and foundational to modern moral psychology. Kohlberg (1981) understood children's understandings of the meaning of the concept "that's right" or "that's good" as measured against a scale of progress from egocentric (subjective) to decentered (objective) understandings. This meant that not only were children portrayed as developing over the life course toward a rationally optimal way of moral thinking, but different societies could be measured according to a single, rationally superior endpoint of moral development. Building on the work of Jean Piaget, Kohlberg constructed a model of moral development in which children progressed through stages. At each stage, the child recognizes their current stage as rationally superior to the one that preceded it. These stages were grouped into three major levels (and then subdivided into six stages): (1) A preconventional level (stages 1 and 2, ages 0–6) involving subjective and emotive judgments, such as "I like it, therefore it is good." At this stage,
according to Kohlberg, the child acts based on what feels good or bad from an egocentric point of view. (2) A conventional level (stages 3 and 4, ages 7–11) in which conformity and consensus reign. Moral reasoning is still subjective, but now it is based on the collective likes and subjective judgments of others (parents, legislatures) in one’s own in-group. Social consensus makes morality. The moral order is equated with one’s received social order. (3) A postconventional level (stages 5 and 6, ages 11 and up) based on abstract, objective judgments of justice, rights, and harm rooted in a conception of natural moral law. Applying Kohlberg’s categories on a global scale, comparative research (Simpson, 1974) found that relatively few people reach the postconventional level, with westernized elites overrepresenting those who do. Kohlberg’s (1981) approach was criticized by cultural anthropologists for assuming that a fully developed postconventional moral reasoning must always be secular, individualistic, rights-based, and concerned with harm, individual rights, justice, and equality, a stance critically labeled by Shweder as “Liberalism as Destiny” (Shweder, 1982; Shweder et al., 1987). Anthropologists pointed to instances where people do not privilege those particular values but nonetheless appear to have fully developed, reason-based postconventional moral mentalities (see, for example, Shweder & Much, 1991). Pluralism provides an alternative mode of interpretation and evaluation for moral realists who take these pitfalls of developmentalism seriously. Pluralistic moral realism runs somewhat counter to enlightenment progressivism. For a pluralist, moral reality is multiple rather than unitary or singular. It consists of diverse and often irreconcilable rules, goals, values, and the various cultural manifestations of moral absolutes under which human beings might flourish. Isaiah Berlin associated interpretive and evaluative pluralism with the eighteenth-century German Counter-Enlightenment philosopher Johann Herder, who interpreted cultural differences this way: [T]here is a plurality of incommensurable cultures. To belong to a given community, to be connected with its members by indissoluble and impalpable ties of common language, historical memory, habit, tradition and feeling, is a basic human need no less natural than that for food or drink or security or procreation. One nation can understand and sympathize with the institutions of another only because it knows how much its own mean to itself. (Berlin, 1997, p. 255)

We would like to suggest that Johann Herder, the pluralist, might be viewed as one of the founding figures of contemporary research on the moral foundations of diverse cultural traditions, which is the approach that is taken in this chapter. It is an approach that is open to the possibility a) that the illiberality of a cultural practice (a taboo against widows eating fish in a devout Hindu community) is not necessarily a measure of its immorality or moral backwardness (Shweder, 2009); b) that there can be moral universalism (the existence of a

base set of noncontingent meta- and trans-societal abstract moral absolutes) without there being trans-societal uniformity in the local moral judgments or in the selection and application of those moral absolutes (Cassaniti & Menon, 2017); c) that instead of speaking of “traditional values” we should speak of various “traditions of value”; and d) that diversity in local cultural manifestations of moral absolutes is their fate in history. This pluralist critique set research in moral psychology off in several new directions, not the least of which is the search for recognizable moral foundations in ways of life different from one’s own.

20.3.1 Moral Domains and the "Big Three" of Morality

In response to Kohlberg's theory of moral development, a wave of psychological critiques emphasized the various ways that cultural models of reality challenged Kohlberg's singular model of moral development. Elliot Turiel (2002) developed the "domain" theory critique of Kohlberg, which challenged the underlying development trajectory without rejecting the universalizing liberal ethics that underpinned Kohlberg's approach, even employing the logic of liberalism to distinguish its domain of moral reasoning from reasoning about social conventions. Turiel also uncovered the existence of moral intuitions in children very early in life. Joan Miller and colleagues found significant cultural variation in the extent to which cultural models of personhood affect moral judgment, with special attention to how role- and status-based reasoning shape moral decision making (Miller, 1984; Miller et al., 2011). Shweder, Miller, and others also mapped out multiple alternative postconventional bases for moral reasoning (Miller & Bersoff, 1992; Shweder, 1982; Shweder et al., 1987; Shweder et al., 2003).

This research led to the proposal that differences in moral thinking across cultures can be traced to three distinct conceptions of the self, each of which is associated with a distinct core ethics, or "Big Three" of morality (Shweder et al., 2003). Under the "ethics of autonomy," the self is imagined as an individual agent or preference structure and, as a result, an emphasis is placed on notions of individual rights, harm, justice as equality, and freedom of choice. This ethic tends to be the prevailing moral concern in liberal regions of the Western world. Shweder and colleagues' work revealed alternative fundamental ethics that manifested the same degree of philosophical depth as Kohlberg's postconventional stage but were grounded in a competing set of moral principles and assumptions about persons, society, and nature (Shweder & Much, 1991). Under the "ethics of community," the self is imagined as the occupant of a socially defined role, and the values of duty, loyalty, and hierarchical interdependency are privileged. Additionally, the "ethics of divinity" implicates the self as a piece or fragment of a divine or sacred order, with an emphasis placed on values of purity and sanctity. These distinct models of the self underpin distinct ethical orientations about people and the obligations associated with them. Critically, none of these ethics is reducible to any of the others.

While the core logics of these ethics – autonomy, community, and divinity – may play some role to some degree in all cultural traditions, the specific manifestations of these ethics in any given cultural context vary with the specific ontologies of personhood that underpin them. As ideal types, they point to essential logical forms, which are always insufficient for a full accounting of the ways that these ethics are filled out and made meaningful in cultural contexts (cf. Weber et al., 1978, pp. 18–22). Most cultural frameworks will entail models of personhood that invoke weighted dimensions of the three ethics. Nonetheless, the logical relationships between ethical rationales are critical to understanding the moral outlook of any community (Hickman & Fasoli, 2015). These three ethical ideal types (the ethics of autonomy, community, and divinity) have provided a framework for comparative research on distinct moral development trajectories across the life course in varying cultural contexts, a research program that Lene Jensen has dubbed the "cultural-developmental approach" (Jensen, 2011, 2015).

20.3.2 Moral Foundations

Taking the Big Three as its starting point, a framework for comparative research known as moral foundations theory (MFT) expanded the list of ideal types to include five and eventually six "cognitive modules" (see Haidt, 2012, p. 146, p. 402 n34, for a concise summary of the modules).2 By identifying these "foundations" of moral thought, MFT sought to account for intra-societal differences in moral reasoning (liberals versus conservatives; see Haidt & Graham, 2007) as well as cross-cultural comparisons. Notably, MFT also sought to reconcile moral and evolutionary psychology by grounding the foundations – care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, sanctity/degradation, and (added later) liberty/oppression – in adaptive challenges in human evolutionary history (Haidt & Joseph, 2007). The basic idea is that certain paramount adaptive challenges led to the evolutionary selection of cognitive modules that are "triggered" by phenomena that help a person meet that adaptive challenge (such as caring for children), thereby ensuring the hereditary transmission of this tendency. For Haidt and Joseph, the linchpin between moral thought and these adaptive struggles was the evolved emotional salience that these triggers take on. If a person has a heightened emotional reaction to suffering, for example, one is more likely to act swiftly to alleviate suffering. To the extent that this reaction increases the likelihood of one's offspring surviving, this trigger would be selected for. Moral emotions were thus postulated to mediate the relationship between the foundations and the judgments elicited by a trigger.

Haidt and Joseph accounted for cultural variability in this model by focusing on the extent to which the triggers for these modules change over shorter timescales than that required for evolutionary changes to occur – sometimes, in the course of a single generation. For example, a salient emotional reaction to suffering might be generalized from a concern for human babies to a concern for baby seals, thus expanding the ethical reach of these triggers and the associated cognitive modules. According to Haidt and Joseph (2007), common foundations are universally available in the human psychological makeup, yet the scope of these modules might vary by culture, as would the degree of emphasis on one module over another.

2 This mapping of the six cognitive modules of MFT onto the Big Three is not perfectly isomorphic, and the categories in the two schemes are organized and divided in slightly different ways. However, autonomy (care/harm, fairness/cheating, liberty/oppression), community (loyalty/betrayal, authority/subversion), and divinity (sanctity/degradation) account for these principles, with the caveat that care/harm can be applied to either autonomy or community, depending on whether the focus is the harm to the individual or the responsibility of the caregiver.

20.3.3 Other Approaches

The body of psychological research that engages with the relationship between morality and culture extends beyond the lines of investigation that began as fundamental critiques of the Kohlbergian approach. While a complete survey is beyond the scope of this chapter, a few key research programs should be noted. Rai and Fiske (2011), for example, employ "relational models theory" (Fiske, 1992) to understand how four proposed fundamental moral motivations – unity, hierarchy, equality, and proportionality – can be activated in distinct cultural and interpersonal contexts for developing divergent moral motivations. Fiske and Rai (2015) further deploy this approach to describe what they call "virtuous violence," articulating a theory of the moral foundations of violence.

Another body of work considers the ways that culture, morality, and emotion interrelate in very distinct ways from MFT. One thing to note about MFT is the strong causal link leading from emotional states to moral judgments. This stands in contrast to alternative accounts that see moral thought as having a motivating emotional component (among other components), but one that is founded on perceived moral "truths" and ontological beliefs that both motivate and justify one's evaluative judgments, while not being reducible to feelings or emotions (see Beal, 2020; Prinz, 2007, p. 99; Shweder, 2004, pp. 82–83). Briggs (1998), Cassaniti (2014), and Shweder (2003) provide examples that emphasize culturally distinct relationships between morality and emotion based on ethnographic depth, while Keltner et al. (2014) and Tangney et al. (2007) provide broad reviews that emphasize more reductive (and universalizing) psychological approaches.

20.4 The Importance of (Cultural) Context: Two More Examples

Beyond the psychological literatures that we have briefly reviewed – all of which take different angles on the question of moral realism – there are theorists of moral anthropology who argue that the study of everyday social life
should move forward without any attention to supposed rules of moral reason or abstract moral domain-defining absolutes of the type we have described (cf. Das, 2006, 2012; Lambek, 2010a, 2010b). As should be obvious by now, we believe that abstract moral principles – including the rules of moral reason, significant existential issues, and fundamental ends in themselves (values) we have described – afford the very possibility of recognizing the moral motives and mentalities that infuse everyday life with moral meaning. Everyday life gives local shape and substance to those abstract moral truths. Thus, the fate of moral absolutes in history is that they are implemented selectively and made manifest in distinctly local, context-sensitive ways. Of course, moral absolutes standing alone, without the particulars of everyday cultural life that give them substance and particularity, are empty. On the other hand, the particulars of everyday life, standing alone outside any moral reality, are blind to their own moral significance and are thus devoid of any moral meaning. To reduce the chance of misunderstanding, we wish to emphasize that central to our version of moral realism is a pluralistic vision of the domain of moral absolutes and their application in everyday life in local heritage communities. In other words, our version of moral realism is premised on the idea that the domain of moral absolutes is heterogeneous, and therefore rich in conflict between equally compelling moral principles – “the imagined truths or posited objective ‘goods’ that are the cognitive grounds for moral judgments around the world are many, not one” (Shweder, 2004, pp. 84–85). This version of moral realism is thus different from any universalist moral realist perspective that asserts a singular, paramount, maximal moral good to which all other moral goods could be subsumed. It is different as well from relativist perspectives that collapse the distinction between the moral and the social, reduce the moral to the social, or assert that there is nothing more to moral “truths” over and above the collective subjectivity of this or that cultural community. We believe it makes sense to distinguish between the fundamental nature of moral goods in such a way that they are not reduced to culture and history, while allowing for a fundamental tension between moral goods themselves. Thus, the pluralism in our moral realist approach operates at the level of the diverse rules of moral reason and objective moral goods and not just in cultural interpretations of moral goods. At the same time, the elaboration of these moral principles in everyday cultural experience is precisely what makes them meaningful and real. Our approach thus allows for extensive diversity and incommensurability at both the level of abstract moral principles and their practical deployment. We draw on ethnographic work by each of the authors of this chapter to illustrate these points, which fill out key elements of the “fate of moral absolutes in history” approach: (1) The Brahman example that we have already described demonstrates that moral principles unto themselves are insufficient, and that metaphysics and cosmology fundamentally matter in the way that moral goods are filled out and applied in everyday life.

(2) The Native American example that we will describe next demonstrates that singular moral goods can be filled out in distinct and even competing, irreconcilable ways.

(3) The Hmong example that we will also describe demonstrates that multiplicity and incommensurability in moral practice can result from diversity in the domain of moral goods, which can create conflicts within a particular domain of metaphysical belief.

20.4.1 Native American Whaling While we call them moral absolutes, it is important to emphasize that the expressions of these abstract moral principles vary greatly depending upon sociocultural context. And while we contend that a limited set of moral absolutes may be recognizable to any rational mind, these principles are not in themselves determining or dispositive with respect to specific situations – they provide no guidance on how to act – unless and until they are given shape and substance in the concrete milieu of social life. Hence the Kantian dictum that “thoughts without content are empty, intuitions without concepts are blind” (Kant, 1787, p. B75). The moral absolutes we have in mind may seem “empty” and they are certainly “abstract,” but they also provide a foundation for moral recognizability across traditions. Their abstract qualities ground the recognizability of culturally specific moral practices and judgments as moral, providing the basis for “moral” intelligibility across traditions, despite deep differences. One implication is that the same moral principle can underlie distinct or even opposing moral judgments. Consider, for instance, the abstract moral principle of care. Expressed in the form of a moral absolute, care is the idea that you ought to discharge the duties reasonably expected of you, especially as regards protecting others who are in your charge and who are in some sense vulnerable (meaning that their fate or welfare depends on your actions). More than just a matter of local norms or contractual obligation, the expectation to provide care triggers a moral judgment. Failure to provide care does not just place someone in breach of contract or out of step with local norms. It brands them as negligent, faithless, or selfish – perhaps even evil. The idea of care and the consequences of the failure to provide it can be recognized across historical and cultural contexts. But what counts as care, who is expected to provide it to whom, and to what extent, all depend on cultural elaboration. Going even further, this same abstract principle of care, when given cultural expression, may produce vastly different judgments about the same real-world issue. Consider how a Native American (Makah) whale hunter described the preparation that goes into a whale hunt: It boils down to your own individual responsibility, if you take the life of such a magnificent creature, that you prepare yourself mentally, physically, and spiritually. You take that responsibility of ushering that spirit, caring for that spirit, and preparing it to leave its body . . . to usher the spirit of the whale into the next world, in a manner in which it’s accepted by those who came before it, in the same manner. (Beldo, 2019, p. 95)

Care is a recurring motif in spiritual discourse about whales and whaling in the community, so much so that “taking care of the whale” is a shorthand for describing a proper hunt, from the planning stages to bringing a whale to the beach (Beldo, 2019). This dovetails with the idea, common among northern hunting peoples, that the hunted animal can be induced to “offer itself” to the hunter if the hunter goes through the proper spiritual preparation (see Brightman, 1993). Now consider how an activist from a nearby town in Washington described her motivation to oppose the resumption of whaling and form a protest group: They [the whalers] say they understand the whale, that nobody knows the whale better than they do . . . But I know what it wants. It wants to live and eat and be with its family. You don’t tell me it wants to be stabbed and blown up! (Beldo, 2019, p. 95)

The members of the group emphasized the special connection they felt to the whales who were frequently spotted in the nearby ocean waters during the summer months. At protests, they held up a sign reading “support the whales.” In their public statements, the group talked about the whales’ trusting nature and the vulnerability that this created and the need for someone to “speak for the whales.” Without attempting to reduce the many complex differences between these two worldviews, note how we can recognize the abstract principle of care in each of these two descriptions of how (and why) one ought to conduct oneself in relation to a gray whale. For the whaler, discharging one’s responsibility means accepting the nonnegotiable responsibility to competently usher the whale’s spirit into the next world, having completed all of the necessary physical and spiritual preparations before doing so, thus fulfilling the whale’s desire and the telos of its spirit. For the antiwhaling activist, care means stopping other humans from harpooning whales, which is understood as a violent act and a betrayal of the trust that these intelligent animals have placed in humans. The difference between these two worldviews can be rendered comprehensible without reference to divergent abstract moral principles. Indeed, each worldview appeals to a similar abstract notion of care and responsibility. But for this to be sensible, one must have a sufficient understanding of the different cultural conceptions of whales, and of what whales want from humans, and what counts as care. Yet the presence of the abstract notion of care and responsibility makes both positions comprehensible to people from different cultural traditions and may promote some mutual understanding, if not agreement, between the people who hold these opposing views. While it is true that a local emphasis on different underlying moral principles explains the diversity of moral mentalities (cf. Haidt & Graham, 2007), sometimes the same abstract moral principle can form the basis of different and conflicting judgments related to different ontological beliefs (see Beal, 2020). There is a temptation to conclude from this ethnographic comparison that – despite surface-level differences in how they are employed – ultimate moral

goods are really, deep down, pretty much the same everywhere you go (cf. Shweder, 2004, p. 92). However, the observation that minimally substantive abstract moral truths can be culturally filled out in distinct (and even competing) ways is but one critical observation. The pluralist framework that we are advancing also allows for the possibility of multiple, incommensurable fundamental moral absolutes to be at play within a single cultural frame, without the need to reduce these distinct moral absolutes to one another. In short, the set of moral absolutes that are available to undergird cultural-moral frameworks are many, not one. While the Native American whaling example demonstrates that distinct cultural operationalizations of a single moral absolute can conflict, the following example from a Hmong pastor in Vietnam demonstrates that moral conflict can result from incommensurability between the moral absolutes that themselves undergird different elements of a single cultural-moral world.

20.4.2 A Master and Pastor in Vietnam The experience of a Hmong Christian pastor in Vietnam, named Xwb Fwb (pronounced “soo foo”), demonstrates this point. Prior to converting to Christianity, Xwb Fwb was a master of the repertoire of traditional ancestral rites that make up the Hmong canon (kev cai dab qhuas). These rites are intricate, complex, and any particular rite – a funeral blessing (txiv xaiv), a wedding (zaj tshoob), an ancestral propitiation (fiv yeem) – requires memorizing hours of poetic content in an archaic register to be chanted, sung, and accompanied by specific ritual actions. Most practitioners only gain mastery over one or two ritual forms, as each of them requires years of study under another master. Xwb Fwb had mastered most of the canon. He had done so from a relatively young age, gaining a reputation for ritual acumen and expertise that exceeded most practitioners. This status made him a ritual leader among his relatives, which consisted of about 15 households in a once-remote subsistence community that is now being economically transformed by tourism and other industries. Xwb Fwb eventually converted to Christianity, and most of his relatives followed him as their spiritual leader. He spent time in Saigon and Hanoi receiving Protestant theological training, and he eventually gained the status of Pastor, thus completing the transformation from traditional ritual master of the family to a Christian pastor of the family. They built a church just below Xwb Fwb’s house, where they would worship every Sunday, holding an additional Bible study once a week. While he considered himself a deeply committed Christian, Xwb Fwb still clearly retained a sense of the importance of the traditional rituals. On the one hand, he believed that Christianity would lead to the salvation of Hmong people’s souls in heaven. But on the other hand, he was saddened by the prospect of traditional Hmong rituals disappearing. This tension was heightened by the fact that, prior to his conversion, Xwb Fwb took on five students who sought him out to teach them the repertoire of traditional Hmong

rituals as their master. The pastor took his Confucian obligation to his students very seriously, even after his conversion to Christianity. However, this situation presented a fundamental tension between loyalty, on the one hand, and sanctity on the other. His loyalty is rooted in the Confucian obligation of a master to properly train one’s students. Failing to do so would constitute a fundamental abrogation of responsibility. Xwb Fwb’s notion of sanctity stood in direct conflict with this obligation, however. Both traditional Hmong ritual practice and Protestant Christian theology require a maintenance of ritual purity based on one’s religious commitments. As a Christian pastor, performing traditional Hmong ancestral rites for or with his students would violate the sanctity of singular Christian devotion. Hmong Protestants typically refuse to participate in ancestral rituals or even consume the meat of animals that were sacrificed by their relatives in these rituals, often even compromising family relationships (because of non-participation) in pursuit of this sanctity. Xwb Fwb’s initial attempt to resolve this tension was to convert his five students to Christianity, with the hope of absolving him of the responsibility to finish teaching them the ancestral rituals. Three of the apprentices ultimately did convert and became his deacons in the Christian congregation. However, two of his protégés were not interested in Christianity and wished for the pastor to continue to teach them the traditional canon of Hmong ritual utterances and rites. Xwb Fwb took this Confucian obligation so seriously that, while he admitted that he was ultimately providing them with knowledge that would foreclose them from attaining the highest degree of happiness in the post-mortal realm – going to heaven to abide with God – his obligation as a ritual master was not obviated by his conversion to Christianity. The way that he chose to reconcile this tension between loyalty and sanctity is instructive. Traditional Hmong ritual practice requires utterances to be performed in a certain way in order to be effective. Some messages have to be played through a bamboo reed instrument – the qeej – and others have to be sung or chanted to particular tunes. In other words, it is the marriage of form and content that brings about the actual ritual effects, such as guiding a deceased person back to the ancestors or ensuring that descendants receive blessings of health, wealth, and prosperity (Hickman, 2007, 2014). In order to avoid having to actually perform these rites – thus carrying out an act that would violate his Christian commitments – Xwb Fwb changed his mode of instruction to his students. He would teach them the tunes and the words separately – humming the tunes and articulating the words separately without any tune. By failing to personally combine the tune together with the words of the rite (i.e., form plus content), the words themselves would have no illocutionary force, and Xwb Fwb would have therefore not violated the sanctity of his Christian commitments. In this way, he could maintain his Confucian obligation to impart to his students the knowledge they need (even if he had to break rituals down into constituent parts for his students to re-assemble independently) without sinning against the Christian God by actually “performing” an ancestral ritual, which Xwb Fwb still considered to have real spiritual effects. His adapted mode of training his apprentices struck a balance between

these competing moral demands, both of which constitute elements of his moral reality. However, this tension between loyalty and sanctity was not completely resolved. In a sense, Xwb Fwb was providing his students with the ritual knowledge and skill that would ultimately damn them spiritually, and he lamented that fact. But he saw himself as doing the best he possibly could in order to meet the competing demands of his loyalty to his apprentices and his commitments to Christian sanctity. This struggle with competing ethical demands is not specific to cases of religious conversion or migration but rather pervades ethical life in many contexts. While Western philosophy tends to assume that any system of thought ought to avoid inconsistencies, this is not the case in many traditions (Nuckolls, 1993), nor even, in actuality, in the Western tradition (Nuckolls, 1998). Tensions between moral absolutes within a cultural frame reveals an inherent dynamic that analysts should attend to, because much of everyday moral life involves negotiating localizations of competing moral absolutes. How moral actors navigate these complexities in specific contexts is instructive (see Hickman & Fasoli, 2015). Taken together, these three ethnographic examples paint a more complete picture of how moral absolutes take on specific lives over the course of history. The moral reasoning in the Brahman widow interview teaches us that abstract moral absolutes only take on their real power when made meaningful and inflected through particular cosmologies. The Native American whalers and their antagonists teach us that a single moral absolute can lead to divergent interpretations of the world and conflicting courses of action. The Hmong pastor teaches us that these moral conflicts can arise from the logical demands of competing moral absolutes within a cultural framework. These three ethnographic examples illustrate how local context is fundamental not only to how our interlocutors think about moral issues but also to how these issues become real and carry the force of genuinely felt moral obligation. Understanding these conflicts between moral absolutes, and attending to how moral actors deal with them, requires an understanding of the meanings that these moral goods take on in the local context, including the existential underpinnings of basic issues, as we have discussed. We conclude by briefly commenting on how an ethnographic focus on the fate of moral absolutes in cultural context is required for an approach to morality and culture that makes use of a language for comparison that is both realist and pluralist in form. In doing so, we respond to some prominent critiques that have emerged in specifically anthropological approaches to understanding morality and culture.

20.5 Realism versus Antirealism

In both anthropology and psychology there are competing realist and antirealist frameworks for understanding morality in culture. Among those
psychological frameworks that we have described already, Kohlberg’s approach (building on Piaget and Kant before him) is realist in that he assumes that moral absolutes rationally demand respect. Moral foundations theory is realist in that “moral intuitions” as defined by that approach are real, not because of their rational quality of providing good reasons for one’s actions, but because they coincide with evolutionary adaptations in human history. There are also more distinctly antirealist positions taken up by some psychologists, such as the social constructionist approach of Kenneth Gergen, which includes the view that “what one takes to be the real, what one believes to be transparently true about human functioning, is a by-product of communal construction” (Gergen, 2001, pp. 805–806). This approach “views discourse about the world not as a reflection or map of the world but as an artifact of communal interchange” (Gergen, 1985, p. 266). In anthropology, an “ethical turn” has recently been declared (Cassaniti & Hickman, 2014; Laidlaw, 2002; Mattingly & Throop, 2018; Robbins, 2007), as anthropological scholarship on morality has proliferated at an accelerating pace, exhibiting both realist and antirealist tendencies. Of course, the interest in morality is hardly new to the discipline. One line can be traced to Emile Durkheim, who tried (unsuccessfully, in our view) to turn Kant on his head by equating the moral domain with social experience.3 Critical reactions against this “Durkheimian collapse” of the moral into the social have resulted in some anthropological frameworks that we would consider broadly realist. Two key contributions (Laidlaw, 2002, 2014; Robbins, 2007) explicitly responded to this “Durkheimian collapse” and developed positions consistent with the type of pluralistic moral realism that we articulate in this chapter. James Laidlaw developed a realist approach that seeks to ground morality in a rethinking of freedom and responsibility, emphasizing the fact that “everyday conduct is constitutively pervaded by reflective evaluation” (2014, p. 44). Joel Robbins sought to recover the notion of transcendence (which roughly maps onto what we have called moral absolutes) in the anthropology of morality while also reconciling this with ethnography of the everyday: Even in the course of everyday life, some of the desirability of values that is produced in transcendent encounters with them must surely still be felt. In the everyday . . . persons rarely attempt to realize single value-linked desires fully. But the pushes and pulls those different values exert give everyday life much of its sense of forward movement, or at least of ethical potential. (Robbins, 2016, p. 780)

Both Robbins and Laidlaw are able to take the moral realism of their interlocutors seriously, and both develop what we would call descriptive moral realist frameworks that underpin their comparative anthropologies of morality

3 Kant famously viewed the existence of moral reality on a par with the existence of time and space and as a synthetic apriorism that made it possible for human beings to have intelligible moral experiences in the first instance.

(and, similar to this chapter, both frameworks do so descriptively, without needing to defend any particular metaphysical grounding of moral principles). One persistent antirealist influence in anthropology has been the tendency to account for moral discourse, belief, and action reductively, as the ideological product of other, nonmoral determining forces – such as the pressure of social norms, the interests of the dominant social class, or other “structural” factors. This tendency can be traced to two main sources. First, the search for comprehensive social explanations tends to leave little room for individual action as a cause rather than as an effect. Second, any analysis of human conduct must grapple with questions of power and inequality, recognizing that human societies are not always, or even typically, level playing fields. Responding to these challenges, anthropologists have generally erred toward what Bauman (1988) calls a “science of unfreedom,” beginning with the early pioneers of the discipline like Boas and his students, who essentially viewed morals as “socially approved habits” (Benedict, 1934, p. 73). More recent trends in anthropology have focused on power, oppression, and resistance as the primary explanans of social life (for a critique, see D’Andrade, 1995), thus preserving a conceptual opposition between the individual and society and implying that individual freedom begins only where social pressures end – a zone for freedom that many anthropologists (and plenty of social psychologists) believe does not in fact exist. The problem is that the very notion of morality depends on the ability of a moral subject to act freely. This follows from the basic philosophical insight that “ought implies can” (Ladd, 1957, p. 86). To be clear, freedom in this sense should not be confused with either: 1) a historically particular “liberal” conception of free choice that features prominently in the discourse of Western democratic societies; or 2) a liberationist ideal (common to contemporary anthropological analyses of power and politics) that imagines a utopian world completely free of constraints. Instead, by freedom we mean the much more basic human capacity for reflection upon imagined alternatives, which underpins the possibility of ethical action. “Something like this conception of reflective freedom,” Laidlaw (2014) writes, “is intrinsic to the very idea of ethics” (p. 149; see also Mahmood, 2004; Robbins, 2007; Shweder, 2009). If humans are merely acting out social scripts, functioning as part of oppressive systems, or acting based on unconscious drives, then there is little room for talking about morality or ethics as such. As social scientists think more about culture as a determining factor in human conduct, they would do well to avoid replicating this error that places culture, morality, and freedom in tension. Nor is it necessary to view power (or social structure) and freedom as opposites. No society has ever been free of normalizing pressures and other forms of social constraint, but if we consider relations of power, hierarchy, and inequality as part of the social context in which freedom (in its various culturally and historically particular forms) becomes possible (Laidlaw, 2014) – rather than as a frontier that marks the zone of freedom from the zone of

determinism – then this presents less of a problem for thinking about morality in culture. This approach obeys the commonsense intuition that while no society lacks extensive forms of social control (not even a liberal one), few societies completely lack individual freedoms. Further, it is possible (perhaps not even all that difficult) to recognize differences between societies in terms of the particular areas for reflective freedom afforded to its members (e.g., how to go about reducing one’s spiritual debts; or which marriage alliances to form for sake of the future well-being of one’s extended family). The Durkheimian conflation of the moral with the social also motivates several of the antirealist strains of anthropological theories of morality. However, some anthropologists who find a total relegation of the moral order to the social order unsatisfactory also sense serious problems with a stringent rule-based Kantian deontological moral realism as they try to give an account of the diverse ethnographic findings on moral life. For example, “phenomenological” approaches (Mattingly, 2014; Zigon & Throop, 2014) attempt to ground morality in experience (e.g., Mattingly’s “first person virtue ethics”) and define it by how people find themselves enmeshed in relations with people and things in the world. In a slightly different vein, the “ordinary ethics” approach (Das, 2012; Lambek, 2010a) takes fundamental issue with the transcendent nature of Kantian deontological ethics, and as a result focuses instead on everyday experience and practice. Both of those approaches can be juxtaposed against what we are calling the language of everyday moral realism. To philosophically locate these approaches, it is helpful to recall that Plato, in distinguishing appearances/experiences (the phenomenal world) from what is really real (the objective noumenal world), set off the longest unresolved quarrel in the history of philosophy. Among Plato’s adversaries and opponents to his philosophical rationalism are the skeptics, who argue that we can never get beyond appearances (so the really real cannot be known at all) and the phenomenologists, who argue that appearance is reality (so there is nothing else to know). Zigon (2008) articulates a phenomenological “moral breakdown” framework, arguing that “norms of morality are only constructed as total and unified after the fact of articulation in speech or thought” (Zigon, 2009, p. 287). Similarly, Lambek and Das argue for an ethnographic focus on the “everyday” and they resist any definition of morality as distinct from any other aspect of social life, even analytically. Das states, for example, that ethical work is “done not by orienting oneself to transcendental, objectively agreed-upon values but rather through the cultivation of sensibilities within the everyday” (Das, 2012, p. 134, original emphasis). Lambek frames his approach in similar terms: “If I have advocated the exercise of practical judgment at the expense of following (or rejecting) rules, that is in large part because it is a more accurate description of how we live” (Lambek, 2010b, p. 61). Both the phenomenological and ordinary ethics theorists share an antirealist stance rooted in the idea that everyday social life is messy and not rulegoverned, and that moral principles are neither inherently binding, nor do they

capture the complicated nature and texture of real life (Beldo, 2014). There are always exceptions to any rule, they argue, and any rule fails to capture the entirety of what matters in ethical experience. For these reasons, phenomenologists and ordinary ethics theorists choose to focus on “embodied experience,” “relationality,” and “the everyday.” By way of contrast, the moral realist approach outlined in this chapter emphasizes the Kantian reciprocal rationalist maxim that “thoughts without content are empty, intuitions without concepts are blind” (Kant, 1787, p. B75). If we consider everyday experiences analogous to “intuitions” in this maxim,4 and further consider the moral absolutes identified in this chapter as analogous to Kant’s notion of “concepts,” then we can paraphrase the second clause of the maxim as “everyday experiences devoid of moral concepts are blind” and therefore devoid of moral significance or self-understanding. Similarly, just as “thoughts without content are empty,” “abstract moral concepts devoid of everyday experiences are empty.” Thus, moral absolutes leave considerable room for the contingent and concrete work of culture and history to give distinctive character and substance to those abstract properties which themselves are (as we have argued) necessary to give definition to an everyday experience as a moral experience (rather than, for example, as merely an aesthetic experience or the experience of coercion or oppression). The pluralist moral realist approach we develop here need not adopt Kant’s actual metaphysics that are the regular object of critique for these ordinary ethics and phenomenological approaches, and in fact we do not share those ultimate metaphysical positions. But we also argue that it would be a mistake to reject moral realism altogether because one does not agree with the metaphysical underpinnings of Kantian ethics. Both the phenomenological and ordinary ethics positions level important critiques against an oversimplified view of the power of a moral absolute to produce a particular moral judgment. However, our emphasis on the fate of moral absolutes in history both recognizes and responds to these problems of oversimplification, including a sole emphasis on abstract or “stand-alone” value concepts. Our three earlier ethnographic examples demonstrate how the moral issues at stake are quite specific to contexts and reliant on existential issues inherent in the perspectives and historical experiences of particular communities. But at the same time, the notion of rules, moral absolutes, or goods worthy of pursuit are not thrown out with the bath water. Just because the fabric of moral reality might be made up of moral absolutes does not imply that the manifestation or application of these absolutes in real life will be clearcut or determinative. Moral rules constitute only one potential element of a

4 Smit (2000) summarizes a key contrast in Kant’s philosophy: “intuitions are immediate and singular, concepts, mediate and general” (p. 236). While we are not strictly equating everyday experience with Kant’s “Anschauungen,” the comparative distinction between intuitions and concepts on the one hand, and everyday experience and moral absolutes on the other, is instructive.

deontological framework, and a critique of the limitations of “rules” per se is insufficient grounds to reject moral realism altogether (see Rescher, 1993, for a critique of an overemphasis on rules in critiques of moral realism). Ordinary ethics theorists may want to push issues of moral reality aside, but this does not mean that they do not creep back into their own work through what we would describe as the morally realist political critiques of ordinary ethics theorists. An obvious example is Veena Das’s critical account of collective violence in India, which emphasizes the everyday elements of these experiences, but also relies on moral absolutes for the very critique and appeals to the reader for compassion in the face of violence and injustice (e.g., Das, 2006). An analysis based on the fate of moral absolutes in history would fully expect this to be the case, precisely because the nature of human moral experience is to apply what one feels one knows about principles of moral reason to one’s everyday experience. One final point of critique needs to be addressed – namely the claim that moral reasoning is merely an ex post facto exercise in rationalizing one’s moral experience. In psychology (e.g., Haidt, 2012), this comes in the form of asserting the primacy of intuitions – commonly conceived as rooted in emotion or in inherited adaptive behaviors that emerged in the Pleistocene – over deliberative rational processes. In some corners of moral anthropology, moral rationales are seen as either irrelevant or subsidiary to practices (for a consideration of the issue see Lambek, 2010b), or as the work that is undertaken in moments of “breakdown” to return to a supposedly homeostatic mode of implicit and nonreflective moral life (Zigon, 2008). While there is surely more to moral experience than just the moral logics and moral discourses that people produce and circulate, we propose that moral rationales, justifications, and discourse are also critical elements of moral experience itself. Indeed, they are practices themselves, even if higher-order ones, that entail various levels of reflection. But, quite crucially, they also presuppose the intuitive grasp of the moral absolutes that help frame and guide what moral actors experience as right and wrong and as definitive of what it means to be “a good person.” Intuitions of the moral absolutes described in this chapter will vary in how they are manifest in discourse and practice, but we suggest that they matter nonetheless, and not just to theorists, but to moral actors themselves. The version of descriptive moral realism we have outlined begins with the assumption that there are abstract moral principles that inform everyday moral realism everywhere, and that these are discoverable much the way logical and mathematical truths are discoverable. Those moral principles are not necessarily harmonious or easily reconcilable with one another. In fact, our framework suggests that there will be deep, abiding, irreconciled tensions and trade-offs between different genuine moral goods (e.g., loyalty versus sanctity) and between different abstract rules of moral reason. These tensions and trade-offs are one of the reasons that moral judgments do not spontaneously converge over time or across different cultural communities (Shweder, 2004). In sum, we have suggested that one way to study morality in culture is to trace the fate of

moral absolutes in history and in culture, identifying and paying attention to the base set of moral absolutes that – once given shape and substance – are particularized and made concrete in particular communities, making moral practices and moral experiences recognizable as such in the first place. A core mission for the study of morality in culture is to document, explicate, and explain the part played by those moral absolutes as they invest local moral evaluations with a recognizably reasonable and sensible moral meaning.

Acknowledgments

This chapter incorporates in part the conference address entitled “The Fate(s) of Moral Absolutes across Cultures: Moral Realism without the Ethnocentrism” delivered by Richard A. Shweder at the University of Sussex (England) Conference on “Methodologies in the Anthropology of Ethics” on December 21, 2020.

References Bauman, Z. (1988). Freedom. Open University Press. Beal, B. (2020). What are the irreducible basic elements of morality? A critique of the debate over monism and pluralism in moral psychology. Perspectives on Psychological Science, 15(2), 273–290. Beldo, L. (2014). The unconditional ‘ought’: A theoretical model for the anthropology of morality. Anthropological Theory, 14(3), 263–279. Beldo, L. (2019). Contesting Leviathan: Activists, hunters, and state power in the Makah whaling conflict. University of Chicago Press. Benedict, R. (1934). Anthropology and the abnormal. Journal of General Psychology, 10(1), 59–82. Berlin, I. (1997). The counter-enlightenment. In H. Hardy & R. Hausheer (Eds.), The proper study of mankind: An anthology of essays (pp. 243–268). Farrar, Straus and Giroux. Briggs, J. L. (1998). Inuit morality play: The emotional education of a three-year-old. Yale University Press. Brightman, R. A. (1993). Grateful prey: Rock cree human-animal relationships. University of California Press. Cassaniti, J. L. (2014). Moralizing emotion: A breakdown in Thailand. Anthropological Theory, 14, 280–300. Cassaniti, J. L., & Hickman, J. R. (2014). New directions in the anthropology of morality. Anthropological Theory, 14(3), 251–262. Cassaniti, J. L., & Menon, U. (Eds.). (2017). Universalism without uniformity: Explorations in mind and culture. University of Chicago Press. D’Andrade, R. (1995). Moral models in anthropology. Current Anthropology, 36, 399–408. Das, V. (2006). Life and words: Violence and the descent into the ordinary. University of California Press.

Das, V. (2012). Ordinary ethics. In D. Fassin (Ed.), A companion to moral anthropology (pp. 133–149). Wiley-Blackwell. Durkheim, E. (1953). The determination of moral facts. In E. Durkheim, Sociology and philosophy (pp. 35–62). Routledge. (Original work published 1906) Firth, R. (2013). We the Tikopia: A sociological study of kinship in primitive Polynesia. Routledge. (Original work published 1936) Fiske, A. P. (1992). The four elementary forms of sociality: Framework for a unified theory of social relations. Psychological Review, 99(4), 689–723. Fiske, A. P., & Rai, T. S. (2015). Virtuous violence: Hurting and killing to create, sustain, end, and honor social relationships. Cambridge University Press. Geertz, C. (2000). Available light: Anthropological reflections on philosophical topics. Princeton University Press. Gergen, K. J. (1985). The social constructionist movement in modern psychology. American Psychologist, 40(3), 266–275. Gergen, K. J. (2001). Psychological science in a postmodern context. American Psychologist, 56(10), 803–813. Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage Books. Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20(1), 98–116. Haidt, J., & Joseph, C. (2007). The moral mind: How five sets of innate intuitions guide the development of many culture-specific virtues, and perhaps even modules. In P. Carruthers, S. Laurence, & S. P. Stich (Eds.), The innate mind (pp. 367–391). Oxford University Press. Hickman, J. R. (2007). “Is it the spirit or the body?”: Syncretism of health beliefs among Hmong immigrants to Alaska. NAPA Bulletin (Annals of Anthropological Practice), 27(1), 176–195. Hickman, J. R. (2014). Ancestral personhood and moral justification. Anthropological Theory, 14(3), 317–335. Hickman, J. R., & Fasioli, A. D. (2015). The dynamics of ethical co-occurrence in Hmong and American evangelical families: New directions for Three Ethics research. In L. A. Jensen (Ed.), Moral development in a global world: Research from a cultural-developmental perspective (pp. 141–169). Cambridge University Press. Jensen, L. A. (2008). Through two lenses: A cultural-developmental approach to moral reasoning. Developmental Review, 28(3), 289–315. Jensen, L. A. (Ed.). (2011). Bridging cultural and developmental approaches to psychology: New syntheses in theory, research, and policy. Oxford University Press. Jensen, L. A. (Ed.). (2015). Moral development in a global world: Research from a cultural-developmental perspective. Cambridge University Press. Kant, I. (1787). Kritik der reinen Vernunft (2nd ed.). Johann Friedrich Hartknoch. Keltner, D., Kogan, A., Piff, P. K., & Saturn, S. R. (2014). The sociocultural appraisals, values, and emotions (SAVE) framework of prosociality: Core processes from gene to meme. Annual Review of Psychology, 65, 425–460. Kohlberg, L. (1981). The philosophy of moral development: Moral stages and the idea of justice: Vol. 1. Essays on moral development. Harper & Row. Ladd, J. (1957). The structure of a moral code: A philosophical analysis of ethical discourse applied to the ethics of the Navaho Indians. Harvard University Press.

Laidlaw, J. (2002). For an anthropology of ethics and freedom. Journal of the Royal Anthropological Institute, 8(2), 311–332. Laidlaw, J. (2014). The subject of virtue: An anthropology of ethics and freedom. Cambridge University Press. Lambek, M. (Ed.). (2010a). Ordinary ethics: Anthropology, language, and action. Fordham University Press. Lambek, M. (2010b). Toward an ethics of the act. In M. Lambek (Ed.), Ordinary ethics: Anthropology, language, and action (pp. 39–63). Fordham University Press. Mahmood, S. (2004). Politics of piety: The Islamic revival and the feminist subject. Princeton University Press. Mattingly, C. (2014). Moral laboratories: Family peril and the struggle for a good life. University of California Press. Mattingly, C., & Throop, J. (2018). The anthropology of ethics and morality. Annual Review of Anthropology, 47, 475–492. Miller, J. G. (1984). Culture and the development of everyday social explanation. Journal of Personality and Social Psychology, 46(5), 961–978. Miller, J. G., & Bersoff, D. M. (1992). Culture and moral judgment: How are conflicts between justice and interpersonal responsibilities resolved? Journal of Personality and Social Psychology, 62(4), 541–554. Miller, J. G., Das, R., & Chakravarthy, S. (2011). Culture and the role of choice in agency. Journal of Personality and Social Psychology, 101(1), 46–61. Nietzsche, F. (1974). The gay science (W. Kaufmann, Ed. and Trans.). Random House. (Original work published 1882) Nuckolls, C. W. (1993). The anthropology of explanation. Anthropological Quarterly, 66(1), 1–21. Nuckolls, C. W. (1998). Culture: A problem that cannot be solved. University of Wisconsin Press. Prinz, J. J. (2007). The emotional construction of morals. Oxford University Press. Rai, T. S., & Fiske, A. P. (2011). Moral psychology is relationship regulation: Moral motives for unity, hierarchy, equality, and proportionality. Psychological Review, 118(1), 57–75. Rescher, N. (1993). Pluralism: Against the demand for consensus. Clarendon Press. Robbins, J. (2007). Between reproduction and freedom: Morality, value, and radical cultural change. Ethnos: Journal of Anthropology, 72(3), 293–314. Robbins, J. (2012). Cultural values. In D. Fassin (Ed.), A companion to moral anthropology (pp. 115–132). Wiley-Blackwell. Robbins, J. (2016). What is the matter with transcendence? On the place of religion in the new anthropology of ethics. Journal of the Royal Anthropological Institute, 22(4), 767–781. Shweder, R. A. (1982). Liberalism as destiny. Contemporary Psychology, 27(6), 421–424. Shweder, R. A. (1994). Are moral intuitions self-evident truths? Criminal Justice Ethics, 13(2), 24–31. Shweder, R. A. (2003). Toward a deep cultural psychology of shame. Social Research, 70(4), 1109–1130. Shweder, R. A. (2004). Moral realism without the ethnocentrism: Is it just a list of empty truisms? In A. Sajó (Ed.), Human rights with modesty: The problem of universalism (pp. 65–102). M. Nijhoff Publishers.

Shweder, R. A. (2009). Shouting at the Hebrews: Imperial liberalism v liberal pluralism and the practice of male circumcision. Law, Culture and the Humanities, 5(2), 247–265. Shweder, R. A., Jensen, L. A., & Goldenstein, W. M. (1995). Who sleeps by whom revisited: A method for extracting the moral goods implicit in practice. New Directions for Child and Adolescent Development, 1995(67), 21–39. Shweder, R. A., Mahapatra, M., & Miller, J. G. (1987). Culture and moral development. In J. Kagan & S. Lamb (Eds.), The emergence of morality in young children (pp. 1–83). University of Chicago Press. Shweder, R. A., & Much, N. (1991). Determinations of meaning: Discourse and moral socialization. In R. A. Shweder (Ed.), Thinking through cultures (pp. 186–240). Harvard University Press. Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (2003). The ‘Big Three’ of morality (autonomy, community and divinity), and the ‘Big Three’ explanations of suffering. In R. A. Shweder (Ed.), Why do men barbecue? Recipes for cultural psychology (pp. 74–133). Harvard University Press. Sidgwick, H. (1884). The methods of ethics (3rd ed.). MacMillan. Simpson, E. L. (1974). Moral development research: A case study of scientific cultural bias. Human Development, 17(2), 81–106. Smit, H. (2000). Kant on Marks and the immediacy of intuition. Philosophical Review, 109(2), 235–266. Tangney, J. P., Stuewig, J., & Mashek, D. J. (2007). Moral emotions and moral behavior. Annual Review of Psychology, 58, 345–372. Turiel, E. (2002). The culture of morality: Social development, context, and conflict. Cambridge University Press. Weber, M., Roth, G., & Wittich, C. (1978). Economy and society: An outline of interpretive sociology. University of California Press. Zigon, J. (2008). Morality: An anthropological perspective. Berg Publishers. Zigon, J. (2009). Phenomenological anthropology and morality: A reply to Robbins. Ethnos, 74(2), 286–288. Zigon, J., & Throop, C. J. (2014). Moral experience: Introduction. Ethos, 42(1), 1–15.

PART V

Applications and Extensions

21 Criminal Law, Intuitive Blame, and Moral Character

Janice Nadler

Blame and punishment doctrines within Anglo-American criminal law in many ways track psychological intuitions about blame and punishment in ordinary social life, but there are variations as well. Psychologists have explored various information factors that play a role in intuitive judgments of blame, such as facts about the conduct in question, the person who performed the conduct, and the causal connection to the harmful outcome (Cushman, 2008; Guglielmo, 2015; Shaver, 1985; Weiner, 1995). Legal blame in the criminal context also takes account of the conduct in question, the mental state of the actor, as well as the causal link between conduct and outcome. This chapter explains the legal doctrine and explores some of the ways in which criminal blame and ordinary social blame (hereinafter “intuitive blame”) converge and diverge. (See Chapter 15, this volume, for a detailed analysis of ordinary blame.)

In criminal prosecutions, the government’s task is to prove that the accused person’s conduct meets the requirements of the criminal offense in question. Blame, in a criminal law context, is a carefully calculated product of discrete judgments about a transgressor’s intentionality, conduct, and causal proximity to harm. The logic of criminal blame involves separate consideration of each element of the offense: the specified harmful act or result, performed with a specifically defined blameworthy mental state, in the absence of a claim of defense that would justify the harm (such as self-defense) or excuse the offender (such as duress). Criminal blame calculations can be quite complex, especially when there are multiple actors and complex statutory requirements. For example, it is not unusual for the indictment (the document formally accusing the defendant) in a complicated conspiracy case to be over 100 pages long.

Intuitive blame, on the other hand, is often performed in situations that call for fast judgments (Monroe & Malle, 2017) and arguably is driven by basic motivations to express and defend social values and expectations (Bilz, 2016; Carlsmith et al., 2002; Kahan & Braman, 2008; Pizarro & Tannenbaum, 2012). According to this account, blaming wrongdoers expresses and enforces the social boundaries and rules of community after the wrongdoer threatens the validity of shared values by violating them (Durkheim, 1893/1964; Kleinfeld, 2015). The social function of intuitive blame might help explain why people are sometimes willing to make sacrifices to punish wrongdoers even when they themselves are not individually victimized (Fehr & Fischbacher, 2004). The criminal legal system arguably embodies efforts to express and defend social
values through individual blame judgments. But in the aggregate, this effort can sometimes go awry, as exemplified by the intuitive blame patterns and associated moral framework that undergirds the contemporary American criminal justice system, which incarcerates people at a rate that dwarfs almost every other nation, and both reflects and perpetuates deep economic and social inequalities (Forman, 2017; Garland, 2001; Kohler-Hausmann, 2019; McLeod, 2015; Roberts, 2003; Tonry, 2014). To the extent that intuitive blame expresses intuitions about the need to sort “bad” members of society from “good” members (Durkheim, 1893/1964; Vidmar, 2001), the legal instantiation of these intuitions inevitably devalues members of subordinated racial, ethnic, and economic groups, encouraging and rationalizing punitive policing, mass incarceration, and racial stratification (Armour, 2020; Tyler & Boeckmann, 1997; Yankah, 2003). This chapter will examine the mutual influence of criminal blame and punishment on the one hand, and intuitive blame and punishment on the other. These two systems do not, of course, operate independently from one another, and folk conceptions of intent (Kneer & Bourgeois-Gironde, 2017), causation (Greene & Darley, 1998; Spellman, 1997), and reasonableness (Tobia, 2018, 2022) play a special role in criminal law. At the same time, the criminal law – even within the Anglo-American legal tradition – does not speak with a single voice with respect to the elements of criminal blame. As we will see, the unwritten body of criminal law derived from English courts and traditionally used by US courts is less precise and often more explicitly tied to moral intuitions than the modern effort to clarify and codify criminal law doctrine as expressed in the (American) Model Penal Code. Section 21.1.1 examines the philosophical underpinnings of criminal law – state monopoly on force justified by individual violations of rules under conditions of free choice. This framework of autonomous choice is supplemented by perceptions of legitimacy, fairness, and social solidarity. Section 21.1.2 explores the legal notion of actus reus, in the context of the legal standard for sufficiency of conduct in the absence of a completed result, and compares the legal standard with intuitive blame for incomplete conduct. Section 21.1.3 turns to the role of mental state in intuitive blame, and implications for legal standards. Previous work has focused on the mental state of intent (Malle & Nelson, 2003), arguing that folk notions of intent are largely consistent and systematic, even though legal standards for intent do not always recognize or mirror the folk framework. But much harmful conduct in social life is produced by actors who do not purposely cause harm but rather consciously engage in risk taking. This section concludes by examining legal standards for recklessness, and how intuitive blame in situations of less than intentional conduct relies on heuristics both to help inform mental state and possibly to inform blame more directly. Section 21.2 explores the possibility that a fundamental human motivation to punish those with bad character sometimes influences perceptions of legal questions like consciousness of risk. Here we explore the debate regarding the role of moral character in intuitive blame in situations of conscious risk taking.

Section 21.3 turns to the legal rules governing the standards and purposes for considering information that bears directly on moral character during a criminal trial. We will see that although intuitions about the role of moral character in legal blame have produced rules restricting the use of prior misdeeds, these rules ultimately rest on political and moral judgments rather than psychological insights. The chapter concludes in Section 21.4 by briefly exploring some remaining questions of criminal law and intuitive blame, such as the role of cultural commitments on motivations to impose legal blame.

21.1 Criminal and Intuitive Blame

21.1.1 Criminal Blame and the Myth of Free Choice

Enlightenment and post-Enlightenment legal theorists did not initially focus on the social context of criminal law. Philosophy and economics dominated earlier theorizing about law and punishment, and the focus was on the legitimacy of permitting the state to use force against the citizenry, depriving individuals of liberty in the process. Deprivation of liberty was therefore reserved for seriously harmful conduct deemed worthy of societal moral condemnation; conduct that does not rise to the level of societal moral condemnation is dealt with by the civil side of the legal system, outside of criminal law. This definition separates crime from other conduct deemed too trivial to rise to the level of a criminal offense (for example, negligently breaking a window – possibly a civil offense but not a criminal one – or rudely slamming the door in someone’s face).

The state’s power to deprive liberty and attach stigma is justified traditionally by a constellation of principles referred to by the term of art “legality.” Under these principles, the state is justified in imposing criminal liability and punishment when it gives fair notice as to which conduct is prohibited. These conditions enable each individual to make a choice at any given point in time about whether to comply with the law or not (Duff, 1993). So long as the rules are made public, are sufficiently specific, are announced in advance, and are promulgated preferably by a democratically elected branch such as a legislature, each person has sufficient opportunity to freely choose to comply or to face the consequences of not complying. The background assumptions of this model center on rational individuals exercising their free will after carefully weighing the risks and benefits of engaging in unlawful conduct.

There are, to be sure, various ways in which this Enlightenment notion of rational actors making free choices is largely fictional. Law governs a complex array of human activity, and law does not generally influence individual behavior in a vacuum, as earlier philosophers generally assumed. Instead, group identity, social norms, and various social motivations interact with law to provide motivations to comply (Nadler, 2017). The view of law as a coercive tool to shape behavior implied that the function of legal blame is to demarcate instances in which individuals had engaged in
prohibited conduct. In the domain of criminal punishment, much emphasis was placed on the extent and circumstances under which punishment effectively deters, and in many domains, deterrence indeed plays a critical role in keeping undesirable behavior in check. For example, we rely on the threat of legal sanctions to prevent offenses like theft, even though moral values contribute as well. Most people would desist from cheating or stealing where doing so would clearly and directly harm another; at the same time many people are tempted and at times do cheat or steal when they are able to tell themselves a story that erases the victim or the harm (Feldman, 2018; Nadler, 2020). Specifying the circumstances under which the threat of criminal punishment is an effective deterrent is an important area of law and economics, and while there is still much yet to understand, there is a large body of well-established knowledge in this field (Becker, 1968; Posner, 1985; Shavell, 1985). At the same time, criminal law functions through means other than threatened sanctions. When the criminal law is perceived as legitimate, it produces a sense of obligation to obey (Papachristos et al., 2012; Tyler & Jackson, 2014); conversely, when particular decisions or laws are perceived as unjust, compliance in everyday life can decrease as a result (Mullen & Nadler, 2008; Nadler, 2005). For example, people – including violent offenders – who view police and prosecutors as being honest and even-handed are less likely to commit crime in the future (Papachristos et al., 2012). And people who learn of an unjust legal result (such as an unjust jury verdict) are more likely to flout the law in their everyday life (such as by stealing a small item when no one is looking) (Mullen & Nadler, 2008; Nadler, 2005). Qualities of the law itself, including the extent to which it is perceived as furthering justice or reflecting community values, influence the extent to which people feel generally bound by law. One ambition of criminal law in particular is to reflect the shared moral culture that forms the basis for social coordination and solidarity (Durkheim, 1893/1964; Kleinfeld, 2015).1 By examining psychological processes of blame for wrongdoing, we might gain insight into the development of the shared moral culture upon which criminal law is based. Before doing so, we review the basic structure of criminal blame.

21.1.2 Actus Reus, Legal Attempt, and Blame

In Anglo-American legal systems, criminal blame depends on a few distinct components: actus reus, or guilty act; mens rea, or guilty mind; and for offenses

1 Of course, many criminal offenses do not reflect shared moral culture. For example, in most places in the United States, possessing marijuana even for personal and/or medical use can result in incarceration, and sometimes for a long period – even for life without the possibility of parole (www.nytimes.com/2016/04/14/opinion/outrageous-sentences-for-marijuana.html). Choices about what is a crime and what is not are made by politicians and within the economic, social, and racial systems in which politicians exist (Karakatsanis, 2019). The shared moral culture to which I refer here encompasses criminal offenses such as murder, rape, and arson, about which there is little dispute that the state has the coercive power to address them.

A basic tenet of criminal law theory is that we do not impose blame solely on the basis of bad thoughts in the absence of conduct. For example, suppose Jane conceives a plan to rob a bank. She tells a friend about the plan but does not take any action to carry it out. Jane is not criminally blameworthy because she took no action.

The criminal law declines to punish solely for bad thoughts for a few reasons. First, bad thoughts are ubiquitous, in the sense that each of us has bad thoughts sometimes. Even assuming we could accurately detect instances of thinking about wrongdoing, imposing blame according to such a low threshold would brand most, if not all, members of society criminals. The second reason that criminal law generally2 refrains from blame in the absence of conduct is that it is difficult for the state to accurately detect and assess bad thoughts. Even when a person communicates their thinking about wrongdoing in a way that reveals it to others (through, for example, a diary or a conversation), it can be difficult to know whether the actor was serious or just blowing off steam, joking, or fantasizing. Finally, even if the actor is serious about engaging in prohibited conduct at the time she has the bad thought, the law recognizes that sometimes bad thoughts are fleeting, and we come to our senses before acting. The criminal law encourages actors to move past bad thoughts by not blaming until and unless the actor engages in some conduct toward the commission of the wrongful act. Imposing blame prior to conduct disincentivizes rethinking and desisting.

2 There are occasions when criminal law imposes blame for omission to carry out a legal duty to act in specifically defined circumstances, such as a parent failing to provide food or medical care to a minor child. The general rule, more accurately stated, is that the criminal law does not impose liability for omissions in the absence of a legal duty to act imposed by criminal law.

Criminal blame is imposed when a person goes beyond the stage of merely thinking a bad thought. One policy question that criminal law must resolve is how far beyond thought, and toward completed conduct, a person must go to be criminally liable for attempting to break the law. The law of criminal attempt imposes blame when a person intends and initiates an offense but does not successfully complete it. Criminal attempt belongs to a group of doctrines (along with conspiracy, solicitation, and the like) that imposes liability before there is necessarily any concrete harm. The theory is that these acts are blameworthy because they impose social harm. We do not want to oblige law enforcement to wait until there is concrete harm before they can intervene to stop a wrongful act, and we want to be able to deal with dangerous persons before they do real damage. We also do not want to reward moral luck – for example, an assassin who fires but misses his target has done no physical harm but should not be absolved based on chance factors beyond his control.

There are two approaches to pinpointing the juncture beyond which a person's thoughts and preliminary actions constitute a criminal offense. The more traditional approach is the "proximity" test. The proximity test asks whether the person's conduct constitutes "mere preparation" – which is not considered sufficient to constitute criminal conduct – or whether instead the conduct came sufficiently close to the completed offense to be blameworthy.
Note that the proximity test focuses on how close the person came to committing a completed crime. Under the proximity test, a would-be arsonist who makes a mental plan to burn a building and is caught with a can of gasoline and combustible materials in his garage might not be criminally liable because these acts were not in close proximity to the building and the fire – rather, this conduct is arguably mere preparation. But if the person is caught as he is lighting the match while standing over poured gasoline at the site of the intended arson, he would be liable for attempted arson.

The second approach to deciding how much conduct is sufficient for an attempt crime examines the question from the opposite side and asks not how close the actor's conduct came to the completed act, but rather how far the actor went beyond mere thought. This approach is called the substantial step test and asks whether the actor's conduct constitutes a substantial step in a course of conduct planned to culminate in a crime. Once a person has formed an intent to commit the crime and has engaged in substantial conduct toward that end, they are liable for attempting to commit the crime they intended. Under the substantial step test, liability for attempted criminal conduct attaches at an earlier time, and conduct that would not give rise to liability under the more traditional proximity test could give rise to liability under the substantial step test. In the arson example, the person caught with a can of gasoline and combustible materials in his garage might well be liable for the crime of attempted arson under the substantial step test (but not under the proximity test). One advantage of the substantial step test is that it permits law enforcement to intervene at an earlier stage, when the proximity to danger is more attenuated. One disadvantage is that there is a greater chance that an innocent actor's conduct will be misconstrued. For both the proximity test and the substantial step test, the risk of blaming an innocent person for attempt is mitigated by a separate requirement of proof of intent to commit the criminal offense in question.

There has been some empirical investigation of intuitive blame with respect to incomplete wrongdoing, both outside and within the legal context. Outside of the legal context, observers do blame an individual who is merely thinking about engaging in a harmful act, but they blame more when the individual intends to carry out the act, and they blame the most when the act is in fact carried out (Guglielmo & Malle, 2019). Within the legal context, intuitive blame seems to track the proximity doctrine of criminal liability for attempt more than the substantial step doctrine. One study presented participants with a story about a locksmith who decides to steal rare coins from a safe in a shop. Participants chose the point of conduct in the storyline where blame should attach (locksmith enters the shop to find the safe; locksmith tries to crack the safe; and so forth) (Robinson & Darley, 1995). Most participants assigned blame only after the locksmith reached the point of dangerous proximity to harm – when he began cracking the safe. A vast majority thought that "casing" the shop to look for the safe is not sufficient for blame and punishment.
This finding aligns with the more traditional proximity rule for assigning blame for criminal attempt rather than with the more modern substantial step test. A follow-up study testing various robbery and murder cases yielded similar results: laypeople tend to wait until the wrongdoer is at the point of dangerous proximity before they attach blame (Darley et al., 1996).

The studies just discussed examined the lay notion of blame for attempt, but a related question is how psychological processes of decision makers interact with existing legal standards. Mock juror studies that manipulated the legal standard for attempt ("engaged in a substantial step . . . strongly corroborative of intent" versus "conduct that came dangerously close or very near to committing. . .") found that in ambiguous cases the proximity test resulted in a greater likelihood of blaming than the substantial step test (Sood, 2019, Study 1). And a study that examined the role of anti-Muslim bias in judging attempt liability found that mock jurors blamed a Muslim defendant much more harshly (compared with Christian or control) when deciding attempt liability under the proximity test, but not under the substantial step test (Sood, 2019, Study 3a). For the Muslim defendant only, the proximity test language gave rise to negative inferences not only about ultimate judgments of legal guilt but also about intent to commit the crime. Mere exposure to the language of the proximity test caused participants who judged the Muslim defendant (compared to those who judged the Christian defendant) to indicate more hostile attitudes toward Islam; no such difference emerged for participants exposed to the language of the substantial step test. These findings suggest that stereotypes associating the crime of terrorism with the Muslim identity of the defendant were primed by the language of the proximity test ("dangerously close") but not by the substantial step test.3

In addition to sufficiency of conduct, a critical question for criminal blame is mental state, to which we turn next.

3 The author of the study explained the pattern in the following way: Notably, participants who applied the proximity test in that study were inclined to feel more negatively toward the defendant than those who applied the substantial step test, regardless of the defendant's implied religion. This finding, although only marginally significant, provides some indication that the proximity test may have primed a sense of peril that exacerbated the threat already associated with terrorism. Adding a seemingly Muslim defendant into the fact pattern likely further intensified that feeling of peril, leading to the observed biases in legal outcomes. Threat generally tends to trigger negative reactions toward political, ethnic, or religious outgroups. Moreover, priming can activate not only concepts like "nearness" and "peril" but also stereotypes associating certain social groups with these concepts . . . Muslims have increasingly come to be stereotyped in public and political discourse as "terrorists": foreign, disloyal, and imminently threatening. (Sood, 2019, pp. 646–647)

21.1.3 Mental States and Blame

21.1.3.1 Mens Rea Hierarchy and Blame

Criminal blame generally requires more than proof of harmful conduct; it also usually requires that the conduct be performed with a guilty mind, or mens rea.

The qualifier "usually" refers to the fact that there is a small subset of strict liability offenses in which criminal blame attaches to harmful conduct without regard to whether the actor had a guilty mind. Traditionally, strict liability offenses developed in response to matters where public welfare was put at risk. At the turn of the twentieth century, the industrial revolution brought about new dangers on a greater scale, including the sale of adulterated medicines and foods, improper handling of dangerous chemicals, and so forth. Strict liability shifts the risk of harm from dangerous activities onto those best able to prevent a mishap. Because the government can impose criminal liability without regard to guilty mind in strict liability cases, these offenses are graded at low levels (typically misdemeanors) and generally carry low-level punishment (usually fines and not imprisonment).

For most criminal offenses, blame is imposed according to the culpability level of the wrongdoer's guilty mind. Anglo-American criminal law is based on the theory that the state's use of force against citizens is justified only when a person has engaged in conduct that is the product of free choice. The old English common law held that a harmful act "without a vicious will is no crime at all." The notion of vicious will or mens rea has evolved over time. In the past, the inquiry was rather general, focusing on the wickedness of the person's disposition or the general wickedness of the act. In the mid-twentieth century, criminal law took a cognitive turn and defined a hierarchy of separate mental states. In general, intending harm is more culpable than expecting harm and consciously disregarding it, and expecting and disregarding harm is worse than not being aware of possible harm in circumstances where one should have been aware. For example, if Jane hits someone with her car with the purpose of killing him and he dies, she is liable for murder because her intention at the time was to cause death. If Jane hits and kills someone because she was driving extremely fast while in a hurry and was aware at the time that there was a substantial risk of fatally hitting someone, she is liable for the less serious offense of reckless manslaughter because she did foresee that someone might die and disregarded that risk. Intuitive judgments of blame and liability are consistent with the hierarchy of mental states in contemporary criminal law (Solan & Darley, 2001).

21.1.3.2 Recklessness: Conscious Risk Taking

This latter scenario exemplifies the mental state of criminal recklessness, and its legal treatment converges to a remarkable degree with intuitive blame for risk taking. To assess this comparison, we first examine how the criminal law treats unintended harm. In cases where dangerous conduct leads to unintended harmful consequences, contemporary criminal law focuses intensely on the actor's awareness of the risk of the prohibited harm. For example, reckless homicide occurs when a person is aware of a risk of causing death and engages anyway in dangerous conduct. Awareness of danger in some general sense is not legally sufficient – the actor must be consciously aware of the risk of whatever result is prohibited by the offense. Reckless homicide therefore requires proof of awareness of risk of death specifically; awareness of general danger is not sufficient.
To illustrate, imagine that Joe picks up his phone to check a text while he is driving and then, while distracted, collides with and kills a pedestrian. The crime of reckless manslaughter involves causing death (which Joe did when he hit the pedestrian) and consciously disregarding the risk of death when engaging in the risky conduct. A key question, therefore, is what, specifically, Joe was thinking when he picked up his phone and engaged in distracted driving. If he did not consciously disregard any risk – for example, if he was convinced that he was such an excellent driver that he could safely text and drive simultaneously – then he was not reckless as to causing death and not liable for reckless manslaughter (although he might be liable for some less serious criminal offense). Similarly, if Joe was aware of the risk that "something bad" could happen when he picked up his phone, but he never consciously thought about causing death, then he is not liable for reckless manslaughter because he did not consciously disregard the risk of causing death (again, he might be liable for a less serious offense). Joe is only liable for reckless manslaughter if he consciously disregarded the specific risk of death while engaging in the dangerous conduct.

This discussion sets aside the epistemic problem of how we can ever know the contents of Joe's thoughts, and the specific risk about which Joe was aware. In a criminal case, the prosecution has the burden of proving each element of the offense, including mens rea, and proving specific mental states can be a difficult task. Using the hypothetical involving Joe, we can imagine that the government might show that Joe is an adolescent who recently took a course in drivers' education and, as part of the course, watched a video about the dangers of distracted driving that depicted a teen who caused a fatal crash because she was distracted by looking at her phone. A jury could on this basis draw a reasonable inference that Joe was aware of a risk of causing death at the time of the crash. At the same time, there is often very little information available that could establish the specific mental state at issue here. The difference, for purposes of establishing recklessness, between being aware of a specific risk and being aware of a general danger is a critical one but notoriously difficult to detect. The jury is left with a great deal of discretion, then, to choose to believe that Joe had awareness of the specific risk, or not. As a result, it is important to consider the possibility that the inference about Joe's specific mental state could be influenced by other factors, including perceptual judgments based on observations of behavior (Ambady et al., 2000) and appearance (Thorndike, 1920). The relationship between our perception of an actor's behavior and appearance and our perception of that actor's mental state might be mediated by inferences about that actor's moral character, a topic to which we turn next.

21.2 The Role of Moral Character in Intuitive Blame and Legal Blame

21.2.1 Moral Character as Information about Mental State

At first glance, the criminal law's risk-awareness framework does not seem to comport particularly well with the way laypeople blame intuitively in situations of unintended harm. When we judge unintended harm in everyday life, we might be somewhat concerned with whether the actor was aware of a specifically enumerated risk (in the earlier example, risk of causing death), but we are often concerned with the reason the actor decided to engage in the risky behavior and, even further, what kind of person the actor is. In general, when we judge bad outcomes in everyday life, we care about the actor's motive and their character (Siegel et al., 2017; Uhlmann et al., 2015), in addition to judging the appropriateness of the conduct and awareness of the specific risk of harm that ended up occurring.

Character-based theories of moral blame hold that the motivation to evaluate others' character is a fundamental feature of human social cognition (Pizarro & Tannenbaum, 2012). For example, Alicke's culpable control model assumes that we constantly evaluate other people to determine which ones are trustworthy, that is, will promote rather than threaten our physical and psychological well-being (Abele & Wojciszke, 2014; Alicke, 2014). According to character-based theories of moral blame, we spontaneously evaluate wrongdoing based on features of character before having the opportunity to carefully consider the legally central features of mental state such as conscious disregard of risk. It is important to note that some theorists dispute that early spontaneous evaluations of character influence blame independently of diagnostic inferences about mental state, conduct, and causality (Malle et al., 2014).

Experimental work suggests that blame is indeed susceptible to evaluations of the actor's moral character. In one study, a person named Sam illicitly stored flammable oxygen that led to an accidental explosion and death of a bystander. Observers blamed Sam more severely if he stored the oxygen to cheat in athletics than if he stored it to start a business or to care for his sick daughter (Nadler & McDonnell, 2012). This is despite the fact that the legally relevant mens rea did not vary. Sam's awareness that storing oxygen could be dangerous to human life would form the basis for recklessness, and there were no detectable differences in foreseeability of harm between Sam the cheater and Sam the caregiver. Similarly, in another study, participants blamed Frank more severely for causing a deadly fire when he was storing chemicals for a meth lab compared to when he was storing the same chemicals for a flower greenhouse (Nadler & McDonnell, 2012). And after John caused a traffic accident, he was blamed more if he was rushing home to hide drugs than if he was rushing home to hide a present (Alicke, 1992). Thus, we blame more severely for resulting harm when an actor undertakes risky conduct for questionable reasons than for laudable reasons.

Interestingly, the law of criminal recklessness indeed reflects intuitive blame's sensitivity to reasons for acting. The most influential source for contemporary American criminal code definitions is the Model Penal Code, which defines recklessness as the disregard of a substantial and unjustifiable risk of causing a prohibited result. The substantiality requirement exists to exclude criminal blame in cases of risks that are remote or unlikely. The justifiability exemption exists to permit conduct such as a risky surgery designed to save a life. Both terms – substantial and unjustifiable – raise questions about which risks are substantial and which are unjustifiable. To guide decision makers, the Model Penal Code definition of recklessness includes the following explanatory language: "the risk must be of such a nature and degree that, considering the nature and purpose of the actor's conduct and the circumstances known to him, its disregard involves a gross deviation from the standard of conduct that a law-abiding person would observe in the actor's situation."

This language helps make sense of some of the experimental results just discussed. Frank was blamed more severely for causing a deadly fire when he was storing chemicals for a meth lab than for a greenhouse. Was the risk posed by storing the chemicals substantial and unjustifiable? To answer that question, we consider the nature and purpose of Frank's conduct. In the meth lab story, the purpose of the conduct of storing chemicals was to engage in later conduct that is both dangerous and criminal. Frank's disregard of the risk therefore involves a gross deviation from the standard of conduct that a law-abiding person would observe. By contrast, Frank the gardener's decision to store chemicals has a nature and purpose that is more benign and is arguably consistent with the standard of conduct that a law-abiding person would observe. The difference in blame judgments that emerged in the experiment is accounted for by the legal definition of recklessness. The same parallel between legal and intuitive blame can be seen in Sam's decision to store oxygen for cheating or for his sick daughter, and John's speeding home to hide drugs or a present. In sum, the legal definition of criminal recklessness is constructed in a way that reflects the tendency to intuitively blame according to the nature and purpose of the conduct that led to the harmful result.

The "nature and purpose" of the conduct standard in criminal law recklessness arguably is more concerned with risk regulation than it is with judging moral character. A fundamental truth of contemporary social life is that we agree to live with a variety of risks, and some of these are nonnegligible. Riding in a vehicle, walking, or biking on a public way are illustrative – traffic deaths in the United States alone number over 30,000 every year. We collectively accept this risk in exchange for the social benefit derived. Evaluating criminal recklessness requires consideration of the nature and purpose of the actor's conduct to decide whether the risk consciously disregarded by the actor provided sufficient social benefit to exempt the actor from this particular level of blame. The recklessness judgment, arguably, is a judgment not of the actor's character but of the circumstances of risk taking. Under this interpretation, Sam, Frank, and John in the studies referenced earlier were blamed more severely not necessarily because their moral character was lacking but rather because their risky conduct was insufficiently justified by socially beneficial goals (Guglielmo, 2015).

It is possible, then, that in the studies discussed earlier the spontaneous evaluation of wrongdoing was not focused solely, or even primarily, on moral character, because the specific factor that varied was not moral character (or at least not solely moral character) but rather the actor's motive for engaging in the hazardous conduct. But in other studies, the circumstances of the risky conduct indicative of motive were held constant while the moral character of the actor was varied, and the predicted relationship between bad character and more severe blame again emerged. For example, Sara's unruly dogs escaped her yard while she slept and mauled a child to death. Sara was blamed more severely for the death if she ignored her family, ate junk food, and smoked than if she was fit, healthy, and volunteered for charities (Nadler & McDonnell, 2012). There was no effect observed of moral character on perceptions of foreseeability of death, so the relevant conscious awareness of risk was apparently unaffected by moral character, and thus there are no apparent differences in the mens rea of recklessness that could explain differences in blame judgments apart from moral character. Similarly, Nathan was a young man skiing out of control for fun who collided with another skier, causing their death. Nathan was blamed more for the death if he was an unreliable worker and rarely helped his family business than if he was a model employee and helped his family business (Nadler, 2012). These vignettes arguably provide stronger, more direct evidence for the hypothesis that intuitive blame is sometimes character based (Pizarro & Tannenbaum, 2012).

According to a character-based account of blame, we consider character information partly because we rarely have precise information about the actor's mental state, especially in cases of disregard of risk. Knowing something about the actor's previous behavior – reflected in traits like fairness, kindness, trustworthiness, and integrity – helps us assess what they were doing and why at the time the harm occurred. The character-based account of blame helps make sense of more severe blame in certain cases of lesser direct harm. In one study, a company manager cut the vacation days either of only the 20 percent of employees who were African American, because he was bigoted, or of 100 percent of employees (20 percent of whom were African American), because he was misanthropic. Compared to the misanthrope, the bigoted manager who made cuts based on employee race was blamed more, his conduct was perceived as more diagnostic of his character, and diagnosticity judgments were correlated with blameworthiness judgments (Pizarro & Tannenbaum, 2012).

If determining the moral character of others is a fundamental human motivation, then blaming might be as much about the question of "is this person good or bad?" as it is about "is this act right or wrong?" (Uhlmann et al., 2015). To the extent that these two questions are distinct components of blame, moral evaluations of act and character might sometimes diverge. Supporting this notion, one study found that although animal cruelty is viewed as less immoral
than similar violence toward humans, the former can be more indicative of perceived moral character. Participants read about a man who found out his girlfriend had been unfaithful and either reacted violently toward his girlfriend or reacted violently toward his girlfriend’s cat. Participants judged the cat beater’s actions to be less wrong than the woman beater’s actions, but the cat beater was perceived as having worse moral character than the woman beater (Tannenbaum et al., 2011), suggesting that the badness of the act and the badness of the person are separate judgments in the process of blame.

21.2.2 Intuitive Blame, Character, and Mental State

To what extent do processes of intuitive blame inform and influence formal legal processes of blame? If there is a fundamental human motivation to punish those with bad character, then perceptions of character might influence blame directly, as suggested earlier; in addition, perceptions of character might influence judgments of important elements of legal proof such as an actor's mental state and causation, which in turn influence blame judgments (Chapter 15, this volume). Recall John, the driver who caused a car accident when rushing home. Participants perceived John to be not only more blameworthy but also more of the cause of the accident when he was going to hide cocaine than when he was going to hide a present (Alicke, 1992). And participants judged Nathan to be not only more blameworthy but also to have acted more intentionally in killing the other skier when he was an unreliable employee and unhelpful son than when he was the opposite (Nadler, 2012). Participants judged Sara not only to be more blameworthy but also to have acted more intentionally with respect to her dogs' mauling of the child when she was asocial, unfit, and ate junk food than when she was the opposite (Nadler & McDonnell, 2012). The actor's conduct was identical, and the physical chain of events that led to the harm was identical; the only thing that varied was the actor's character or reason for acting. In these studies, bad character led perceivers to infer more intentionality and more causality than good character.

This brings us back to criminal law. The law imposes more severe blame when mens rea is more culpable, all things being equal. Some of the studies discussed earlier suggest that the moral character of the actor, apart from that actor's motive or reason for acting, plays an important role in inferences about mens rea (such as the awareness of risk required for recklessness) and overall blame. Compared to a virtuous person, we blame a morally flawed person more harshly, and we bolster these harsh blame judgments with increased perceptions of the actor's causal role and intent to cause harm. Because it is often difficult to glean another person's mental state with the precision that criminal law demands, the process of inferring an actor's mental state under conditions of uncertainty might be prone to the influence of character information.

The implications for legal decision making are notable. Recall that the threshold requirement for criminal recklessness is conscious disregard of a specific risk – not risk of harm in general but rather the risk of the result prohibited by the offense.
For example, if we think that a distracted driver might have been aware of the risk of causing death, but we are not sure, then learning that the driver is a person of poor moral character might be enough to push us toward inferring awareness of the prohibited risk, resulting in a more severe blame judgment. Conversely, knowing the driver is an otherwise virtuous person might pull us in the other direction, toward less severe blame. In this way, moral character might serve as a kind of proxy for mental state, so that a person with a bad character is blamed as if he were reckless, and a person with a good character is blamed as if he were not reckless.

Experimental results from two studies support this possibility. In the study discussed earlier involving Nathan the skier, Nathan's awareness of risk was manipulated as one independent variable (aware or unaware), and his character traits were manipulated as another (unreliable employee, unhelpful son, or the opposite) (Nadler, 2012). Bad Nathan, who was unaware of the risk (and so, as a matter of law, not reckless), was blamed at about the same level of severity as Good Nathan, who was aware of the risk. That is, having bad character traits served as a kind of substitute for a reckless mental state, even when the actor was explicitly described as being unaware of the risk. The same pattern of results emerged for Bad Sara, whose dogs mauled and killed a child (Nadler & McDonnell, 2012). At the same time, it is worth noting that these results are also consistent with the inverse hypothesis that good character mitigates blame. In the end, it is likely that both processes are possible, with good character sometimes mitigating blame and bad character sometimes exacerbating blame. This possibility is suggested by the data in the earlier vignette about Sam, who stored oxygen that exploded (Nadler & McDonnell, 2012). This experiment included a control group in which Sam's motive (and thus inferences about character) was described as neutral. Compared to Neutral Sam, participants blamed Bad Sam more and Good Sam less. Of course, this experiment manipulated motive (reason for storing oxygen) rather than character directly, so more investigation is needed to explore the mitigating and exacerbating influence of character on blame.

The law of recklessness, on the other hand, assumes that decision makers will make a threshold judgment about whether the actor was aware of the risk of causing the prohibited result. Only after the actor is determined to have consciously disregarded the risk of prohibited harm can decision makers move to the next question of deciding whether the risk was substantial and unjustifiable. During the substantiality and unjustifiability determination, observers judging legal blame consider the nature and purpose of the conduct and whether the actor's conduct deviated from that of a law-abiding person. But this all assumes an initial determination that the actor was aware of the risk. In the real world, awareness of risk, like other mental states, is rarely clear to observers. The studies discussed suggest that moral character, as well as reasons for acting, sometimes informs the threshold judgment of risk consciousness, contrary to the requirements of the legal standard.


At the same time, the law does anticipate decision makers' tendency to consider and even overweight information about character and propensity for evil. As discussed in greater detail in Section 21.3, the law of evidence prevents the jury from learning about a criminal defendant's prior crimes or bad acts when those previous acts are presented to show that the defendant has a propensity to engage in wrongdoing. Empirical evidence supports the concern that the jury will use prior crimes as an additional reason to blame the defendant for the current accusation (Eisenberg & Hans, 2009; Greene & Dodge, 1995; Lloyd-Bostock, 2000; Wissler & Saks, 1985). But the findings on moral character's influence on blame reach far beyond the traditional concerns that motivate excluding evidence of prior bad acts or similar crimes. The influence of prior crimes and acts on legal blame diminishes substantially when the prior crime was minor or dissimilar to the current offense. But the moral character studies discussed earlier show that when we size someone up as a bad person – even through relatively subtle cues like lack of generosity and unreliability – we perceive their unintentional acts as more causal, their mental states as more intentional, and their blameworthiness as greater than that of a similarly situated good person. The subtlety of the manipulated traits (irresponsible worker, smoking, eating junk food) demonstrates that we perceive badness not only in people who have engaged in serious past wrongdoing but also in people whose common, everyday conduct indicates a lack of concern for group well-being.

Although the character-focused nature of blame for unintended harms is in tension with contemporary definitions of recklessness that follow the Model Penal Code, such character-based blaming processes are more consistent with common law notions of mens rea that fell out of favor over the course of the twentieth century. Before the Model Penal Code prompted many American state legislatures to update their criminal codes, criminal offenses were defined holistically rather than as a collection of component parts. English common law used terms like "malicious," "wanton," or "depraved" when referring to mens rea. The language encouraged character assessments, and courts interpreted mens rea as indicative not only of evil intent but of evil character (Pillsbury, 2000). For example, a judge writing in the late 1800s – in seeking to distinguish evil passions from legitimate excuses – declared that evil passions "are the outpourings of a wicked nature, not of an unsound or disabled mind" (Pillsbury, 2000, p. 84). Until the modernization brought about by the Model Penal Code and associated theorizing in criminal law and criminology, the standard for an unintentional homicide that qualified for elevation from manslaughter to murder was defined as killing that demonstrates "wickedness of disposition, hardness of heart, cruelty, recklessness of consequences, and a mind regardless of social duty" (Pillsbury, 2000, p. 162). As with intuitive blame, the focus under common law crime standards was on moral traits and character more centrally than on state of mind.

Does the contemporary cognitive turn in criminal law away from a central focus on character and toward assessment of the actor's cognition matter?

Arguably, in some cases the focus of the standard – character versus cognition – makes a difference. Consider, for example, the real-life case of Marjorie Knoller and Robert Noel, who kept two dogs, weighing 150 and 130 pounds respectively, in their San Francisco apartment. The dogs repeatedly bit neighbors and attacked other dogs, and Knoller admitted that she did not have the strength to control them. The dogs escaped and mauled to death a neighbor down the hall as she was entering her own apartment. The jury found Knoller guilty of murder because she caused a death while acting with "an abandoned and malignant heart" – the old character-based standard. At the time, the trial judge clarified that this standard required proof that Knoller acted with an awareness of endangering human life – the more contemporary cognitive standard. But in an unusual move, the trial judge set aside the jury's verdict of guilty on the grounds that Knoller was not in fact aware of endangering a human life, because Knoller – a lawyer herself – claimed that she did not anticipate that her "gentle and loving and affectionate" dogs would ever kill someone. The judge decided that no reasonable jury could have found beyond a reasonable doubt that Knoller was aware of the risk that her dogs would cause death. The trial judge used the contemporary cognitive focus – awareness of risk of death – to exonerate Knoller.

The distinction between the older character-based standard of acting with an abandoned and malignant heart on the one hand, and the contemporary cognitive standard of awareness of a substantial risk of death on the other, becomes even more stark when considering what we know – and what the jury also knew – about Knoller's moral character. The veterinarian who examined the dogs at the time Knoller adopted them wrote her a letter warning her that they were liable to attack and maim. The handler who fostered the dogs prior to turning them over to Knoller conveyed a similar warning. During each of many incidents of aggressive behavior by the dogs toward neighbors, Knoller and Noel not only refused to apologize but treated the victims with disdain and hostility. For example, after one neighbor complained that one of the dogs bit his rear end, Noel replied, "um, interesting." There was some suggestion that the couple acquired the dogs as part of a breeding operation for fighting dogs – their correspondence with their business partners described one of the dogs as "Wardog" and "Bringer of Death: Ruin: Destruction." Also, Knoller's and Noel's business partners were inmates in a state penitentiary and described by the court as members of the Aryan Brotherhood prison gang.

Although the trial judge had vacated the reckless murder conviction based on the cognitive awareness-of-risk standard, the case did not end there. After an appellate court reversed, a new court reinstated Knoller's conviction for reckless murder, which comported with the jury's initial decision, and also arguably with a character-based assessment of blame. A character-based blame assessment would focus on the older common law standard – whether she acted with an abandoned and malignant heart. Considering Knoller's callous, contemptuous behavior toward members of her community, as well as her partnership with white supremacists apparently in furtherance of breeding fighting dogs,
blame for the elevated offense of reckless murder (as opposed to a lesser offense of manslaughter) appears easier to justify. When a person who caused harm is charged with a criminal offense, the severity of blame depends on inferences about intentionality, awareness of risk, and reasonableness, among other things, and prosecutors, judges, and jurors are free under character-based standards for blame to infer wicked subjective culpability when judging members of minority racial, ethnic, and religious groups. Empirical work on inferring mental states of wrongdoers suggests that the race of the actor can influence these inferences. For example, observers who watched two people in a heated discussion followed by one person ambiguously “bumping” his body into the other were more likely to describe the bump as a “violent shove” and to make dispositional attributions if the actor was Black rather than White (Duncan, 1976). When the protagonist was White, observers were more likely to excuse the conduct as “horsing around” and to make situational attributions to explain it. Further, in the criminal justice context, Black boys are perceived as older, less innocent, more agentic, and more responsible for their actions than White boys (Goff et al., 2014). In this sense, race itself can stand in for character: “In cases involving Black defendants, their pigmentation and identity performance are proof of their bad character or criminal propensity” (Armour, 2020, p. 166). If true, the older common law mental state terms might serve to exacerbate racial and other biases. More work is needed to explore the extent to which more general common law terms are susceptible to biased decision making. For example, does asking jurors whether a Black defendant or a Muslim defendant had a “vicious will” or “abandoned and malignant heart” at the time he caused death provide a greater role for biased decision making than asking them whether he consciously disregarded a risk or acted with extreme indifference to human life? These are empirical questions yet to be explored.

21.3 Evidence Law and Character

Given that intuitive blame is sometimes informed by perceptions of character, to what extent is the influence of character on blame inconsistent with the values of the legal system? Certain legal rules of procedure reflect a long-standing concern with the possibility that perceptions of a person's character will have an outsized influence on legal blame, especially when the person being judged is a defendant in a criminal proceeding. The body of rules most focused on these questions is the law of evidence, which governs questions about proof of facts during the trial. These rules apply to both civil and criminal proceedings, but much of the discussion about the role of character in blame judgments is focused on criminal proceedings. Criminal law involves imposing liability and punishment that signal serious social stigma, as well as the power to deprive individuals of freedom and even life. Criminal law is thus founded on the notion that individuals are held accountable for choosing to engage in
prohibited conduct, rather than for their past misdeeds or their bad character. To encourage decision makers to hew to this principle, the rules of evidence explicitly prohibit the use of past misdeeds in order to demonstrate that the person engaged in the conduct in question in a criminal proceeding. In fact, the rule prohibiting the use of past misdeeds is one of the most frequently used and cited rules of evidence (Imwinkelried et al., 2016), suggesting that the intuitive impulse to jump from learning of past misdeeds to inferring that the person engaged in the conduct in question is feared to be a strong one.

The rule prohibiting past misdeeds contains an important exception: Even though past (or sometimes concurrent or subsequent) misdeeds cannot be used to prove propensity to engage in the conduct in question, they can be considered for "non-propensity" purposes, specifically, "motive, opportunity, intent, preparation, plan, knowledge, identity, absence of mistake, or lack of accident" (Federal Rules of Evidence 404(b), hereinafter FRE). For example, in Smith's trial for unlawful gun possession, evidence that the police also found cocaine and a large amount of cash in his car might be admissible to show Smith's motive for possessing the gun, that is, to protect himself in his illegal drug dealings. Thus, although other misdeeds are not admissible to prove a defendant's bad character or propensity to commit the criminal act in question, sometimes other misdeeds may be admissible to prove that a defendant had a special reason to commit the crime in question.

Sometimes, however, the distinction between revealing a misdeed to prove propensity to commit the criminal act in question (the prohibited use) and revealing it to prove motive, opportunity, intent, and so forth (the permitted use) is unclear. Consider Jones, who is accused of being a passenger in a car involved in a high-speed police chase, and then evading police on foot, leaving behind a large quantity of cocaine in the car. Upon arrest two years later, Jones denies any involvement and claims that he was not the person in the car. The government then seeks to inform the jury that Jones was convicted of cocaine possession eight years earlier. The defense objects to this evidence on the grounds that it is being used to prove propensity to engage in the conduct in question, which is prohibited under FRE 404(b). The government argues that the prior drug possession is relevant to the defendant's "knowledge" and "intent" and therefore is permitted under the exception discussed earlier (Capra & Richter, 2018). This conflict over the proper use of past misdeeds in inferences about moral character and legal blame plays out frequently in courtrooms all over the United States.4

4 See, e.g., United States v. Smith (2015) (affirming admission of the defendant's eight-year-old conviction for possession of cocaine with intent to distribute to prove "knowledge" and "intent").

To illustrate how these exceptions (motive, opportunity, intent, preparation, plan, knowledge, identity, absence of mistake, or lack of accident) operate in practice, consider the following examples. To demonstrate motive in a murder case the government might show that the accused and victim had recently committed a robbery together and the accused killed the victim to prevent her
from confessing or testifying (Jones v. State, 2005). To prove opportunity in a child abuse case the government might show that the accused was unemployed and home with the victim (State v. McAbee, 1995). To prove intent of the accused (a police officer) to unlawfully possess narcotics (rather than possess them as part of an ongoing investigation) the government might show that the accused police officer previously accepted protection money from bootleggers (United States v. Benton, 1988). To prove a plan to steal narcotics, the government might show a prior failed pharmacy break-in by the accused (State v. Woodard, 2011). To prove preparation in a child sexual assault case, the government might show prior conduct of giving gifts and showing pornography to the child victim (State v. Heard, 2012). To prove that the accused had knowledge that heroin was present in her home, the government might show prior instances when the accused knew that drugs were present in her home (State v. Weldon, 1985). To prove identity in a robbery case, the government might show that the accused robbed another person a few weeks later using the very same weapon (State v. Garner, 1992). To show lack of accident or mistake in a murder case, the government might show that the accused shot a different person on a prior occasion (State v. Lloyd, 2001). In each of these examples, we can see that the prior misdeed is being offered to prove something other than propensity to commit the offense in question. Even in these examples, we can observe how the line between the exception and the prohibited propensity use can become blurred.

Unfortunately, in many cases the purpose for which the misdeed is being offered is less clear than the examples just discussed. Over the decades since FRE 404(b) was adopted, some courts arguably have gone astray in permitting the government to introduce prior misconduct for reasons that are labeled "intent," "motive," and so forth, but boil down to showing that the accused has a propensity to engage in the conduct in question (Capra & Richter, 2018). For example, in a case where the government accused a person named Geddes of sex trafficking, the court permitted the government to tell the jury that four years earlier, he assaulted and threatened to kill a girlfriend. The government claimed that the prior misdeed helped to prove that the defendant had the intent to coerce the victim into sexual acts. The argument was that intent to hurt and threaten his girlfriend four years ago makes it more likely that he intended to coerce the victim – but notice that this is simply another way of saying that the accused has a propensity to hurt and threaten women (Capra & Richter, 2018).

There are countless examples of cases in which courts have permitted the government to inform the jury about the accused's prior misdeeds on the grounds that such use falls under an exception of FRE 404(b), but in which an examination of the facts reveals that the prior misdeed merely shows a propensity to engage in the conduct in question. That many courts are carelessly analyzing questions under FRE 404(b) is underscored by the fact that in many cases, the accused did not dispute what the government sought ultimately to prove. In the earlier example of sex trafficking, Geddes did not contest whether he had intent to coerce another person into sex; instead, he claimed he never engaged in the conduct at all.
And in the earlier example of the person who fled from police, Jones' prior drug conviction was used to show that he had "knowledge" and "intent" to possess drugs on the occasion in question. But Jones did not dispute his mental state – instead he claimed that the person who led the police on a chase and ran away was not him, making intent and knowledge irrelevant. Some courts have pushed back against this extravagant use of the FRE 404(b) exceptions and have instead limited prosecutorial use of prior misdeeds to instances where the government clearly articulates a nonpropensity use for such evidence. There is now a split in the US federal courts' approach to interpreting exceptions to the propensity prohibition under FRE 404(b). In some areas of the United States, the federal courts have been permissive in allowing the government to claim that a prior misdeed is relevant for reasons other than propensity. But more recently there has been resistance from federal courts in three circuits representing approximately a quarter of the US population. In these federal circuits, courts have attempted to curb the expansive use of other misdeeds by imposing limits and requiring the government to articulate the relevance of the evidence aside from propensity (Capra & Richter, 2018).

This legal debate regarding the proper interpretation of the rules of evidence reflects anxieties about intuitive blame and the role of moral character in legal decision making. Courts have sought to prohibit propensity evidence for centuries. The first concern is that jurors might place too much weight on the other misdeed; judges intuitively sensed that people often attribute other individuals' bad acts to a bad disposition and make a further inference from bad disposition to guilt regarding the conduct in question. A second concern is that upon learning about the defendant's prior misdeed, jurors will infer that the defendant committed prior crimes that went undetected and unpunished, and will use the occasion of the present accusation to seek to punish for those prior crimes, or to use incapacitation to prevent future crimes. Third, the revelation of misdeeds other than the accusation in question might give rise to an inference about the defendant's bad character, and jurors might seek to punish for this character itself. These concerns are reflected in the axiom quoted earlier that criminal liability and punishment must be based only on what the defendant did rather than who the defendant is.

It is important to observe that the rules of evidence rest, at bottom, on policy decisions that reflect various moral and political aspirations of the US legal system. For example, evidence that the defendant engaged in similar conduct on a prior occasion is quite relevant from a logical perspective and, in the abstract, it would be reasonable to consider a prior bad act when we are deciding questions of blame and responsibility. That is, even after considering and correcting for our overweighting tendencies produced by the fundamental attribution error, prior misconduct can reasonably inform current blame judgments, and we engage in this type of reasoning in everyday life in typical examples ranging from a friend's cheating spouse to a child's hurtful taunting to a child sexual abuser's conduct. In all these examples, we do not hesitate
much about considering the actor’s prior similar conduct when blaming them. By contrast, the law does hesitate in allowing similar inferences in the courtroom, but not primarily because such inference can lead to a wrongful conviction of an innocent person. The primary reasons for prohibiting propensity evidence rest on values: When we delegate to the government the power to stigmatize, to deprive liberty, and to deprive life, we hold the government to standards of proof that are difficult by design. Just as we prohibit a finding of criminal liability in the presence of reasonable doubt (even when it is more likely than not that the defendant is guilty), we limit the extent to which inferences from bad moral character can inform judgments of criminal guilt, even when this means a person with a bad character who actually committed the offense in question is ultimately found not guilty because these limiting rules of evidence hampered the government’s ability to prove its case. In this sense, FRE 404(b) is a policy judgment based on certain moral and political values, and these values are certainly subject to debate. The main point for the discussion here is that the legal system anticipates that moral character will inform intuitive blame in ways that are inconsistent with values embodied in law, and as a result imposes limitations on the use of moral character inferences in an effort to reduce their improper influence.

21.4 Remaining Questions and Directions for Further Study

The aspects of blame discussed here only scratch the surface of the components and processes of judgments of blame in law and in life. The focus in this chapter is chiefly on intended but incomplete conduct, intended and unintended outcomes, and questions about awareness of risk. There are offenses that do not require proof of any mental state (strict liability offenses), and researchers have just begun to explore the extent to which intuitive blame converges with legal doctrine (Giffin & Lombrozo, 2016). Conversely, law sometimes blames for failing to act, although those situations are handled more commonly by the civil doctrines of tort and contract and are not subject to criminal punishment. But occasionally criminal law does impose a duty to act, and the extent to which psychological processes are consistent with criminal blame and punishment for omissions remains to be explored (Cushman et al., 2012; DeScioli et al., 2011).

In criminal law, there are mental states considered more culpable than recklessness, such as knowledge (e.g., knowingly causing death) or purpose (e.g., having the conscious intention to cause death). And there is negligence, which is strictly speaking not a mental state at all, but rather a normative judgment that an individual should have been aware of a risk. Negligence is more commonly utilized as a standard for judging civil rather than criminal harm, but there are a handful of criminal offenses that require only negligence in lieu of mens rea. There are some divergences between legal blame and intuitive blame with respect to the hierarchical structure of mens rea. For example, observers sometimes do not distinguish between knowing an outcome
will occur and disregarding a substantial risk that it will occur, a distinction that can elevate the seriousness of an offense (Ginther et al., 2014, 2018; Shen, 2011). At the same time, people seek information to support and later update their intuitive blame decision process in an orderly way (Guglielmo & Malle, 2017; Monroe & Malle, 2017) and when instructed, people seem to attribute blame in a manner congruent with the legal structure of criminal law mens rea (Ginther et al., 2018). Also not discussed in this chapter are a wide array of factors that serve to mitigate or even eliminate blame. In criminal law, these are categorized into justification (e.g., use of force in self-defense) and excuse (e.g., duress or insanity). In general, these ideas correspond well to intuitive blame modeled by Malle et al. (2014), in which observers first detect harm, causation, and mental state, and then consider whether blame is reduced or eliminated because of the agent's reasons for acting, or lack of obligation or capacity to do otherwise. But even with these defenses to blame attributions, perceivers' cultural commitments can influence perceptions of the actor's reasons, mental states, and even physical conduct itself. Cultural commitments to ideals of equality (versus hierarchy) and community (versus individualism) are linked to conflicting perceptions of the same evidence regarding whether, for example, protestors were blocking access to a building (Kahan et al., 2012), the degree of risk posed by a motorist fleeing police (Kahan et al., 2008), and whether a person consented to sexual conduct (Kahan, 2009). Similarly, conflicting perceptions of harm can be found in judgments as to whether an act should be criminal. For example, people who thought naked grocery shopping should be criminalized did not indicate that the conduct was harmful unless they were informed that harm was a prerequisite for making the conduct criminal (Sood & Darley, 2012), in which case they did find it to be harmful. Outcome-driven judgments can also be motivated by the need to ensure blame is attached to severe harm. In one study, severity of harm caused observers to perceive the existence of facts necessary to impose legal blame. Thus, observers perceived contraband to be more likely to be inevitably discovered (and thus admissible in court under the circumstances) when it was heroin targeted at teens than when it was marijuana sold for medical use, even though the circumstances of discovery were identical (Sood, 2015). These studies, which sit adjacent to psychological studies of blame both within and outside of legal decision making, illustrate both the breadth of opportunity for future empirical exploration and the extent to which criminal law implicitly relies on psychological processes of blame.

Acknowledgments

For funding, the author thanks the Nathaniel and Leah Nathanson Research Fund at Northwestern University Pritzker School of Law, and the American Bar Foundation.

References

Abele, A. E., & Wojciszke, B. (2014). Communal and agentic content in social cognition: A dual perspective model. In H. T. Reis & C. M. Judd (Eds.), Advances in experimental social psychology (Vol. 50, pp. 195–255). Elsevier.
Alicke, M. D. (1992). Culpable causation. Journal of Personality and Social Psychology, 63(3), 368–378.
Alicke, M. (2014). Evaluating blame hypotheses. Psychological Inquiry, 25(2), 187–192.
Ambady, N., Bernieri, F. J., & Richeson, J. A. (2000). Toward a histology of social behavior: Judgmental accuracy from thin slices of the behavioral stream. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 32, pp. 201–271). Academic Press.
Armour, J. D. (2020). N*gga theory: Race, language, unequal justice, and the law. Los Angeles Review of Books.
Becker, G. S. (1968). Crime and punishment: An economic approach. Journal of Political Economy, 76(2), 169–217.
Bilz, K. (2016). Testing the expressive theory of punishment. Journal of Empirical Legal Studies, 13(2), 358–392.
Capra, D. J., & Richter, L. L. (2018). Character assassination: Amending federal rule of evidence 404(b) to protect criminal defendants. Columbia Law Review, 118(3), 769–832.
Carlsmith, K. M., Darley, J. M., & Robinson, P. H. (2002). Why do we punish?: Deterrence and just deserts as motives for punishment. Journal of Personality and Social Psychology, 83(2), 284–299.
Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, 108(2), 353–380.
Cushman, F., Murray, D., Gordon-McKeon, S., Wharton, S., & Greene, J. D. (2012). Judgment before principle: Engagement of the frontoparietal control network in condemning harms of omission. Social Cognitive and Affective Neuroscience, 7(8), 888–895.
Darley, J. M., Sanderson, C. A., & LaMantia, P. S. (1996). Community standards for defining attempt: Inconsistencies with the Model Penal Code. American Behavioral Scientist, 39(4), 405–420.
DeScioli, P., Christner, J., & Kurzban, R. (2011). The omission strategy. Psychological Science, 22(4), 442–446.
Duff, R. A. (1993). Choice, character, and criminal liability. Law and Philosophy, 12(4), 345–383.
Duncan, B. L. (1976). Differential social perception and attribution of intergroup violence: Testing the lower limits of stereotyping of Blacks. Journal of Personality and Social Psychology, 34(4), 590–598.
Durkheim, É. (1964). The division of labor in society. The Free Press. (Original work published 1893)
Eisenberg, T., & Hans, V. (2009). Taking a stand on taking the stand: The effect of a prior criminal record on the decision to testify and on trial outcomes. Cornell Law Review, 94(6), 1353–1390.
Fehr, E., & Fischbacher, U. (2004). Third-party punishment and social norms. Evolution and Human Behavior, 25(2), 63–87.
Feldman, Y. (2018). The law of good people: Challenging states' ability to regulate human behavior. Cambridge University Press.
Forman, J. (2017). Locking up our own: Crime and punishment in Black America. Farrar, Straus and Giroux.
Garland, D. (2001). Mass imprisonment: Social causes and consequences. SAGE Publications, Ltd.
Giffin, C., & Lombrozo, T. (2016). Wrong or merely prohibited: Special treatment of strict liability in intuitive moral judgment. Law and Human Behavior, 40(6), 707–720.
Ginther, M. R., Shen, F. X., Bonnie, R. J., Hoffman, M. B., Jones, O. D., Marois, R., & Simons, K. W. (2014). The language of mens rea. Vanderbilt Law Review, 67(5), Article 2.
Ginther, M. R., Shen, F. X., Bonnie, R. J., Hoffman, M. B., Jones, O. D., & Simons, K. W. (2018). Decoding guilty minds: How jurors attribute knowledge and guilt. Vanderbilt Law Review, 71(1), 241–284.
Goff, P. A., Jackson, M. C., Di Leone, B. A. L., Culotta, C. M., & DiTomasso, N. A. (2014). The essence of innocence: Consequences of dehumanizing Black children. Journal of Personality and Social Psychology, 106(4), 526–545.
Greene, E. J., & Darley, J. M. (1998). Effects of necessary, sufficient, and indirect causation on judgments of criminal liability. Law and Human Behavior, 22(4), 429–451.
Greene, E., & Dodge, M. (1995). The influence of prior record evidence on juror decision making. Law and Human Behavior, 19(1), 67–78.
Guglielmo, S. (2015). Moral judgment as information processing: An integrative review. Frontiers in Psychology, 6, Article 1637.
Guglielmo, S., & Malle, B. F. (2017). Information-acquisition processes in moral judgments of blame. Personality and Social Psychology Bulletin, 43(7), 957–971.
Guglielmo, S., & Malle, B. F. (2019). Asymmetric morality: Blame is more differentiated and more extreme than praise. PLoS ONE, 14(3), Article e0213544.
Imwinkelried, E. J., Giannelli, P. C., Gilligan, F. A., Lederer, F. I., & Richter, L. (2016). Courtroom criminal evidence: Related procedures. LexisNexis.
Jones v. State, 913 So.2d 436 (2005).
Kahan, D. M. (2009). Culture, cognition, and consent: Who perceives what, and why, in acquaintance-rape cases. University of Pennsylvania Law Review, 158(3), 729–813.
Kahan, D. M., & Braman, D. (2008). Self-defensive cognition of self-defense. American Criminal Law Review, 45(1), 1–65.
Kahan, D. M., Hoffman, D. A., & Braman, D. (2008). Whose eyes are you going to believe: Scott v. Harris and the perils of cognitive illiberalism. Harvard Law Review, 122(3), 837–906.
Kahan, D. M., Hoffman, D. A., Braman, D., & Evans, D. (2012). "They saw a protest": Cognitive illiberalism and the speech-conduct distinction. Stanford Law Review, 64(4), 851–906.
Karakatsanis, A. (2019). Usual cruelty: The complicity of lawyers in the criminal injustice system. The New Press.
Kleinfeld, J. (2015). Reconstructivism: The place of criminal law in ethical life. Harvard Law Review, 129(6), 1485–1565.
Kneer, M., & Bourgeois-Gironde, S. (2017). Mens rea ascription, expertise and outcome effects: Professional judges surveyed. Cognition, 169, 139–146.
Kohler-Hausmann, I. (2019). Misdemeanorland: Criminal courts and social control in an age of broken windows policing. Princeton University Press.
Lloyd-Bostock, S. (2000). The effects on juries of hearing about the defendant's previous criminal record: A simulation study. Criminal Law Review, 734–755.
Malle, B. F., Guglielmo, S., & Monroe, A. E. (2014). A theory of blame. Psychological Inquiry, 25(2), 147–186.
Malle, B. F., & Nelson, S. E. (2003). Judging mens rea: The tension between folk concepts and legal concepts of intentionality. Behavioral Sciences & the Law, 21(5), 563–580.
McLeod, A. M. (2015). Prison abolition and grounded justice. UCLA Law Review, 62, 1156–1239.
Monroe, A. E., & Malle, B. F. (2017). Two paths to blame: Intentionality directs moral information processing along two distinct tracks. Journal of Experimental Psychology: General, 146(1), 123–133.
Mullen, E., & Nadler, J. (2008). Moral spillovers: The effect of moral violations on deviant behavior. Journal of Experimental Social Psychology, 44(5), 1239–1245.
Nadler, J. (2005). Flouting the law. Texas Law Review, 83, 1399–1441.
Nadler, J. (2012). Blaming as a social process: The influence of character and moral emotion on blame. Law & Contemporary Problems, 75(2), 1–31.
Nadler, J. (2017). Expressive law, social norms, and social groups. Law & Social Inquiry, 42(1), 60–75.
Nadler, J. (2020). Ordinary people and the rationalization of wrongdoing. Michigan Law Review, 118(6), 1205–1231.
Nadler, J., & McDonnell, M.-H. (2012). Moral character, motive, and the psychology of blame. Cornell Law Review, 97, 255–304.
Papachristos, A. V., Meares, T. L., & Fagan, J. (2012). Why do criminals obey the law – the influence of legitimacy and social networks on active gun offenders. Journal of Criminal Law and Criminology, 102(2), 397–440.
Pillsbury, S. H. (2000). Judging evil: Rethinking the law of murder and manslaughter. NYU Press.
Pizarro, D. A., & Tannenbaum, D. (2012). Bringing character back: How the motivation to evaluate character influences judgments of moral blame. In M. Mikulincer & P. R. Shaver (Eds.), The social psychology of morality: Exploring the causes of good and evil (pp. 91–108). American Psychological Association.
Posner, R. A. (1985). An economic theory of the criminal law. Columbia Law Review, 85(6), 1193–1231.
Roberts, D. E. (2003). The social and moral cost of mass incarceration in African American communities. Stanford Law Review, 56(5), 1271–1305.
Robinson, P. H., & Darley, J. M. (1995). Justice, liability, and blame: Community views and the criminal law. Westview Press.
Shavell, S. (1985). Criminal law and the optimal use of nonmonetary sanctions as a deterrent. Columbia Law Review, 85(6), 1232–1262.
Shaver, K. G. (1985). The attribution of blame: Causality, responsibility, and blameworthiness. Springer-Verlag.
Shen, F. X. (2011). How we still fail rape victims: Reflecting on responsibility and legal reform. Columbia Journal of Gender and Law, 22(1), 1–80.
Siegel, J. Z., Crockett, M. J., & Dolan, R. J. (2017). Inferences about moral character moderate the impact of consequences on blame and praise. Cognition, 167, 201–211.
Solan, L. M., & Darley, J. M. (2001). Causation, contribution, and legal liability: An empirical study. Law and Contemporary Problems, 64(4), 265–298.
Sood, A. M. (2015). Cognitive cleansing: Experimental psychology and the exclusionary rule. Georgetown Law Journal, 103(6), 1543–1608.
Sood, A. M. (2019). Attempted justice: Misunderstanding and bias in psychological constructions of criminal attempt. Stanford Law Review, 71(3), 593–686.
Sood, A. M., & Darley, J. M. (2012). The plasticity of harm in the service of criminalization goals. California Law Review, 100(5), 1313–1358.
Spellman, B. A. (1997). Crediting causality. Journal of Experimental Psychology: General, 126(4), 323–348.
State v. Garner, 331 N.C. 491, 509 (1992).
State v. Heard, 166 Wash. App. 1024 (2012).
State v. Lloyd, 354 N.C. 76, 90 (2001).
State v. McAbee, 120 N.C. App. 674, 680-681 (1995).
State v. Weldon, 314 N.C. 401, 404-407 (1985).
State v. Woodard, 210 N.C. App. 725, 728-729 (2011).
Tannenbaum, D., Uhlmann, E. L., & Diermeier, D. (2011). Moral signals, public outrage, and immaterial harms. Journal of Experimental Social Psychology, 47(6), 1249–1254.
Thorndike, E. L. (1920). A constant error in psychological ratings. Journal of Applied Psychology, 4(1), 25–29.
Tobia, K. (2018). How people judge what is reasonable. Alabama Law Review, 70, 293–359.
Tobia, K. (2022). Experimental jurisprudence. The University of Chicago Law Review, 89(3), 735–802.
Tonry, M. (2014). Remodeling American sentencing: A ten-step blueprint for moving past mass incarceration. Criminology & Public Policy, 13(4), 503–533.
Tyler, T. R., & Boeckmann, R. J. (1997). Three strikes and you are out, but why? The psychology of public support for punishing rule breakers. Law & Society Review, 31(2), 237–265.
Tyler, T. R., & Jackson, J. (2014). Popular legitimacy and the exercise of legal authority: Motivating compliance, cooperation, and engagement. Psychology, Public Policy, and Law, 20(1), 78–95.
Uhlmann, E. L., Pizarro, D. A., & Diermeier, D. (2015). A person-centered approach to moral judgment. Perspectives on Psychological Science, 10(1), 72–81.
United States v. Benton, 852 F.2d 1456, 1467-1468 (6th Cir. 1988).
United States v. Smith, 789 F.3d 923, 929–30 (8th Cir. 2015).
Vidmar, N. (2001). Retribution and revenge. In J. Sanders & V. L. Hamilton (Eds.), Handbook of justice research in law (pp. 31–63). Kluwer Academic Publishers.
Weiner, B. (1995). Judgments of responsibility: A foundation for a theory of social conduct. The Guilford Press.
Wissler, R. L., & Saks, M. J. (1985). On the inefficacy of limiting instructions: When jurors use prior conviction evidence to decide on guilt. Law and Human Behavior, 9(1), 37–48.
Yankah, E. N. (2003). Good guys and bad guys: Punishing character, equality and the irrelevance of moral character to criminal punishment. Cardozo Law Review, 25, 1019–1067.

22 Moral Dimensions of Political Attitudes and Behavior

Kate W. Guan, Gordon Heltzel, and Kristin Laurin

Many political decisions – how much governments should subsidize health care, at what age fetuses should be considered people, how to treat refugees, etc. – are inextricably tied to hard moral questions. Political views thus often reflect values about which people hold deep moral convictions, which influence people's identities and relationships (Iyengar et al., 2019) and fuel passion and polarization in modern politics (Skitka et al., 2015). Political prejudice is growing in many countries across the world (Gidron et al., 2020), with political groups' policy attitudes becoming increasingly homogeneous (Pew Research Center, 2014). Conflict has grown beyond quarrels over individual issues to clashes over which group is morally superior (Finkel et al., 2020). In America, these growing political divides are deeper than elsewhere (Boxell et al., 2024; but see Gidron et al., 2020). Congressional gridlock has doubled in the last 65 years, leaving governments unable to pass legislation (Ingraham, 2014). In the past two decades alone, Americans have increasingly curated their world to exclude political opponents, as they seek out media sources that affirm their views (Rodriguez et al., 2017) and move to places replete with allies (Motyl et al., 2014). Trust in the community and social institutions is declining (Jones, 2015). In line with most empirical work examining morality in politics, we concentrate here on the American political context. This context provides an informative case study of what happens when politics and morality become intensely entwined, and many of the political psychological processes we describe have been replicated elsewhere (McCoy et al., 2018; Viciana et al., 2019). After outlining how and why moral and political beliefs go hand in hand, we describe how morality motivates political behaviors that promote one's favored agenda, inspiring people to use all available means to advance their causes. We then consider a paradox: While driving people to advance their political causes, moral motivations inhibit the especially pragmatic action of engaging with political opponents. Finally, we consider potential ways out of this paradox.

Kate W. Guan and Gordon Heltzel contributed equally to this chapter.

22.1 Moral and Political Beliefs Are Bound Together

There are many pathways to adopting conservative (i.e., right-wing, traditional) or liberal (i.e., left-wing, progressive) stances on economic and sociocultural issues. Some people endorse opposing ideologies on these two classes of issues (Everett, 2013) – one can be economically conservative but socioculturally liberal (e.g., libertarians) or vice versa – but most people endorse the same ideology across both dimensions. People's political beliefs can come from their genetics (Hatemi et al., 2014), developmental factors (Feinberg, Wehling, et al., 2020), or material self-interest (Feldman, 1982). But perhaps most often people adopt ideologies that fit their psychological dispositions (e.g., Hibbing et al., 2014). For instance, conservatism attracts people who crave structure and predictability (Jost, 2017) and are wary of negativity and threats (Crawford, 2017; Hibbing et al., 2014; but see Brandt et al., 2021); liberalism attracts those with greater empathic concern (Robbins & Shields, 2014). One of the psychological characteristics most strongly linked to ideology is moral conviction (Skitka et al., 2015). Many people see sociocultural political issues (e.g., abortion, drug-related crimes) as pertaining to key moral concepts like human rights. But even economic issues – which on their face may seem purely pragmatic (e.g., governmental spending, infrastructure) – can become moralized when citizens, elites, or media tie them to harmful consequences, to moral emotions like disgust, or to broader moral principles (for a review, see Rhee et al., 2019). For example, it was widespread awareness of the harmful consequences of second-hand smoke that turned public smoking bans into a moral imperative (Rozin, 1999). Moralized political beliefs feel much stronger and more urgent than mere opinions, and liberals and conservatives generally moralize their political beliefs to similar degrees (Skitka et al., 2015). But the specific issues they feel conviction about differ: Liberals more strongly moralize issues like climate change and the environment; conservatives, issues like abortion and physician-assisted suicide. Liberals and conservatives also differ in who they think deserves more protection when it comes to questions about abortion and immigration, in who they hold responsible for poverty, and in whether they prioritize the nation or humanity as a whole (Koleva et al., 2012; Skitka & Tetlock, 1993). Existing theories disagree about the origins of these differences.

22.1.1 Moral Foundations Theory: Morality Causes Political Views

Moral foundations theory (MFT; Graham et al., 2013), today's predominant account of the relationship between moral and political views, posits that biological, cultural, and developmental differences determine what people count as morally relevant, and in turn shape their political identities. According to MFT, people evolved modular intuitions, such that they can feel moral concern in five different domains: 1) care/harm, 2) fairness/cheating, 3) loyalty/betrayal, 4) authority/subversion, and 5) purity/degradation
(liberty/oppression may be a sixth domain; Graham et al., 2013). These intuitions are activated to different degrees in different people, in part depending on biological predispositions to feel specific moral emotions more intensely (Inbar et al., 2012); for instance, disgust is specifically tied to judgments of purity (Tracy et al., 2019). These intuitions are also activated differently depending on things like sensitivity to threats and sociocultural factors (Graham et al., 2009, 2013). Thus, a child biologically more sensitive to disgust is more likely to moralize purity; one raised by vegans who taught compassion for animals is more likely to moralize care. By the MFT account, people adopt political beliefs that appeal to their most activated moral foundations. In some people, the care and fairness foundations (called the individualizing foundations, because they concern individual rights and freedoms) are much more strongly activated than the others. These people are drawn to liberal policies like protecting the welfare of all people, regardless of their identity. In others, the purity, loyalty, and authority foundations (called the binding foundations because they unite people into larger, cohesive groups) are activated almost as strongly as the individualizing ones. These people are drawn to conservative policies that prioritize the strength and welfare of their in-group. Returning to the earlier example, the disgust-sensitive child likely has more conservative attitudes about sexuality (Inbar et al., 2012), while the vegans' child likely has attitudes about factory farming more characteristic of liberals. Supporting this account, liberals and conservatives reliably differ in their endorsement of the binding foundations. Across hundreds of thousands of people from around the world (Graham et al., 2009, 2011), conservatives ascribed greater moral relevance than liberals did to considerations such as "whether or not someone did something to betray his or her group," "whether or not someone showed a lack of respect for authority," and "whether or not someone did something disgusting." Likewise, conservatives demanded more money to consider behaving in disloyal, impure, and subversive ways (Graham et al., 2009); for example, they demanded $10,000 to blaspheme their parents, while liberals would do it for only $600. And conservatives reference loyalty, authority, and purity more when describing peak experiences and turning points in their lives (McAdams et al., 2008). In these studies, liberals also endorse the individualizing foundations somewhat more than conservatives, though these differences are generally smaller. These robust differences help explain liberals' and conservatives' diverging policy attitudes. Across two studies totaling nearly 25,000 participants (Koleva et al., 2012), people's moral foundation profile predicted their stance on all political issues surveyed. For example, the more people endorsed the care foundation, the more they opposed the death penalty; likewise, those endorsing the purity foundation disapproved more of impure sexual acts (e.g., casual sex, pornography), same-sex relationships, and impure genetic practices (e.g., cloning). Despite the mountain of research MFT has generated, its detractors note that MFT's most popular measure conflates the moral foundations with well-known liberal and conservative differences (e.g., pride for one's country's history;
Kugler et al., 2014) and thus may have misidentified the true core differences in morality that cause political disagreements. Moreover, MFT’s proponents have rarely measured experimental effects of morality on political views, thus leaving open the possibility that they have misidentified the direction of causality (Bakker et al., 2021; Hatemi et al., 2019). In light of these criticisms, we consider two alternative accounts that suggest morality affects political ideology, as well as a third set of ideas that suggests political views may shape morality, rather than the reverse.

22.1.2 Two Alternative Accounts of How Morality Fuels Political Beliefs

Among the theories that challenge MFT's account of moral differences without disputing that these likely cause political disagreements, two notable ones are the model of moral motives (MMM; Janoff-Bulman & Carnes, 2013) and the affective harm account (AHA; Gray et al., 2022). While these models differ in important ways, both propose novel frameworks to understand human moral concerns and, in doing so, argue against MFT's view that liberals and conservatives are morally mismatched. Instead of the five moral foundations proposed by MFT, MMM suggests there are six moral concerns that can be mapped along two dimensions. The first dimension distinguishes approach from avoidance moral motives: prescriptive calls to provide help versus proscriptive calls forbidding harm. The second dimension distinguishes different contexts to which these motives apply: the self, the other, or the collective. Crossing these dimensions produces three approach-based goals oriented toward helping the self (industriousness), others (helping/fairness), and the collective (social justice), and three avoidance-based goals oriented toward protecting the self (moderation), others (not harming), and the collective (social order). MMM presents both a new framework for organizing the moral realm and a challenge to MFT's claim that only conservatives moralize group concerns. MMM argues that conservatives specifically have stronger avoidance group-level concerns: They are proscriptively driven to protect against threats to social order within their group by enforcing homogeneity and strict norm adherence. In contrast, liberals have stronger approach group-level concerns: They are prescriptively driven to promote social justice across groups via encouraging interdependence and shared responsibility. Supporting this hypothesis, across two studies, liberal participants more strongly endorsed the importance of providing for communal welfare whereas conservative participants more strongly endorsed social conformity and order. But liberals and conservatives did not differ in their endorsements of interpersonal (as opposed to group-level) concerns, like going out of one's way to help others or not taking advantage of others (Janoff-Bulman & Carnes, 2016). A second alternative account to MFT, the AHA (Gray et al., 2022; also see its theoretical ancestor, the theory of dyadic morality; Schein & Gray, 2017), argues that political disagreement arises solely from divergent perceptions of
harm. Like MMM, this account challenges MFT's five distinct moral concerns, suggesting that harm is the fundamental perception involved in all moral judgments: People judge an act as morally wrong when they perceive that an intentional agent has harmed a vulnerable patient (i.e., an entity capable of suffering). To explain MFT's five-dimensional findings while positing an evolved aversion only to harm – not to injustice, impurity, disloyalty, or disrespect of authority – AHA argues that culture and personal experiences lead groups and individuals to differ in who or what they perceive to be an intentional agent or vulnerable patient. Through this lens, the fundamental moral concerns specified by MFT (or even MMM) are merely descriptive labels that people use to categorize different types of harm in different types of dyads. Thus, the fundamental conflict between MFT and the AHA hinges on whether people can have intuitive moral judgments in situations where there is neither explicit harm nor a harmed victim. While MFT points to moral judgments in scenarios that are ostensibly harm-free (e.g., a person cooking and eating their pet after it dies), AHA argues that such scenarios still involve perceived harm, in this case perhaps to the soul of the pet or its owner. Turning to a political example, when conservatives say it is immoral to do something disloyal like burn one's country's flag, AHA assumes they must infer a vulnerable patient who suffers as a result (perhaps their country) and that they endorse the binding moral value of loyalty because of that perceived harm. Supporting this account, both liberals' and conservatives' judgments of an action's (im)morality most closely track their perceptions of how harmful (as opposed to impure, disloyal, unfair, or disobedient) the act is (Schein & Gray, 2015). Likewise, support for anti-GMO policies tracks perceptions of how harmful GMOs are more closely than perceptions of how impure they are (Gray & Schein, 2016). Both MFT and its challengers continue to accrue evidence in support of their respective theories of human moral concerns and how these concerns fuel political disagreements. In responding to each other's theoretical and empirical challenges, they have evolved over time. For instance, MFT researchers have expanded their moral pantheon to include the liberty foundation, creating new measures to capture concerns over freedom from oppression (Clifford et al., 2015). They find that liberals emphasize this foundation more than conservatives, which helps to account for liberals' social justice motives highlighted by MMM. They also find that libertarians emphasize this foundation more than liberals or conservatives do (Iyer et al., 2012), extending moral-political theorizing beyond liberal-conservative dichotomies. While sorting out their disagreements about how morality fuels political beliefs and divides, these accounts have largely overlooked the possibility that political beliefs fuel morality. We consider this possibility next.

22.1.3 Can Political Beliefs Instead Cause Moral Beliefs?

All three accounts we have discussed assume that moral values cause people to adopt particular political beliefs. This position is intuitively plausible: Most
people’s introspective experience is that they carefully consult their moral values, and of course the relevant facts, and use those as the basis from which to choose their policy positions. Nevertheless, there is reason to entertain the opposite possibility: That people often know which side of an issue they want to support, and they recruit moral values (and facts) to justify this position. This pattern reflects motivated cognition, which often occurs when people defend their political views (Liu & Ditto, 2012). At least some evidence suggests that liberals and conservatives selectively endorse moral values that justify their preferred political positions (Uhlmann et al., 2009). This work examined preferences for moral consequentialism, or the principle of maximizing positive outcomes overall. Conservatives read about a military policy that would ensure the greater good at the cost of some civilian lives, but the researchers manipulated whether those lives would be Iraqi or American. These participants supported the policy more in the former condition and justified their position by more strongly endorsing consequentialist values. Liberal participants faced the prospect of sacrificing one individual’s life to save a hundred, but the researchers manipulated whether that one individual had a stereotypically Black or White American name. These participants preferred not to sacrifice the Black individual and justified their position by denouncing consequentialist values. That set of studies suggests people’s moral values (in this case, their endorsement of consequentialism) can come from, rather than shape, their political preferences. Providing converging evidence, cross-lagged analyses in three separate panel studies revealed that political ideology better predicts people’s endorsement of the five moral foundations over time than the opposite (Hatemi et al., 2019). These data suggest that people’s political allegiances change which moral values they adopt, rather than the other way around. That said, both processes likely coexist and each may dominate at different times.

22.2 How Morality Motivates (and Demotivates) Political Action

Moral concerns are not only related to the contents of political attitudes; they also motivate people to act on those attitudes. People's behavior sometimes contradicts their attitudes: They love animals but eat cheeseburgers; loathe their in-laws but are friendly to them anyway. But this is less often the case for moralized political attitudes: Since they feel objectively true, universally applicable, and deeply emotional (Skitka et al., 2021), people are highly motivated to act on them, investing time and money to support their favored political causes. These efforts often take the form of constructive and democratic action, but strong moral beliefs can also inspire more destructive – even violent – behaviors. Morality fuels all sorts of political action with one notable exception: It inhibits, rather than promotes, engagement with opposing political parties.
Cross-party engagement – hearing opponents and persuading them to change their mind or to compromise – is often necessary or at least helpful for furthering one’s political aims, yet moral concerns can ironically reduce people’s motivation to participate in this helpful channel of political action. As we will see, this aversion to cross-party engagement often contradicts people’s explicit moral values, which raises questions about whether and how they reconcile this hypocrisy.

22.2.1 Moral Concerns Encourage People to Act in Support of Their Favored Policies

In pluralistic democratic societies where people disagree and successful policies require majority support, people can directly help their preferred policies' chances both through individual efforts (e.g., voting, speaking out, signing petitions) and by successfully inspiring others to join in collective action. When people's political beliefs are based on moral convictions, they are more likely to undertake these sorts of behaviors, in ways that can be more or less constructive.

22.2.1.1 Moral Concerns Can Encourage Democratic and Constructive Political Behavior

Moral convictions can motivate individual actions, such as speaking and acting in support of one's preferred political causes. For instance, people who feel their political choices reflect their core moral values more often vote in national elections (Skitka et al., 2021). Likewise, people speak up for their morally grounded views even at the risk of corporate backlash (Dungan et al., 2019) or social media ostracism (Crockett, 2017). For example, those who choose to be vegan for moral reasons are especially willing to evangelize their unpopular views on animal consumption to everyone from family to social media networks (Judge et al., 2022). And moral concerns can motivate individuals to join in collective political action like demonstrating and fundraising to support their stance on government-mandated university tuition increases (Sabucedo et al., 2018), discrimination against women (Zaal et al., 2011), and graduate student labor issues (Morgan, 2011). Voting, voicing one's views, and participating in collective action represent key civic responsibilities in well-functioning democracies – in this way, moral concerns can motivate individuals to take democratically sanctioned routes to promote their favored policies. Moral concerns can also help individuals rally others, inspiring them to pursue these same causes. Moralizers inspire effective group action by raising awareness of moral issues and signaling which stance their group should adopt (Spring et al., 2018), and by compelling copartisans to vote (Gerber & Rogers, 2009). People with strong, moralized views also seem prototypical of their political group (Goldenberg et al., 2022), which makes other group members want to befriend them and take up their causes (Hogg, 2001).

22.2.1.2 Moral Concerns Can Also Inspire Less Democratic Political Behaviors

Moralized political stances can also motivate less democratic, even violent means to political ends (Finkel et al., 2020). Because moral concerns feel absolute, people prioritize them so much that they will subvert other values and norms to achieve their moral ends. This can lead people with moralized views to become vigilantes, skirting due process to punish perceived transgressors; for example, sanctioning copartisans who stray from party norms (Marques et al., 1988; for a review, see Skitka et al., 2021) and excessively piling onto single targets online (Sawaoka & Monin, 2018). People with strong moral convictions might also try to draw attention to and rally support for their cause via destruction or even violence (Skitka et al., 2021). For example, people who moralize gender equality are more willing to vandalize and riot against organizations that discriminate against women (Zaal et al., 2011). Though drastic means may sometimes be necessary for progress, vigilantism and violence subvert democratic norms and endanger peaceful routes to societal change. And since most people find violent activism off-putting (Feinberg, Willer, & Kovacheff, 2020), moral movements that use violence to draw attention to their cause might, ironically, deter public support. Another way moral concerns can impede constructive political action is by heightening identity concerns. When people care more about seeming moral than about being moral (Aquino & Reed, 2002), they might choose superficially attractive yet ineffective actions. On social media, people can curate a morally concerned public persona through low-cost, low-impact behaviors, like calling out others' missteps (Rothschild & Keefer, 2017). On one hand, these behaviors could help rally people around their cause: If people are unaware of an issue, these public posts can raise awareness among new audiences; likewise, when audiences see someone called out for their missteps, they can learn to avoid similarly condemnable behaviors. On the other hand, awareness and learning have less of a tangible impact than other behaviors: Social media advocates may feel that they have done enough to rally support for their moral cause, licensing them to skip out on higher-cost, higher-impact actions like volunteering, voting, or protesting (Merritt et al., 2010). These image-focused actions can also undermine collective action, as in the misguided July 2020 Instagram campaign to post black squares tagged #blacklivesmatter: This public moral signal crowded out organizing messages by Black Lives Matter leaders using that same hashtag (see also Brady & Crockett, 2019). Moral outrage and other online behaviors can catalyze collective action (Spring et al., 2018) but when motivated by self-promotion, they more often impair it (Smith et al., 2019).

22.2.2 Morality Undermines Motivation to Engage Constructively with Political Opponents

As we have seen, moral concerns can drive people to engage in actions aimed at promoting their political goals, though sometimes these actions are less effective
and socially sanctioned. But in pluralistic democratic societies, promoting one’s favored policies may not be enough: Achieving majority support for a policy often requires engagement between opposing factions, as opponents can be persuaded to join the cause or negotiated with to at least partially advance it. Though people endorse various moral values that encourage such cross-divide engagement in principle, their other moral concerns (paradoxically) undermine it in practice. Though there are practical and principled reasons for politically motivated people to engage across political divides, they seldom do so. The same moral values that fuel direct political action also keep people away from their political opponents and out of cross-party conversations that could actually help their political cause. Specifically, moral concerns both pull people toward those with politically similar beliefs, and push them away from those who hold different political opinions.

22.2.2.1 Moral Concerns Encourage Cross-Divide Engagement in Principle but Obstruct It in Practice

In a democratic society, where successful policies require majority support, people with moralized political beliefs need to engage with political opponents if they hope, practically, to garner majority support. When people do not have majority support for their favored policy, engagement is practically necessary to get it: Advocates can hear out opponents' concerns and either persuade them to change their minds or compromise toward a mutually acceptable solution (e.g., a moderate or integrative policy). Even when they have majority support, advocates might still find it practically helpful to engage, since this promotes longer-lasting policies: When policies are passed with only slim majority support – without input from the minority group, as is often the case amid polarization – they risk being overturned as soon as that opposing minority gains power in the future (Barber et al., 2015). In contrast, engagement can help advocates conjure majority support for long-lasting policies, meaningfully and sustainably advancing their moral causes. Since engaging with opponents is often pragmatically necessary to overcome sharp disagreement and pass long-lasting policies, having moralized political beliefs should presumably motivate people to engage with opponents, if only to advance their causes. Moreover, pragmatics aside, both liberals and conservatives hold values that seem like they would promote engagement across political divides. They both endorse care and fairness above all (Graham et al., 2009), so they should gladly cooperate on policies advancing these shared moral values (e.g., reforms to curb gun violence or improve low-income students' access to education). Likewise, they both agree that it is morally important for people to form their political beliefs through rational means (Ståhl et al., 2016), such as evaluating all available evidence and facts, including those that support the opponents' position. They also prefer open-minded, tolerant, and cooperative individuals and wish to espouse these traits themselves (Heltzel & Laurin, 2021), which should
compel them to open-mindedly engage and cooperate with opponents. Finally, both groups support in principle democratic values like tolerance of opposing views (though conservatives somewhat less; Benjamin et al., 2022), despite sometimes subverting these values for political gain (McCoy et al., 2020). Together, both sides’ tolerant, rational, open-minded, and cooperative moral values, not to mention their pragmatic concerns, should motivate well-meaning engagement with opponents. Although these pragmatics and principles should push Americans toward constructive engagement, evidence instead suggests they have become more politically segregated in recent decades (Heltzel & Laurin, 2020; Iyengar et al., 2019) and moralization seems to be the culprit. In other words, rather than helping to bridge divides, people’s moral concerns interfere with their ability and motivation to communicate with opponents (Kovacheff et al., 2018). We suggest two types of processes at work here: Broader psychological motives may pull people toward politically and morally similar others, while political and moral differences may also push people to actively reject political opponents and cross-divide communication.

22.2.2.2 Broad Motives That Pull People Toward Similar Political Others

There are at least three basic psychological motives that pull people toward like-minded moral others and away, though not intentionally so, from political opponents. In each case, people are seeking to fill a need that has nothing to do with politics but that ends up having political consequences. First, people have a psychological need to feel that they understand the world, which they can fulfill by seeking out confirmation for their views. Most straightforwardly, this means partisans drift toward news media outlets that align with their political leanings (Iyengar & Hahn, 2009): Conservatives tune in to Fox News while liberals turn on MSNBC. In person and online, people prefer to spend time with and be close to others who share their traits, hobbies, or attitudes, and this is especially true for political and moral attitudes. People prefer politically like-minded neighbors, physicians, and in-laws, and will even sit closer to strangers who appear to share their political beliefs (Skitka et al., 2021). Thus, just as people's need to understand the world can lead them to seek out congenial information, it can also draw them to spend time with like-minded others (Hillman et al., 2022). Not only do people prefer to socialize with political allies, but they especially prefer allies who show strong commitment to their moral values by expressing outrage at opponents (Goldenberg et al., 2022). Online, X (previously Twitter) users with similar moral beliefs interact more often with each other and use morally laden language that appeals to their own group but affronts opponents (Brady et al., 2017; Dehghani et al., 2016). For example, compared to conservatives, liberals spontaneously use more language invoking concerns about harm and fairness and less invoking concerns about authority, purity, and loyalty (Feinberg & Willer, 2019). They also go a step further than this; on Facebook,
when people see a post that violates their moral foundations, they often unfriend whoever posted or shared it (Neubaum et al., 2021). As a result of these processes, whereby people seek to have their moral values and beliefs validated by politically like-minded others, they can become enmeshed, sometimes unintentionally, in social networks that preclude friendly contact with political opponents. Second and relatedly, when people try to fulfill their need to feel belonging and social connectedness, they often end up gathering in places with politically like-minded others (Hillman et al., 2022). People seek out places where they expect to belong, and these can often be the same places their political allies choose. For example, people want to move to communities with subtle cues that appeal to them and, as it happens, to their copartisans – churches and rural-themed restaurants for conservatives, art galleries and organic food stores for liberals (Motyl et al., 2020). Sometimes seeking out belonging entails moving away from opponents: For instance, participants who identified as strong liberals or conservatives were 60 percent more likely to move when they lived in ideologically misfitting communities compared to when they lived in ideologically fitting communities (Motyl et al., 2014). An unintended consequence of these actions, driven by a desire to feel connected to others around them, is that people segregate themselves from opponents, and have fewer interactions across divides. Finally, people's instinct to protect their emotions and preserve their energy might drive them to avoid cross-party interactions (for a review, see Minson & Dorison, 2022). Regarding emotions, people want to feel pleasant feelings and may find this easier when they avoid cross-party interactions. Imagine that you enter into a conversation with a political opponent. If this person seems reasonable, or if they make an argument you find compelling, this could threaten your certainty in your core moral beliefs, leading you to feel anxious and confused. Alternatively, if the political opponent seems unreasonable, or makes an argument you do not buy, you may feel angry and frustrated that anyone could be so selfish or stupid. Indeed, many people avoid hearing from opponents in part because they expect doing so would upset them (Dorison et al., 2019), and will sometimes even pay to avoid this (Frimer et al., 2017). In other words, people's basic desire to preserve their emotional well-being may lead them to avoid engaging with dissidents. Regarding energy, people might avoid hearing opposing views because to consider them and logically weigh their merits requires time and immense cognitive effort, which people typically do not want to exert (Kahneman et al., 1982). These psychological needs have the side effect of making cross-party conversation rarer, even though people's intention is merely to preserve their emotional well-being and mental stamina.

22.2.2.3 Moral and Political Differences Actively Push People Apart

Compounding these effects of basic motives, moral differences can also directly motivate people to deliberately reject contact with political opponents. For one
thing, people often hear of their opponents doing things that violate their moral values. Liberals hear of conservatives protecting the rich and deporting immigrants; conservatives hear of liberals degrading revered statues and disrespecting the national anthem. These likely prompt condemnation and, since people feel impotent to change their opponents’ unsavory behaviors, they are left with cold contempt (Malle et al., 2018), further dissuading conversation. These reactions to perceived moral violations may be further linked to people’s stereotypes of their opponents as morally miscalibrated. For example, conservatives stereotype liberals as being unpatriotic and overly sensitive (Clifford, 2020) – in other words, too low on the loyalty and too high on the care foundation; conversely, liberals stereotype conservatives as callous to the suffering and injustice of others (i.e., low on care and fairness). More broadly, each group stereotypes the other as hypocritical, selfish, and close-minded (Iyengar et al., 2019). Making matters worse, due to the geographical and social clustering described earlier, people rarely have opportunities to correct these stereotypes; rather, their primary exposure to opponents is through partisan news sources, which often present sensationalized coverage of opponents’ moral violations (Yang et al., 2016). Setting aside actual behavior, people may choose to reject contact with opponents simply because of the positions they endorse or even merely entertain. When the policies people support become sacred values, they may find it offensive for anyone to even question these truths and debate alternatives (Critcher et al., 2012; Merritt & Monin, 2011; Tetlock, 2003). As an example outside of the political domain, many people hold the protection of children’s lives as a sacred value; the mere thought of sacrificing a child’s life for money fills people with moral outrage and the desire to cleanse themselves of such an immoral thought (Tetlock, 2003). Moreover, when people learn that someone has entertained a debate on the matter, they are motivated to punish that individual and cut ties with them – even if that person eventually made the right decision to pass up the money and save the child’s life. Translating this into the political domain, even if conservatives and liberals ultimately come to agree – as many do, for instance, on issues like marriage equality and climate change – it is likely that their latitudes of acceptance differ. That is, a conservative who embraces marriage equality may consider that it is legitimate to oppose it and may have spent time deliberating both sides before ultimately coming down in support. Liberals for whom marriage equality is a sacred value might find that deliberation horrifying and disgusting and prefer to shun anyone who was not immediately on their side. Some claim that conservatives are more likely than liberals to actively avoid or try to silence their political opponents – that conservatives more strongly dislike dissimilar others (Jost, 2017; Kugler et al., 2014) and are responsible for more politically motivated violence than are liberals (Kalmoe & Mason, 2022). Others disagree, arguing that liberals and conservatives are similarly prejudiced, feeling equally strong animosity toward each other (Crawford & Pilanski, 2014; Ganzach & Schul, 2021). This debate may eventually be resolved, but for the time
being it is clear that, whether to the same or different degrees, liberals and conservatives both openly discriminate against opponents and negatively stereotype them – beyond this, they also censor their opponents’ opinions and support violence against them (Crawford & Pilanski, 2014; Kalmoe & Mason, 2022).

22.2.3 How People Morally Justify Disengagement from Cross-Ideological Dialogue

If people in principle believe they should be tolerant and wish to make political progress but in practice avoid opponents and even intentionally suppress their views, how do they not see themselves as moral hypocrites? Consider these three explanations. First, people might not even notice that their segregationist behavior violates their tolerant values and pragmatic political interests (Crawford & Pilanski, 2014). If people unintentionally gravitate toward politically congenial people and information, they may be oblivious to their exclusion of opponents. When they move to a neighborhood that feels right to them, they may not realize this precludes friendly connections with political adversaries. And when they choose the comfort of ideologically aligned news, they may fail to notice how this violates their open-minded values. Second, people may notice the disconnect but refuse responsibility for it. As noted earlier, people excel at justifying desired conclusions; if partisans are motivated to find fault in their opponents, they may find ways to blame their lack of contact with opponents on the opponents themselves. For example, partisans might claim that they are willing to engage, if only their close-minded opponents were equally willing to socialize (Iyengar et al., 2019; Iyengar & Westwood, 2015). Or they might argue that they have tried to engage with opponents before and therefore know how useless it would be to engage further, perhaps claiming that their opponents are stubbornly immune to persuasion or compromise, or that they already know exactly what they will say (Yeomans, 2021). These rationalizations blame opponents for the impasse, allowing partisans to acknowledge that they are not engaging without feeling guilty about it. Finally, people may see their political stonewalling as morally righteous (Hawkins et al., 2019). If conservatives view liberals as overly sensitive, unpatriotic flag-burners who welcome criminal immigrants, they likely find it perfectly justified – or even necessary – to sacrifice tolerance at the more sacralized altar of national security. If liberals view conservatives as heartless, gun-brandishing racists, they may similarly forgo tolerance to advance their sacred value of racial justice. Indeed, people perceive prejudice toward moral opponents to be uniquely justified (Cole Wright et al., 2008), feeling no dissonance when they disparage, censor, and disregard groups with dissimilar morals (Crawford & Pilanski, 2014; Iyengar & Westwood, 2015). While people endorse empathy and tolerance in the abstract, they believe these should be withheld from immoral people (Haidt et al., 2003; Wang & Todd, 2020).

22.2.3.1 Overcoming Moral Barriers to Engagement with Political Opponents

Interventions could help overcome these challenges and foster bipartisan engagement with opposing opinions and individuals. We consider how to motivate people to engage across political divides, how to ensure people approach these opportunities in good faith, and how to structure this engagement for optimal results.

22.2.4 Improving Motivation to Engage With, and Attitudes Toward, Political Opponents

People actively avoid engaging with their political opponents, so interventions fostering positive cross-party engagement must first overcome this motivational barrier, by making people see positive engagement as a desirable goal. Since most people endorse tolerance, helping them see the hypocrisy of their avoidant behavior might induce them to act differently (Batson et al., 1999). Of course, people could instead resolve this hypocrisy by disavowing tolerance; to prevent this, interventions should emphasize tolerance as a primary moral virtue (see Kovacheff et al., 2018). Alternatively, interventions could leverage social pressures. People are strongly motivated to behave in ways that their group approves of (Hillman et al., 2022); since partisans strongly prefer copartisans who seek to better understand, rather than avoid, opponents (Heltzel & Laurin, 2021), interventions could harness this social approval to motivate political discourse. Such interventions could increase people's awareness of their avoidance of opposing political views and motivate them to start seeking ways to engage. Still, even if partisans want to engage across divides, their dislike for opponents may doom their attempts. Partisans dislike their opponents and expect to be disliked in return (Lees & Cikara, 2020). This may create a self-fulfilling prophecy, whereby partisans enter into a cross-party interaction feeling defensive and unforgiving, causing it to go poorly. That is, because they expect to be disliked, partisans may approach an opposing interlocutor with cold indifference, offending the opponent and thereby creating the chilly atmosphere they initially expected. Such interactions likely only reinforce negative stereotypes of opponents, hampering productive conversation and dissuading well-intentioned partisans from trying again. Interventions seeking to foster productive and sustained cross-divide engagement, then, should not only motivate people to engage but also inoculate against partisan animosity that would otherwise foil pleasant engagement. To this end, we discuss intervention strategies that target individuals' attitudes and interpersonal relationships (for a broader review, see Hartman et al., 2022).

22.2.4.1 Interventions Targeting Individuals’ Attitudes

As a first step toward ensuring more productive cross-party conversations down the line, interveners can focus on the individual level, changing individuals’ thoughts and feelings about their opponents, ahead of any interactions with


these opponents. A recent large-scale study tested the efficacy of 25 interventions aimed at improving attitudes toward political opponents (Voelkel et al., 2023) and found the most successful ones worked by helping foster individuals’ empathy toward, and perceived similarity to, out-group partisans; this finding provides some good starting places. Perspective-taking interventions effectively foster empathy and self–other overlap between apolitical groups (Todd & Galinsky, 2014) and so might also work to reduce partisan animosity (Saveski et al., 2021). These interventions have people imagine the thoughts and feelings of someone else (e.g., an outgroup member), raising awareness of and responsiveness to their experiences. But features of the political context make it likely that these interventions would backfire. Many partisans think their opponents are immoral (Finkel et al., 2020) and feel hated by them (Lees & Cikara, 2020). When asked to imagine conservatives’ perspectives, liberals might feel that their core identities and moral worldviews are under threat, leading their attitudes to worsen (and vice versa for conservatives imagining liberals’ perspectives; Sassenrath et al., 2016; Vorauer, 2013). Likewise, when conservatives feel that liberals have cheated to gain a political advantage (e.g., by passing laws that restrict or expand voting access), imagining their perspectives might lead conservatives to dwell on liberals’ cheating, dislike them, and cheat in response (Epley et al., 2006). In other words, perspective-taking interventions are unlikely to improve partisan animosity because they do not change people’s beliefs about opponents; when people’s stereotypes about opponents are negative, taking their perspective can backfire by encouraging people to recall and dwell on these unsavory stereotypes – thereby justifying and reinforcing their prejudices. A more promising intervention strategy is to invalidate negative stereotypes by correcting partisans’ misperceptions, or inaccurate beliefs about their opponents, helping them see that they are more similar to these out-group members than previously believed. This can be done in two ways. First, interventions can correct people’s first-order perceptions of their opponents. Americans overestimate the extremity of their opponents’ average policy preferences (Fernbach & Van Boven, 2021) and how often their opponents talk about politics (Druckman et al., 2022). For instance, liberals overestimate how many conservatives are obscenely wealthy, and conservatives overestimate how many liberals are militant atheists (Ahler & Sood, 2018). When partisans discover that these stereotypes are not true – that their opponents are more moderate and less vocal than previously thought – they like them more. Second, interventions can correct second-order perceptions of how they are seen by opponents. For example, people overestimate how much they are disliked and dehumanized by their opponents (Lees & Cikara, 2020; MooreBerg et al., 2020); when liberals discover that conservatives detest and dehumanize them much less than anticipated, liberals tend to like and humanize conservatives more (and vice versa for conservatives). Conceptually, these interventions likely work because they highlight commonalities between


partisans and their opponents, thereby fostering empathy and self–other overlap (Hartman et al., 2022; Voelkel et al., 2023). Methodologically, interventions aimed at correcting misperceptions are more likely to succeed when they show these commonalities through videos or stories of real people interacting, rather than telling participants that these commonalities exist using results from polls or studies (Hawkins et al., 2019). Indeed, in the large-scale test of interventions mentioned earlier (Voelkel et al., 2023), three of the top five most successful interventions used vivid, engaging videos of real partisans discussing their beliefs and values either alone or with an opposing partisan; this likely helped partisans to realistically see what their opponents are like and how those opponents feel about them.

22.2.4.2 Interventions Targeting Interpersonal Relationships

Interventions aimed at individual attitudes, such as the misperception-correcting strategies discussed in Section 22.2.4.1, may be simpler to implement than those aimed at interpersonal interactions. However, individual-level interventions should be followed with interventions that bring partisans into actual contact with opponents, as these likely have a stronger psychological impact, helping to more effectively improve attitudes and facilitate better dialogue. Contact reliably improves prejudicial attitudes between even adversarial groups with long-lasting effects, boasting a meta-analyzed average effect of r = –0.22 (Dovidio et al., 2017; Pettigrew & Tropp, 2006). Though contact interventions were originally designed to improve interracial relations, they have been applied to other group settings, including morally conflicting groups: Israeli and Palestinian children who attended summer camp together developed lasting positive attitudes toward each other (White et al., 2021). Contact interventions are most likely to succeed under specific conditions (Pettigrew & Tropp, 2006): when contact is repeated, institutionally sanctioned, lasts more than 10 minutes, and when participating groups feel they have equal status and are working toward a common goal under a shared identity (Levendusky, 2018). For conservatives and liberals, then, contact should include multiple, not-too-brief interactions where institutional authorities (e.g., leaders, policymakers) encourage cooperation toward any shared goal – even a nonpolitical one – under the banner of a broader, shared identity. Care should also be taken to ensure both parties feel they have equal footing. Interactions such as those on partisan news platforms, where a conservative spokesperson joins a liberal broadcast only to have their opinions ridiculed, are unlikely to benefit participants’ intergroup feelings.

Contact works for a variety of reasons. For one, it can correct negative stereotypes and exaggerated perceptions of the out-group (Pettigrew & Tropp, 2006). Because contact allows partisans to see first-hand proof that their opponents are not as awful as expected, these interventions likely work better than indirect interventions in which partisans read about opponents or see them in videos or stories. That said, direct conversation between liberals and


conservatives – who may go into these interactions deeply disliking each other – can easily go wrong; as such, we recommend preceding contact-type interventions with the individual-level interventions described earlier. Contact also works by deemphasizing group boundaries. By having partisans work together toward a shared goal, they feel like part of one superordinate group (e.g., as Americans; Levendusky, 2018). As a result, partisans can see each other as individuals and bond over shared nonpolitical interests and values (e.g., hobbies, family).

22.2.5 Improving Constructiveness of Contact and Dialogue

Once people are motivated to engage in cross-party dialogue and like opponents enough to approach it in good faith, there are still many opportunities for the conversation to derail. We close by identifying conditions under which political conversations, once initiated, remain pleasant and constructive, especially given that conversation may drift to morally relevant (and therefore potentially divisive) topics.

Conversations between partisan opponents may fare better when participants foster empathy by highlighting shared moral ground. Since morality is key to building trust and liking, cross-party conversations will naturally fare better when they invoke shared moral values (e.g., care and fairness; Graham et al., 2009). But even when conversations bring up moral disagreements, partisans can avoid conflict by discussing how their personal experiences have informed their political views: Compared to facts that can be dismissed as fake news, personal experiences are difficult to refute and easier to empathize with (Kubin et al., 2021), allowing even staunch opponents to see opposing views as reasonable and legitimate (Stanley et al., 2020). For example, conservatives respect a liberal’s gun control stance more if the stance stems from having suffered from gun violence and less if it stems from statistics supporting gun regulation. These strategies can allow people to talk about the loaded moral beliefs that infuse their political views while still increasing empathy and improving attitudes toward opponents.

Another strategy to facilitate political discussions is to make political opponents feel respected, heard, and included. Respect can make interactions friendlier. Telling political opponents that one respects their status can make them less defensive (Moore-Berg et al., 2020) and more friendly toward you. In one study, when an opponent who disagreed nevertheless acknowledged respect for participants’ views on the Affordable Care Act, participants viewed the opponent more positively and were more willing to give them money in a dictator game (Bendersky, 2014). Making sure your opponents feel heard can have similar effects (Yeomans et al., 2020). This is even true among groups with a history of violence: When members of conflict-affected communities in Colombia were able to tell their personal experiences to ex-combatants on the other side, they liked the ex-combatants more (Ugarriza & Nussio, 2017). And when people include rather than exclude an opponent, even in low-stakes online


conversations, the opponent likes them more and sees them as more moral (Voelkel et al., 2021). These strategies may also succeed because they elicit reciprocation, opening the door to good-faith compromise and agreement. When people listen to each other, their opinions shift closer to the center. For example, after door-to-door canvassers nonjudgmentally listened to participants’ personal narratives about immigration policy, those participants shifted their views closer to the canvassers’ (Kalla & Broockman, 2020). When a person hears their opponent out, they can better acknowledge their opponent’s arguments, making them seem informed and unbiased (Hussein & Tormala, 2021; Xu & Petty, 2021); as a result, opponents may feel more willing to soften their position and compromise. For example, when liberals show they properly understand conservatives’ pragmatic concerns about the economic opportunities of pipeline projects, conservatives may feel more receptive to reasons for canceling pipeline projects. When people have heard each other, they can also speak to each other’s moral concerns, and find common ground in that way; for example, liberals can highlight the purity violations inherent to pollution, and conservatives can highlight the fairness benefits of funding the military as an employer and educator of the disadvantaged (Feinberg & Willer, 2013).

22.3 Conclusion

Politics have become increasingly intertwined with morality. As political issues become moralized, people feel compelled to ensure their side succeeds. The standoff between the two sides increasingly feels like a high-stakes conflict between good and evil. Despite the noble aims underlying moral values, their strong ties to politics have impaired goodwill among citizens and the efficacy of their government. People striving to satisfy basic motives gravitate toward morally and politically like-minded others, but in a more direct sense they also feel contempt and condemnation toward those who disagree, resulting in record levels of partisan animosity. As a result, partisan communities increasingly segregate rather than communicate.

To combat these forces and increase people’s willingness and ability to communicate across divides, one might moralize political tolerance, correct misperceptions of the political divide that emerge when people rely on their imaginations to picture their typical opponent, and instead promote real and constructive contact between opposing political sides. Globally, activist groups like Braver Angels (in the United States) and Diskutier Mit Mir (in Germany) have begun these efforts, facilitating conversations across political divides. Similarly, Stanford’s Center for Deliberative Democracy has brought together hundreds of politically diverging Americans and encouraged courteous political debate. Given the digital and geographical segregation between liberals and conservatives, more active and widespread efforts are needed to encourage both


virtual and in-person contact with opponents. Combined, these strategies can strengthen cross-divide communication and, in doing so, help people and democracies reach their political goals and maybe even find common moral ground.

References Ahler, D. J., & Sood, G. (2018). The parties in our heads: Misperceptions about party composition and their consequences. Journal of Politics, 80(3), 964–981. Aquino, K., & Reed, A. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83(6), 1423–1440. Bakker, B. N., Lelkes, Y., & Malka, A. (2021). Reconsidering the link between selfreported personality traits and political preferences. American Political Science Review, 115(4), 1482–1498. Barber, M., McCarty, N., Mansbridge, J., & Martin, C. J. (2015). Causes and consequences of polarization. In J. Mansbridge & C. J. Martin (Eds.), Political negotiation: A handbook (pp. 37–90). Brookings Institution Press. Batson, C. D., Thompson, E. R., Seuferling, G., Whitney, H., & Strongman, J. A. (1999). Moral hypocrisy: Appearing moral to oneself without being so. Journal of Personality and Social Psychology, 77(3), 525–537. Bendersky, C. (2014). Resolving ideological conflicts by affirming opponents’ status: The Tea Party, Obamacare and the 2013 government shutdown. Journal of Experimental Social Psychology, 53, 163–168. Benjamin, R., Laurin, K., & Chiang, M. (2022). Who would mourn democracy? Liberals might, but it depends on who’s in charge. Journal of Personality and Social Psychology, 122(5), 779–805. Boxell, L., Gentzkow, M., & Shapiro, J. M. (2024). Cross-country trends in affective polarization. Review of Economics and Statistics, 106(2), 557–565. Brady, W. J., & Crockett, M. J. (2019). How effective is online outrage? Trends in Cognitive Sciences, 23(2), 79–80. Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, 114(28), 7313–7318. Brandt, M. J., Turner-Zwinkels, F. M., Karapirinler, B., Van Leeuwen, F., Bender, M., van Osch, Y., & Adams, B. (2021). The association between threat and politics depends on the type of threat, the political domain, and the country. Personality and Social Psychology Bulletin, 47(2), 324–343. Clifford, S. (2020). Compassionate democrats and tough republicans: How ideology shapes partisan stereotypes. Political Behavior, 42(4), 1269–1293. Clifford, S., Iyengar, V., Cabeza, R., & Sinnott-Armstrong, W. (2015). Moral foundations vignettes: A standardized stimulus database of scenarios based on moral foundations theory. Behavior Research Methods, 47(4), 1178–1198. Cole Wright, J., Cullum, J., & Schwab, N. (2008). The cognitive and affective dimensions of moral conviction: Implications for attitudinal and behavioral measures of interpersonal tolerance. Personality and Social Psychology Bulletin, 34(11), 1461–1476.


Crawford, J. T. (2017). Are conservatives more sensitive to threat than liberals? It depends on how we define threat and conservatism. Social Cognition, 35(4), 354–373. Crawford, J. T., & Pilanski, J. M. (2014). Political intolerance, right and left. Political Psychology, 35(6), 841–851. Critcher, C. R., Inbar, Y., & Pizarro, D. A. (2012). How quick decisions illuminate moral character. Social Psychological and Personality Science, 4(3), 308–315. Crockett, M. J. (2017). Moral outrage in the digital age. Nature Human Behaviour, 1, 769–771. Dehghani, M., Johnson, K., Hoover, J., Sagi, E., Garten, J., Parmar, N. J., Vaisey, S., Iliev, R., & Graham, J. (2016). Purity homophily in social networks. Journal of Experimental Psychology: General, 145(3), 366–375. Dorison, C. A., Minson, J. A., & Rogers, T. (2019). Selective exposure partly relies on faulty affective forecasts. Cognition, 188, 98–107. Dovidio, J. F., Love, A., Schellhaas, F. M. H., & Hewstone, M. (2017). Reducing intergroup bias through intergroup contact: Twenty years of progress and future directions. Group Processes and Intergroup Relations, 20(5), 606–620. Druckman, J. N., Klar, S., Krupnikov, Y., Levendusky, M., & Ryan, J. B. (2022). (Mis) estimating affective polarization. Journal of Politics, 84(2), 1106–1117. Dungan, J. A., Young, L., & Waytz, A. (2019). The power of moral concerns in predicting whistleblowing decisions. Journal of Experimental Social Psychology, 85, Article 103848. Epley, N., Caruso, E., & Bazerman, M. H. (2006). When perspective taking increases taking: Reactive egoism in social interaction. Journal of Personality and Social Psychology, 91(5), 872–889. Everett, J. A. C. (2013). The 12 Item Social and Economic Conservatism Scale (SECS). PLoS ONE, 8(12), Article e82131. Feinberg, M., Wehling, E., Chung, J. M., Saslow, L. R., & Paulin, I. M. (2020). Measuring moral politics: How strict and nurturant family values explain individual differences in conservatism, liberalism, and the political middle. Journal of Personality and Social Psychology, 118(4), 777–804. Feinberg, M., & Willer, R. (2013). The moral roots of environmental attitudes. Psychological Science, 24(1), 56–62. Feinberg, M., & Willer, R. (2019). Moral reframing: A technique for effective and persuasive communication across political divides. Social and Personality Psychology Compass, 13(12), Article e12501. Feinberg, M., Willer, R., & Kovacheff, C. (2020). The activist’s dilemma: Extreme protest actions reduce popular support for social movements. Journal of Personality and Social Psychology, 119(5), 1086–1111. Feldman, S. (1982). Economic self-interest and political behavior. American Journal of Political Science, 26(3), 446–466. Fernbach, P. M., & Van Boven, L. (2021). False polarization: Cognitive mechanisms and potential solutions. Current Opinion in Psychology, 43, 1–6. Finkel, E. J., Bail, C. A., Cikara, M., M., Ditto, P. H., Iyengar, S., Klar, S., Mason, L., McGrath, M.C., Nyhan, B., Rand, D. G., Skitka, L. J., Tucker, J., Van Bavel, J. J., Wang, C. S., & Druckman, J. N. (2020). Political sectarianism in America. Science, 370(6516), 533–536.


Frimer, J. A., Skitka, L. J., & Motyl, M. (2017a). Liberals and conservatives are similarly motivated to avoid exposure to one another’s opinions. Journal of Experimental Social Psychology, 72, 1–12. Ganzach, Y., & Schul, Y. (2021). Partisan ideological attitudes: Liberals are tolerant; the intelligent are intolerant. Journal of Personality and Social Psychology, 120(6), 1551–1566. Gerber, A. S., & Rogers, T. (2009). Descriptive social norms and motivation to vote: Everybody’s voting and so should you. Journal of Politics, 71(1), 178–191. Gidron, N., Adams, J., & Horne, W. (2020). American affective polarization in comparative perspective. Cambridge University Press. Goldenberg, A., Abruzzo, J. M., Huang, Z., Schöne, J., Bailey, D., Willer, R., Halperin, E., & Gross, J. J. (2022). Homophily and acrophily as drivers of political segregation. Nature Human Behaviour, 7(2), 219–230. Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55–130. Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029–1046. Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the moral domain. Journal of Personality and Social Psychology, 101(2), 366–385. Gray, K., MacCormack, J. K., Henry, T., Banks, E., Schein, C., Armstrong-Carter, E., Abrams, S., & Muscatell, K. A. (2022). The affective harm account (AHA) of moral judgment: Reconciling cognition and affect, dyadic morality and disgust, harm and purity. Journal of Personality and Social Psychology, 123(6), 1199–1222. Gray, K., & Schein, C. (2016). No absolutism here: Harm predicts moral judgment 30 better than disgust – Commentary on Scott, Inbar, & Rozin (2016). Perspectives on Psychological Science, 11(3), 325–329. Haidt, J., Rosenberg, E., & Hom, H. (2003). Differentiating diversities: Moral diversity is not like other kinds. Journal of Applied Social Psychology, 33(1), 1–36. Hatemi, P. K., Crabtree, C., & Smith, K. B. (2019). Ideology justifies morality: Political beliefs predict moral foundations. American Journal of Political Science, 63(4), 788–806. Hatemi, P. K., Medland, S. E., Klemmensen, R., Oskarsson, S., Littvay, L., Dawes, C., Verhulst, B., Mcdermott, R., Nørgaard, A. S., Klofstad, C., Christensen, K., Johannesson, M., Magnusson, P. K. E., Eaves, L. J., & Martin, N. G. (2014). Genetic influences on political ideologies: Twin analyses of 19 measures of political ideologies from five democracies and genome-wide findings from three populations. Behavioral Genetics, 44(3), 282–294. Hartman, R., Blakey, W., Womick, J., Bail, C., Finkel, E. J., Han, H., Sarrouf, J., Schroeder, J., Sheeran, P., Van Bavel, J. V., Willer, R., & Gray, K. (2022). Interventions to reduce partisan animosity. Nature Human Behaviour, 6(9), 1194–1205. Hawkins, S., Yudkin, D., Juan-Torres, M., & Dixon, T. (2019). Hidden Tribes: A study of America’s polarized landscape [Report]. More in Common. https:// hiddentribes.us/media/qfpekz4g/hidden_tribes_report.pdf


Heltzel, G., & Laurin, K. (2020). Polarization in America: Two possible futures. Current Opinion in Behavioral Sciences, 34, 179–184. Heltzel, G., & Laurin, K. (2021). Seek and ye shall be fine: Attitudes toward politicalperspective seekers. Psychological Science, 32(11), 1782–1800. Hibbing, J. R., Smith, K. B., & Alford, J. R. (2014). Differences in negativity bias underlie variations in political ideology. Behavioral and Brain Sciences, 37(3), 297–307. Hillman, J. G., Fowlie, D. I., & MacDonald, T. K. (2022). Social verification theory: A new way to conceptualize validation, dissonance, and belonging. Personality and Social Psychology Review, 27(3), 309–331. Hogg, M. A. (2001). A social identity theory of leadership. Personality and Social Psychology Review, 5(3), 184–200. Hussein, M. A., & Tormala, Z. L. (2021). Undermining your case to enhance your impact: A framework for understanding the effects of acts of receptiveness in persuasion. Personality and Social Psychology Review, 25(3), 229–250. Inbar, Y., Pizarro, D., Iyer, R., & Haidt, J. (2012). Disgust sensitivity, political conservatism, and voting. Social Psychological and Personality Science, 3(5), 537–544. Ingraham, C. (2014, May 28). Congressional gridlock has doubled since the 1950s. The Washington Post. https://www.washingtonpost.com/news/wonk/wp/2014/05/ 28/congressional-gridlock-has-doubled-since-the-1950s/ Iyengar, S., & Hahn, K. S. (2009). Red media, blue media: Evidence of ideological selectivity in media use. Journal of Communication, 59(1), 19–39. Iyengar, S., Lelkes, Y., Levendusky, M., Malhotra, N., & Westwood, S. J. (2019). The origins and consequences of affective polarization in the United States. Annual Review of Political Science, 22, 129–146. Iyengar, S., & Westwood, S. J. (2015). Fear and loathing across party lines: New evidence on group polarization. American Journal of Political Science, 59(3), 690–707. Iyer, R., Koleva, S., Graham, J., Ditto, P., & Haidt, J. (2012). Understanding libertarian morality: The psychological dispositions of self-identified libertarians. PLoS ONE, 7(8), Article e42366. Janoff-Bulman, R., & Carnes, N. C. (2013). Surveying the moral landscape. Personality and Social Psychology Review, 17(3), 219–236. Janoff-Bulman, R., & Carnes, N. C. (2016). Social justice and social order: Binding moralities across the political spectrum. PLoS ONE, 11(3), Article e0152479. Jones, D. R. (2015). Declining trust in Congress: Effects of polarization and consequences for democracy. Forum (Germany), 13(3), 375–394. Jost, J. T. (2017). Ideological asymmetries and the essence of political psychology. Political Psychology, 38(2), 167–208. Judge, M., Fernando, J. W., & Begeny, C. T. (2022). Dietary behaviour as a form of collective action: A social identity model of vegan activism. Appetite, 168, Article 105730. Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press. Kalla, J. L., & Broockman, D. E. (2020). Reducing exclusionary attitudes through interpersonal conversation: Evidence from three field experiments. American Political Science Review, 114(2), 410–425.


Kalmoe, N. P., & Mason, L. (2022). Radical American partisanship: Mapping violent hostility, its causes, and the consequences for democracy. University of Chicago Press. Koleva, S. P., Graham, J., Iyer, R., Ditto, P. H., & Haidt, J. (2012). Tracing the threads: How five moral concerns (especially purity) help explain culture war attitudes. Journal of Research in Personality, 46(2), 184–194. Kovacheff, C., Schwartz, S., Inbar, Y., & Feinberg, M. (2018). The problem with morality: Impeding progress and increasing divides. Social Issues and Policy Review, 12(1). Kubin, E., Puryear, C., Schein, C., & Gray, K. (2021). Personal experiences bridge moral and political divides better than facts. Proceedings of the National Academy of Sciences, 118(6), Article e2008389118. Kugler, M., Jost, J. T., & Noorbaloochi, S. (2014). Another look at moral foundations theory: Do authoritarianism and social dominance orientation explain liberalconservative differences in “moral” intuitions? Social Justice Research, 27(4), 413–431. Lees, J., & Cikara, M. (2020). Inaccurate group meta-perceptions drive negative out-group attributions in competitive contexts. Nature Human Behaviour, 4(3), 279–286. Levendusky, M. S. (2018). Americans, not partisans: Can priming American national identity reduce affective polarization? Journal of Politics, 80(1), 59–70. Liu, B., & Ditto, P. H. (2012). What dilemma? Moral evaluation shapes factual belief. Social Psychological and Personality Science, 4(3), 316–323. Malle, B. F., Voiklis, J., & Kim, B. (2018). Understanding contempt against the background of blame. In M. Mason (Ed.), The moral psychology of contempt (pp. 79–105). Rowman & Littlefield International Ltd. Marques, J. M., Yzerbyt, V. Y., & Leyens, J. P. (1988). The “black sheep effect”: Extremity of judgments towards ingroup members as a function of group identification. European Journal of Social Psychology, 18(1), 1–16. McAdams, D. P., Albaugh, M., Farber, E., Daniels, J., Logan, R. L., & Olson, B. (2008). Family metaphors and moral intuitions: How conservatives and liberals narrate their lives. Journal of Personality and Social Psychology, 95(4), 978–990. McCoy, J., Rahman, T., & Somer, M. (2018). Polarization and the global crisis of democracy: Common patterns, dynamics, and pernicious consequences for democratic polities. American Behavioral Scientist, 62(1), 16–42. McCoy, J., Simonovits, G., & Levente, L. (2020). Democratic hypocrisy: Polarized citizens support democracy-eroding behavior when their own party is in power. APSA Preprints. Merritt, A. C., Effron, D. A., & Monin, B. (2010). Moral self-licensing: When being good frees us to be bad. Social and Personality Psychology Compass, 4(5), 344–357. Merritt, A. C., & Monin, B. (2011). The trouble with thinking: People want to have quick reactions to personal taboos. Emotion Review, 3(3), 318–319. Minson, J. A., & Dorison, C. A. (2022). Why is exposure to opposing views aversive? Reconciling three theoretical perspectives. Current Opinion in Psychology, 47, Article 101435.


Moore-Berg, S. L., Ankori-Karlinsky, L. O., Hameiri, B., & Bruneau, E. (2020). Exaggerated meta-perceptions predict intergroup hostility between American political partisans. Proceedings of the National Academy of Sciences of the United States of America, 117(26), 14864–14872. Morgan, G. S. (2011). Toward a model of morally motivated behavior: Investigating mediators of the moral conviction-action link [Unpublished doctoral dissertation]. University of Illinois at Chicago. Motyl, M., Iyer, R., Oishi, S., Trawalter, S., & Nosek, B. A. (2014). How ideological migration geographically segregates groups. Journal of Experimental Social Psychology, 51, 1–14. Motyl, M., Prims, J. P., & Iyer, R. (2020). How ambient cues facilitate political segregation. Personality and Social Psychology Bulletin, 46(5), 723–737. Neubaum, G., Cargnino, M., Winter, S., & Dvir-Gvirsman, S. (2021). “You’re still worth it”: The moral and relational context of politically motivated unfriending decisions in online networks. PLoS ONE, 16(1), Article e0243049. Pettigrew, T. F., & Tropp, L. R. (2006). A meta-analytic test of intergroup contact theory. Journal of Personality and Social Psychology, 90(5), 751–783. Pew Research Center. (2014, June 12). Political polarization in the American public [Report]. https://www.pewresearch.org/politics/2014/06/12/political-polariza tion-in-the-american-public/ Rhee, J. J., Schein, C., & Bastian, B. (2019). The what, how, and why of moralization: A review of current definitions, methods, and evidence in moralization research. Social and Personality Psychology Compass, 13(12), Article e12511. Robbins, P., & Shields, K. (2014). Explaining ideology: Two factors are better than one. Behavioral and Brain Sciences, 37(3), 326–328. Rodriguez, C. G., Moskowitz, J. P., Salem, R. M., & Ditto, P. H. (2017). Partisan selective exposure: The role of party, ideology and ideological extremity over time. Translational Issues in Psychological Science, 3(3), 254–271. Rothschild, Z. K., & Keefer, L. A. (2017). A cleansing fire: Moral outrage alleviates guilt and buffers threats to one’s moral identity. Motivation and Emotion, 41(2), 209–229. Rozin, P. (1999). The process of moralization. Psychological Science, 10(3), 218–221. Sabucedo, J. M., Dono, M., Alzate, M., & Seoane, G. (2018). The importance of protesters’ morals: Moral obligation as a key variable to understand collective action. Frontiers in Psychology, 9, Article 418. Sassenrath, C., Hodges, S. D., & Pfattheicher, S. (2016). It’s all about the self: When perspective taking backfires. Current Directions in Psychological Science, 25(6), 405–410. Saveski, M., Gillani, N., Yuan, A., Vijayaraghavan, P., & Roy, D. (2021). Perspectivetaking to reduce affective polarization on social media. arXiv preprint arXiv:2110.05596. Sawaoka, T., & Monin, B. (2018). The paradox of viral outrage. Psychological Science, 29(10), 1665–1678. Schein, C., & Gray, K. (2015). The unifying moral dyad: Liberals and conservatives share the same harm-based moral template. Personality and Social Psychology Bulletin, 41(8), 1147–1163.


Schein, C., & Gray, K. (2017). The theory of dyadic morality: Reinventing moral judgment by redefining harm. Personality and Social Psychology Review, 22(1), 32–70. Skitka, L. J., Hanson, B. E., Morgan, G. S., & Wisneski, D. C. (2021). The psychology of moral conviction. Annual Review of Psychology, 72, 347–366. Skitka, L. J., Morgan, G. S., & Wisneski, D. C. (2015). Political orientation and moral conviction: A conservative advantage or an equal opportunity motivator of political engagement? In J. P. Forgas, K. Fiedler, & W. D. Crano (Eds.), Social psychology and politics (pp. 57–74). Psychology Press. Skitka, L. J., & Tetlock, P. E. (1993). Providing public assistance: Cognitive and motivational processes underlying liberal and conservative policy preferences. Journal of Personality and Social Psychology, 65(6), 1205–1223. Smith, B. G., Krishna, A., & Al-Sinan, R. (2019). Beyond slacktivism: Examining the entanglement between social media engagement, empowerment, and participation in activism. International Journal of Strategic Communication, 13(3), 182–196. Spring, V. L., Cameron, C. D., & Cikara, M. (2018). The upside of outrage. Trends in Cognitive Sciences, 22(12), 1067–1069. Ståhl, T., Zaal, M. P., & Skitka, L. J. (2016). Moralized rationality: Relying on logic and evidence in the formation and evaluation of belief can be seen as a moral issue. PLoS ONE, 11(11), Article e0166332. Stanley, M. L., Whitehead, P. S., Sinnott-Armstrong, W., & Seli, P. (2020). Exposure to opposing reasons reduces negative impressions of ideological opponents. Journal of Experimental Social Psychology, 91, Article 104030. Tetlock, P. E. (2003). Thinking the unthinkable: Sacred values and taboo cognitions. Trends in Cognitive Sciences, 7(7), 320–324. Todd, A. R., & Galinsky, A. D. (2014). Perspective-taking as a strategy for improving intergroup relations: Evidence, mechanisms, and qualifications. Social and Personality Psychology Compass, 8(7), 374–387. Tracy, J. L., Steckler, C. M., & Heltzel, G. (2019). The physiological basis of psychological disgust and moral judgments. Journal of Personality and Social Psychology, 116(1), 15–32. Ugarriza, J. E., & Nussio, E. (2017). The effect of perspective-giving on postconflict reconciliation. An experimental approach. Political Psychology, 38(1), 3–19. Uhlmann, E. L., Pizarro, D. A., Tannenbaum, D., & Ditto, P. H. (2009). The motivated use of moral principles. Judgment and Decision Making, 4(6), 479–491. van Zomeren, M., Postmes, T., Spears, R., & Bettache, K. (2011). Can moral convictions motivate the advantaged to challenge social inequality? Extending the social identity model of collective action. Group Processes and Intergroup Relations, 14(5), 735–753. Viciana, H., Hannikainen, I. R., & Gaitán Torres, A. (2019). The dual nature of partisan prejudice: Morality and identity in a multiparty system. PLoS ONE, 14(7), Article e0219509. Voelkel, J. G., Ren, D., & Brandt, M. J. (2021). Inclusion reduces political prejudice. Journal of Experimental Social Psychology, 95, Article 104149. Voelkel, J. G., Chu, J., Stagnaro, M. N., Mernyk, J. S., Redekopp, C., Pink, S. L., Druckman, J. N., Rand, D. G., & Willer, R. (2023). Interventions reducing


affective polarization do not necessarily improve anti-democratic attitudes. Nature Human Behaviour, 7(1), 55–64. Vorauer, J. (2013). The case for and against perspective-taking. In J. M. Olson & M. P. Zanna (Eds.), Advances in experimental social psychology (pp. 59–115). Elsevier Academic Press. Wang, Y. A., & Todd, A. R. (2020). Evaluations of empathizers depend on the target of empathy. Journal of Personality and Social Psychology, 121(5), 1005–1028. White, S., Schroeder, J., & Risen, J. L. (2021). When “enemies” become close: Relationship formation among Palestinians and Jewish Israelis at a youth camp. Journal of Personality and Social Psychology, 121(1), 76–94. Xu, M., & Petty, R. E. (2021). Two-sided messages promote openness for morally based attitudes. Personality and Social Psychology Bulletin, 48(8), 1151–1166. Yang, J. H., Rojas, H., Wojcieszak, M., Aalberg, T., Coen, S., Curran, J., Hayashi, K., Iyengar, S., Jones, P. K., Mazzoleni, G., Papathanassopoulos, S., Rhee, J. W., Rowe, D., Soroka, S., & Tiffen, R. (2016). Why are “others” so polarized? Perceived political polarization and media use in 10 countries. Journal of Computer-Mediated Communication, 21(5), 349–367. Yeomans, M. (2021). The straw man effect: Partisan misrepresentation in natural language. Group Processes & Intergroup Relations, 25(2), 1904–1927. Yeomans, M., Minson, J., Collins, H., Chen, F., & Gino, F. (2020). Conversational receptiveness: Improving engagement with opposing views. Organizational Behavior and Human Decision Processes, 160, 131–148. Zaal, M. P., Van Laar, C., Ståhl, T., Ellemers, N., & Derks, B. (2011). By any means necessary: The effects of regulatory focus and moral conviction on hostile and benevolent forms of collective action. British Journal of Social Psychology, 50(4), 670–689.

23 Moral and Religious Systems
Benjamin Grant Purzycki and Theiss Bendixen

Unraveling the relationship between morality and religion remains a central form of inquiry in the scientific study of human culture and society. Ongoing research suggests that the relationship is more nuanced, complicated, and conditional on many factors than popularly recognized (for various views, see Bloom, 2012; Galen, 2012; Graham & Haidt, 2010; McKay & Whitehouse, 2014; Teehan, 2016). In this chapter, we explore the landscape of current thinking and research on the topic and review various positions on the relationship between morality and religion. We structure our chapter as follows. In Section 23.1.1, we offer some general definitions and, for the sake of conceptual clarity, point to ways in which contemporary research could be better aligned. Then, in Section 23.1.2, we outline a few of the most pressing contemporary problems in the current evolutionary and cognitive sciences of morality and religion. Following this, in Section 23.2, we review an array of proposed evolutionary foundations of religious thought and behavior and briefly discuss the current climate of the corollary literature about morality. On this basis, we evaluate in Section 23.3 current attempts to account for the connection between these two domains. In this section, we also suggest ways out of the wilderness and develop an informal cultural evolutionary framework with which to fruitfully investigate the complicated connections between religion and morality.

23.1 Definitions and Contemporary Problems

23.1.1 Conceptual Space and Terminology

To begin, let us define our central concepts. We treat both “morality” and “religion” systemically, that is, each is composed of causally interconnected components that produce some output (Meadows, 2008; von Bertalanffy, 1968). As such, without further specification, we use “morality” and “religion” to mean moral and religious systems, respectively. The general question that drives much of contemporary research is the degree to which these systems overlap and influence each other. We define religious systems as shared beliefs in spiritual or supernatural agents (e.g., gods and ghosts) or processes (e.g., karma or mana), shared


behaviors done with appeals to them (Jensen, 2019; Tylor, 1920; cf. Purzycki & Sosis, 2022), and the dynamic relationship between them. This is a broad and cross-culturally inclusive definition and therefore does not limit the focus to relatively more organized, formalized, dogmatic, or doctrinal systems (cf., Boyer, 2018, p. 21; Sperber, 2018). It also stresses a supernatural element and hence excludes other domains of “ultimate concern” (Tillich, 1957) such as politics, music, and sports from its conceptual space. Moreover, this definition also stresses religious behavior and therefore avoids the ethnocentric assumption of treating religion simply as faith or belief in supernatural agents (see Cohen et al., 2003; Kavanagh & Jong, 2020). We define moral systems as ideas and behaviors associated with norms (i.e., content) of interpersonal social behaviors. These behaviors impose benefits (the “good”) or costs (“bad”) on others (Alexander, 1987; Curry et al., 2019; Graham et al., 2013; Purzycki, Pisor, et al., 2018) and their corollary behaviors (or absence thereof ). Moral systems’ constituent parts include: 1) intuitions (i.e., judgments about the “good” and “bad”); 2) reasoning (i.e., the process by which one answers questions like “do I think x is good or bad?” or “which is less immoral, x or y?”); 3) models (e.g., “bad behaviors are x, y, z. . .”); 4) culture (e.g., “we think x, y, z. . .” or “population Q thinks x, y, z are bad”); and 5) behavior (i.e., actions judged as morally good or bad). Like our conception of religion, this definition stresses the behavioral aspect of morality and its dynamic relationship with individual and group values (see Box 23.1 and McKay & Whitehouse, 2014, for further discussion).

23.1.2 Some Problems in the Study of Morality and Religion

Contemporary research asks: Where do morality and/or religion come from? Do they relate to each other? If so, how? Answers, of course, come from all quarters. For instance, both believers and nonbelievers embedded in the Abrahamic traditions claim that religion and morality are fundamentally intertwined and that religion is “about” morality (see Abrams, 2022; Dennett, 2006, ch. 8), often even claiming that humanity’s collective moral standards were passed down from God or Holy Scripture and that the world would descend into chaos without religion upholding those standards. In active research, the main questions driving the field are: 1) What best accounts for religion/morality: cognitive processes or social learning? 2) Does religion get us to behave in ways that count as “moral”? 3) Are aspects of morality necessary for religious beliefs? 4) What best accounts for religions that are explicitly associated with morality? We address these questions in the present chapter.

23.2 Evolutionary Foundations

In this section, we first discuss various evolved psychological mechanisms posited to undergird humans’ propensities to be religious and moral,


Box 23.1 Complexities in Defining “Morality” and “Religion”

• Whose “morality” or “religion” are we talking about? As with any complex notion, definitions abound. Very obviously, what we might theoretically define as “morality” or “religion” is likely to diverge from local models of the same concepts (Purzycki, Pisor, et al., 2018), if they are even available (see Harris, 1976 on the important distinction between emic and etic perspectives). As we have already noted, we use theoretical definitions of each.
• Where does “morality” lie? Beliefs about morality (i.e., moral values) are analytically divorced from moral behaviors. For example, some traits are functionally moralistic in the sense that they benefit or harm someone else, but might not be explicitly or intuitively moralized in the sense that they are perceived as what one ought to do toward others (Purzycki, 2013; Teehan, 2016). In a similar vein, one might explicitly describe a particular behavior as “good” to someone else but never engage in the behavior.
• When does “morality” occur? Aspects of morality are often situational and context- and time-dependent. Our cold, reflective moral reasoning about the justifications of particular behaviors might yield different conclusions than our hot, intuitive conclusions about the same behaviors when they actually happen to us (see Evans & Rand, 2019). Similarly, why some behaviors are “moral” in one context might have deeper historical or adaptive reasons than simply localized mechanistic or developmental ones (see Mayr, 1961; Tinbergen, 1963).
• What analytical level of moral or religious systems are we addressing? Vacillating between group-level and individual-level morality complicates discussions. For example, when attempting to address the global ubiquity of the so-called moralistic religions (i.e., those that explicitly endorse moral guidelines and/or include moralistically punitive deities), testing hypotheses with individual-level experiments is not necessarily or obviously addressing the target question about “moral religions.” Similarly, when framing a study with appeals to theories that make predictions about individual-level processes, one should carefully consider whether the use of group-level data (e.g., cultural or national-level data) is appropriate.
• To whom does “morality” apply? Sometimes, issues of moral scope or breadth are left without explicit clarification. For example, some maintain the classical Kantian (Kant, 1785/1997) view that the “moral” refers to universally pre- and proscribed behaviors while others might assume that the “moral” implies any normative behavior with a cost or benefit to others (H. C. Barrett et al., 2016). Either explicitly or implicitly, moralizing may be universal or only directed parochially toward one’s in-group and community (see Pisor & Ross, 2024).

focusing on mechanisms responsible for agency detection and how these systems make representing gods possible. We then discuss various views that endorse the possibility that human morality is a system dedicated to the reduction of selfishness and increasing the likelihood of altruistic behavior. To frame the subsequent discussion of the relationship between religious and moral systems, we discuss various approaches in the social sciences.

23.2.1 Agency-Detection and Religious Beliefs

There is broad scholarly agreement that religious thought and behavior rely on evolved cognitive foundations that reliably develop in humans. There is less agreement, however, on the specific cognitive mechanisms that constitute these foundations. One mechanism nevertheless plays a major role in most cognitive and evolutionary accounts of religion, namely, the ability to represent other minds.


Humans are equipped with propensities – most likely unrivaled in the animal kingdom – for detecting and inferring the existence and content of other minds (Call & Tomasello, 2008; Penn & Povinelli, 2007; Premack & Woodruff, 1978). This so-called theory of mind system, also known as the mentalizing system, allows people to make sense of and predict intentions and behavioral patterns of other individuals (Baron-Cohen, 1995; Veissière et al., 2020). Presumably, such capabilities have been favored by selective pressures during hominid evolution in a dynamic relationship with increased group size and social complexity, which in turn may have presented both opportunities and challenges for higher-level flexible social cognition, including Machiavellian tactics and sophisticated social learning (Dunbar & Shultz, 2007, 2017; Markov & Markov, 2020; Muthukrishna et al., 2018; van Schaik & Burkart, 2011; van Schaik et al., 2012). Indeed, the human propensity for detecting and inferring other minds appears relatively “oversensitive” as it sometimes extends to domains where minds are not present, such as trees, fluids, and abstractions (Heider & Simmel, 1944). On this basis, many cognitive and evolutionary approaches to religion assume that beliefs about and acts dedicated to supernatural agents partly originate from these mentalizing capabilities (e.g., Atran & Henrich, 2010; J. L. Barrett, 2000; J. L. Barrett & Richert, 2003; Bering, 2006; Boyer, 2001; Guthrie, 1980, 1995; Norenzayan et al., 2016a; Peoples et al., 2016; for an alternative perspective, see Andersen, 2019). These capabilities allow us to imagine, infer, and articulate the desires and perceptions of spiritual agents. In a similar way, many current researchers approach morality as a set of evolved, foundational cognitive systems. What distinguishes research into these domains is that while virtually no one claims that mentalizing evolved for believing in gods, many suggest that cognitive systems evolved for – or even are – morality (e.g., Baumard, 2016; Greene, 2013). We discuss these hypothesized cognitive systems in Section 23.2.2.

23.2.2 Morality as Cognitive Machinery

While the enormity of the evolutionary literature on morality implies its diversity, there are some core themes and elements that cross-cut topics. These stem from the evolution of cooperation literature, which generally asks: Why would an individual engage in costly acts that benefit others? The standard way to begin answering this question is by appealing to kin selection, or the preferential investment in those more closely related to you. In evolutionary terms, individuals are more inclined to engage in costly behavior when it benefits closer relations because it still benefits them genetically. This leads to the question: Why would nonkin engage in cooperative behavior toward others if they do not share genes to benefit? The typical response is that some form of “reciprocal altruism” (Axelrod, 1984; Trivers, 1971) can be adaptive insofar as individuals will punish those who cheat them and reciprocate after others invest in them. Such reciprocity is relatively rare in the natural world, however, and humans are among the most adept at designing ways to ensure various forms of reciprocity.


Table 23.1 Prisoner’s dilemma payoff matrix

                          Player 2
                          C            D
Player 1       C          b − c        −c
               D          b            0

Note. Noted values are for Player 1. C refers to cooperative strategy, D refers to defecting strategy, b represents benefits gained while c refers to costs incurred.

As many others before us, we illustrate the problem of morality and cooperation using the game-theoretic prisoner’s dilemma, “an abstract formulation of some very common and very interesting situations in which what is best for each person individually leads to mutual defection, whereas everyone would have been better off with mutual cooperation” (Axelrod, 1984, p. 9). Touted as “the purest expression of the conflict between individual and group interests” (McElreath & Boyd, 2008, p. 72), the prisoner’s dilemma clearly points to features of morality and the evolution of cooperation. We therefore use it in this chapter to illustrate how various approaches address the relationship between morality and religion. Table 23.1 represents the basic dilemma in the form of a payoff matrix. Here, b and c refer to benefits and costs, respectively. In this dilemma, you can either cooperate (C) or defect (D). Assuming we are Player 1, we can only choose between these two options, but what we end up getting depends on what Player 2 does. The best-case scenario for us is defecting when the other player cooperates. If we both defect, there is relatively no gain or loss, and if we cooperate and the other player defects, we are the worst off. In this game, without further specifications like relatedness or reciprocal strategies, it is always better to defect. Let’s say you have a population of individuals who always cooperate and some who always defect. If p is the proportion of cooperators in the population and 1 − p is the remaining proportion of defectors, then the total payoff for those who always cooperate is the probability of interacting with a cooperator times the payoff, p(b − c), plus the probability of interacting with a defector times that payoff, (1 − p)(−c), for a total payoff of pb − c. The payoff for individuals who always defect, then, would be pb + (1 − p) · 0 = pb. As pb − c < pb, defectors will always outcompete cooperators. How, then, do we avoid the world engulfed by avarice that this model predicts? Thankfully, the world in which we live is not this bleak, simplified world (Johnson et al., 2002). We therefore require some account of 1) mechanisms that offset the probability of choosing defection, 2) means to prevent defectors from entering the playing field, 3) strategies that would outcompete defectors, or 4) ways to alter the payoff structure entirely. As already discussed, researchers posit that a variety of mechanisms such as kinship and reciprocity overturn the draw of defection.
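To make this arithmetic concrete, here is a minimal Python sketch (our own illustration, not part of the chapter) that computes the expected payoffs for unconditional cooperators and defectors under the payoffs in Table 23.1; for any benefit b > 0, cost c > 0, and cooperator proportion p, defectors earn exactly c more than cooperators, so defection dominates.

def expected_payoffs(b, c, p):
    """Expected payoffs in a population with proportion p of unconditional
    cooperators, using the Player 1 payoffs from Table 23.1:
    C vs. C = b - c, C vs. D = -c, D vs. C = b, D vs. D = 0."""
    cooperator = p * (b - c) + (1 - p) * (-c)  # simplifies to p*b - c
    defector = p * b + (1 - p) * 0             # simplifies to p*b
    return cooperator, defector

# Example: benefit 3, cost 1, 60% cooperators in the population.
coop, defect = expected_payoffs(b=3.0, c=1.0, p=0.6)
print(round(coop, 2), round(defect, 2))  # 0.8 1.8 -- defection dominates, as in the text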


Recent movements in the evolutionary psychology of morality explicitly link the problem of cooperation with morality (for critical discussion, see Baumard et al., 2013; Curry, 2016). For example, Greene (2013, p. 23) writes: “Morality is a set of psychological adaptations that allow otherwise selfish individuals to reap the benefits of cooperation” and “[t]he essence of morality is altruism, unselfishness, a willingness to pay a personal cost to benefit others.” Another example comes from Baumard (2016, pp. 72–73) who emphasizes the mutualistic aspects of morality and argues that “the selective pressures that explain the emergence of a moral sense push [individuals] toward considering each person’s interests impartially.” Such approaches focus on mental machinery that increase the likelihood of overcoming selfish behavior. According to these views, then, morality and the “moral sense” are mechanisms that make people cooperate. As we discuss in Section 23.3, some suggest that elements of religions function to galvanize such systems that contribute to increasing the likelihood that individuals will cooperate. The story becomes a little more complicated when we examine the range of complexity in human societies. While these aforementioned mechanisms generally hold for small-scale social organizations, humans have also created large interconnected social systems of nonkin who cannot possibly directly reciprocate. For some, this fact underscores limitations to classical mechanisms of cooperation; because humans are “hypersocial” and invest in many anonymous individuals who can never reciprocate, some other mechanisms are required to account for this level of sociality. Researchers have proposed a host of mechanisms to account for human levels of costly social behavior, including conflict, punishment, and social norms (e.g., Fehr & Fischbacher, 2004; Fehr et al., 2002; Richerson & Boyd, 1999; Turchin et al., 2013). When it comes to religion, current debates revolve around whether certain traditions can contribute to increased complexity or if certain aspects of religious systems are better thought of as responses to social dilemmas and problems, including those particular to social complexity. As these debates tend to revolve around the persuasion of evolutionary social science of those involved, we briefly situate these views in their varied backgrounds.

23.2.3 Evolutionary Approaches to Social Life

Broadly speaking, there are three general approaches in the evolutionary social sciences (Smith, 2000): evolutionary psychology, dual-inheritance, and behavioral ecology. In their pursuit of understanding human behavior and culture, evolutionary psychologists tend to emphasize evolved psychology (see Section 23.2). In the case of religion, the reigning evolutionary psychological view is that while we have cognitive systems selected for moral behavior, religious concepts and behaviors are simply extensions of these and other evolved traits (e.g., Boyer, 2001). Dual-inheritance theory stresses the importance of both genetic and cultural transmission as lifting the bulk of explanatory weight for human social life. In the

Moral and Religious Systems

field of religion and morality, it tends to emphasize norms and institutions and much of the work branded as dual-inheritance in this sense focuses primarily on explaining the aforementioned problem of human hypersociality rather than religion per se (Atran & Henrich, 2010). In contrast to these approaches, human behavioral ecology focuses on optimal adaptive behaviors that provide benefits for individuals and tends to ignore mental processes and cultural transmission (Shaver & Sosis, 2014; Sosis & Bulbulia, 2011). Arguably, all three of these approaches to understanding cooperation are inherently ecological as they focus on the distribution of energy throughout social systems with an eye toward the cost-benefit trade-offs. In practice, virtually no one argues that humans have a specifically evolved predisposition for producing religious beliefs and practices. But there are many influential arguments that posit that religious beliefs and practices are easier to learn because they exploit evolved cognitive systems. Content-biased cultural transmission (Richerson & Boyd, 2005) is the process by which beliefs or traditions spread and stabilize in a population because they are inherently more “attractive” (Sperber, 1996) than others, as they resonate with deeper evolved psychological systems. For example, moral norms might be easier to learn because violating them is costly or that they resonate with evolved moral cognition. In addition to our social cognitive systems, a range of content biases have been proposed as cognitive foundations for religious thought, including mentalizing. In other words, cultural concepts that activate this mentalizing machinery may enjoy advantages in terms of memorability and transmissibility compared to cultural concepts that do not (Mesoudi et al., 2006). This enables the genesis and diffusion of mental representations of nonphysical entities with their own wants, needs, knowledge, intentions, etc., such as gods, ghosts, and spirits. In Section 23.3.3, we discuss moral cognition as another potential content bias of spiritual agents.

23.3 Accounting for the Relationship between Morality and Religion 23.3.1 Supernatural Monitoring, Punishment, and Cooperation What has become known as the "supernatural punishment hypothesis" (Johnson, 2005, 2016) posits that a suite of mechanisms that contribute to cooperation evolved in competition with manipulative Machiavellian behavior motivated by selfishness. The evolutionary story of supernatural punishment goes as follows. In hominids' distant past, agency detection (see Section 23.2.1) proliferated as it ostensibly facilitated anticipating the behaviors of other entities. This aptitude might have contributed to our mastery of hunting and propelled our sociality in unprecedented ways. However, once hominids could anticipate others' mental states, they could also manipulate them for their own benefit at others' expense. This opens up a new avenue for problems associated with cooperation and coordination. To reduce the appeal of defection, a sensitivity toward commitment to a being that monitors your behavior and can punish you may have evolved. A simple model stipulates the conditions under which such "god-fearing" (i.e., the psychology that supports deference to dominant agents who are not obviously there) would outcompete Machiavellian strategies. Specifically, it posits that when the product of the probability, p, and the costs, c, of getting caught defecting outweighs the benefits, m, that could have been reaped by defecting – that is, when pc > m – god-fearing will evolve. In other words, wherever the costs of punishment and effective monitoring outweigh whatever benefits one would have gotten by being bad, god-fearing will eventually become widespread in a population. While the supernatural punishment model addresses the contexts in which god-fearing will outcompete self-interested strategies, it takes only a little more effort to appreciate the implications it has for cooperation more generally. Accepting that a god can watch and punish you increases the product pc. In a social dilemma like the prisoner's dilemma, this mechanism might propel individuals toward taking the cooperative option (i.e., forgoing m). Appeals to gods' punishment might also alter the perceived payoffs for defecting. For example, while b < c in Table 23.1, god-fearing might redefine this inequality to b > c, where the benefits of cooperation outweigh the spiritual costs of defecting. We can also achieve this by adding a penalty for defecting (e.g., b − p and −p for the payoffs in the second row of the table). In this case, cooperators would outcompete defectors whenever p > c. Other possibilities abound (see the discussion among Johnson, 2011; Lane, 2018; and Schloss & Murray, 2011, for a more elaborate simulation), but this example illustrates how supernatural punishment might adaptively avoid the costs of actual punishment. Assuming there is always some variation between (and within) individuals, we should expect that harnessing aspects of god-fearing will reduce self-interested behaviors both situationally and longitudinally. There is considerable evidence showing that belief in and/or the primed threat of spiritual observers can alter individual performance in economic game experiments in the predicted manner of reducing selfish behavior, at least among coreligionists (e.g., Lang et al., 2019; McKay et al., 2011; McNamara & Henrich, 2018; McNamara et al., 2016; Piazza et al., 2011; Purzycki et al., 2016; Rand et al., 2014; Shariff & Norenzayan, 2007, 2011). This literature tends to focus on whether or not variation in explicit beliefs, or the priming of psychological systems associated with spiritual punishment, induces cooperative, "moral" behavior and/or reduces selfish, "immoral" behavior. As we now turn to discuss, considerable research also shows that engaging in ritual behaviors associated with spirits and gods can induce the same effects for both observers and participants.
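To make the payoff logic concrete, here is a minimal numerical sketch (not from the chapter or the simulations cited above; the specific payoff values and function names are hypothetical). It simply compares the expected value of defecting (the temptation m minus the expected punishment pc) against cooperating, showing how a sufficiently high perceived probability or cost of supernatural punishment makes "god-fearing" the better strategy.

```python
# Toy illustration of the supernatural punishment condition pc > m.
# All numbers are hypothetical and chosen only for demonstration.

def expected_defection_payoff(m, p, c):
    """Defecting yields the extra benefit m, but with perceived probability p
    the defector is caught and punished at cost c."""
    return m - p * c

def god_fearing_wins(m, p, c):
    """God-fearing (cooperation) outcompetes defection when pc > m."""
    return p * c > m

if __name__ == "__main__":
    m = 2.0    # benefit forgone by cooperating (temptation to defect)
    c = 10.0   # cost of the expected (supernatural) punishment
    for p in (0.1, 0.3, 0.5):   # perceived probability of being caught
        payoff = expected_defection_payoff(m, p, c)
        print(f"p = {p:.1f}: expected defection payoff = {payoff:+.1f}; "
              f"cooperation favored: {god_fearing_wins(m, p, c)}")
```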

23.3.2 Religious and Moral Behaviors Religious ritual is directly implicated in the moral lives of people. For Roy Rappaport (1999), ritual "not only brings conventions into being but invests them with morality. Moral dicta are not explicit in all liturgies, but morality, like social contract, is implicit in ritual's structure" (p. 132). Here, ritual enacts or expresses convention but also endows conventions with the kind of obligatory prescriptions that come with the urgency of ritual participation and the salience of its ordered steps. Engaging in this suite of obligations, then, conveys to participants "that he or she accepts the order encoded in the ritual in which he or she is participating" (Rappaport, 1994, p. 339, emphasis in original). In this view, as ritual intrinsically contains the mores and rules of social life, engaging in ritual reaffirms this connection and conveys to others that the participant accepts those obligations as well. In other words, when people perform a ritual, they transmit a much wider range of information about conduct in social life than merely their beliefs or the fact that they are performing a ritual. Rather, they are conveying adherence to the greater moral expectations of their community. Rappaport's sentiments resonate with an important distinction raised by Teehan (2016), who argues that religions need not be explicitly about morality (e.g., with explicit rules or doctrines about morality and/or with gods believed to care about moral behavior) in order to be morally relevant in a more practical sense. For instance, cross-culturally, ritual participation is something gods are often concerned with (Bendixen & Purzycki, 2020; Bendixen et al., 2024; Purzycki & McNamara, 2016; Swanson, 1960). However, if the association between deities and ritual participation actually increases the frequency and effort with which one participates in rituals, and if – as a growing body of experimental work testifies – communal rituals can play a powerful role in strengthening social bonds, increasing prosociality, and signaling in-group membership (e.g., Fischer et al., 2013; Purzycki & Sosis, 2022; Reddish et al., 2014; Wiltermuth & Heath, 2009; Xygalatas et al., 2013), then beliefs in these deities certainly have some practical moral relevance for human relationships. Indeed, one such approach builds on Rappaport's views and reframes them in the context of evolutionary theory. Specifically, some posit that ritual can be a form of costly signaling inasmuch as the costs of rituals reliably convey one's commitment to the general mores of the community. Others have found that engagement in costly rituals elicits more cooperation in experiments (Soler, 2012) and more cooperative requests in social networks (Power, 2017a, 2017b), and predicts the longevity of communes (Sosis & Bressler, 2003) by effectively keeping out those who are unwilling to pay ritual costs, a mechanism that has been experimentally substantiated (Lang et al., 2022). Moreover, the presence and intensity of ritual costs have been found to covary with contexts such as territorial disputes and warfare, where the temptation to defect or leave the group is relatively high (Sosis et al., 2007). One stream of research finds that those who engage in rituals are perceived as more trustworthy generally (Purzycki & Arakchaa, 2013; Sosis, 2005; Tan & Vogel, 2008). While signaling trustworthiness is an effect of participation in rituals, gods are also interested in a host of behaviors other than ritual. Yet the behaviors they care about do appear to be those associated with social dilemmas (Bendixen & Purzycki, 2020; Purzycki & Sosis, 2022; Purzycki, Bendixen, et al., 2022). Take, for example, socially consuming alcohol. Refusing to drink with a friend who drinks might entail some social costs, while drinking together is often perceived as entailing considerably more benefits. In the Quran (Surah Al-Baqarah 2:219), drinking is explicitly framed as entailing more harm than benefit. Institutionalizing this Quranic passage and outright banning alcohol changes the payoff structure, making social drinking much riskier and more costly (Purzycki, Bendixen, et al., 2022). In other contexts, spirits are angered by over-exploiting resources and, in some cases, entire regions are off-limits to human use by virtue of their association with the gods (see Purzycki, 2011; S. Singh et al., 2017). In such cases, it might be tempting to hunt or take plant materials from such areas, even though persistent exploitation might lead to devastation. Populating a region with gods might suffice to maintain local biodiversity and thus sustain a mobile resource. To the extent that living up to these regulations conveys strong moral character, such cases further illustrate the close relationship between religion and morality.

23.3.3 Explaining the Ubiquity of "Moralistic Religious" Beliefs While Section 23.3.2 attended to the view that it is partly by virtue of religious beliefs and/or behaviors that humans are as cooperative as they are, other research focuses on a particular form of religion. These are the so-called moralistic religions, often treated in apposition to otherwise non- or submoralistic religions. Indeed, recent decades have seen a flurry of scientific attention to this particular construct, yet few researchers are clear on the critical questions that framed the current chapter. One view proposes that such traditions are extensions of certain reproductive strategies exerted on class-structured societies. Appealing to a popular – but not unproblematic (see Baldini, 2015; Nettle & Frankenhuis, 2020; Sear, 2020) – view of "life history theory," some (Baumard & Boyer, 2013; Baumard et al., 2015) have suggested that religions become explicitly associated with morality primarily because wealthy elites who opt for having fewer but higher-quality children "moralize" the behaviors of the poorer sectors of society who have more but lower-quality children. Others (Purzycki, Ross, et al., 2018) show that at the individual level (i.e., not the tradition or society level), despite there being an association between food security and number of children, there is no obvious relationship between commitment to moralistic traditions – that is, the degree to which individuals claim their gods care about morality – and food security. In this case, one group of researchers treats the concept of "moralistic traditions" as a tradition-level property, whereas others treat the concept as an individual-level variable. Another debate revolves around the relationship between social complexity and so-called moralistic religions (Purzycki & McKay, 2023). For decades, anthropologists investigated whether or not small-scale societies had gods that cared about how people treated each other or functioned as moral models to which individuals should aspire (Rappaport, 1979; Tylor, 1920). Largely due to the findings of anthropological fieldwork, Evans-Pritchard (1965) rendered the debate moribund. Some cross-cultural data sets (Boehm, 2008; Swanson, 1960) show that evidence of supernatural sanctions for immoral behavior is abundant in the ethnographic literature (see Figure 23.1; for more detailed analyses, see Lightner et al., in press). A crude analysis of Swanson's data shows that ethnographies of societies with fewer than 50 people have a 51 percent chance of mentioning moralistic supernatural punishment; those between 50 and 399 have a 68 percent chance of mentioning moralistic spiritual punishment. Yet the current mainstream view maintains that small-scale societies lacked gods that cared about morality or had traditions that were about morality (e.g., Baumard & Boyer, 2013; Norenzayan, 2013; cf. Beheim et al., 2019). Considering that the first attempt (Purzycki, 2011, 2013) to systematically and directly ask people how much they thought their traditional gods cared about and punished people for immoral conduct appeared only a decade ago (for more recent reports, see Bendixen et al., 2024; Purzycki, Willard, et al., 2022; M. Singh et al., 2021; Townsend et al., 2020), we wager that any strong conclusions are at best premature (Bendixen et al., 2023).

Figure 23.1 Reported presence of moralistic supernatural punishment across society size (left) and probability (logistic transformations of posterior estimates) of selected ethnographies reporting moralistic supernatural punishment with 95 percent credible intervals (right). Data are from Swanson (1960). Proportions are of each category on the x-axis of the left panel. Types of moralistic supernatural punishment include 1) health-related punishments, 2) punishments in the afterlife, and 3) unspecified "other." If present across any of these variables, a society got a score of MSP = 1. Society size categories are as follows: 0 = 1–49 people (n = 17); 1 = 50–399 (n = 13); 2 = 400–9,999 (n = 9); and 3 = 10,000+ (n = 10). Data and code can be accessed here: https://gist.github.com/bgpurzycki/4edc36a10a3d1ff4e6035a6ab463cee2.
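As a rough sketch of how such category-wise proportions can be computed from coded ethnographic records, the snippet below tallies the share of societies in each size category for which moralistic supernatural punishment (MSP) is coded as present. The records here are a small hypothetical stand-in, not the actual Swanson (1960) coding (which is available at the gist linked in the figure caption), and a fuller analysis would, as in Figure 23.1, estimate these probabilities with a logistic (e.g., Bayesian) model and report credible intervals.

```python
from collections import defaultdict

# Hypothetical stand-in for the Swanson (1960) coding: (size_category, msp_present),
# with size categories as in Figure 23.1 (0 = 1-49 people, 1 = 50-399,
# 2 = 400-9,999, 3 = 10,000+) and msp_present coded 0/1.
records = [
    (0, 1), (0, 0), (0, 1), (0, 0),
    (1, 1), (1, 1), (1, 0),
    (2, 1), (2, 1), (2, 0),
    (3, 1), (3, 1),
]

def msp_proportions(data):
    """Return the proportion of societies with MSP present, by size category."""
    counts = defaultdict(lambda: [0, 0])  # category -> [n_present, n_total]
    for category, present in data:
        counts[category][0] += present
        counts[category][1] += 1
    return {cat: n_present / n_total
            for cat, (n_present, n_total) in sorted(counts.items())}

if __name__ == "__main__":
    for category, proportion in msp_proportions(records).items():
        print(f"Size category {category}: MSP reported in {proportion:.0%} of societies")
```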


23.3.4 Skepticism about the Relationship between Morality and Religion Virtually all of these approaches posit that in some way or another, the relationship between morality and religion is both causal and measurable. However, some strains of thought deny that the relationship is as significant or informative as such studies suggest. For example, one idea in the literature is that spirit concepts are generally easier to retain and transmit partly because they are intuitively endowed with "socially strategic information" (Boyer, 2001; Purzycki et al., 2012). In other words, because people portray their gods and other spiritual agents as interested in some aspect of our behavior, they are intuitively associated with moral concerns, and this makes such concepts salient and therefore increases their perceived importance and transmissibility (i.e., content-biased transmission). There is some evidence that spiritual agents are intuitively associated with moral information (Purzycki et al., 2012; Purzycki, Willard, et al., 2022) and some evidence that socially relevant information is easier to retain, even in the context of experiments requiring participants to remember religious-like stimuli (e.g., Beebe & Duffy, 2020; Swan & Halberstadt, 2019, 2020). However, Boyer contends that this association between moral domains and religion is limited to this intuitiveness and that religion itself does not contribute to behavior that we might construe as "moral." A challenge to this view, though, is evidence indicating that moral intuitions triggered by spiritual agents can alter behavior. For example, Piazza et al. (2011) told a treatment group of children that a spirit by the name of "Princess Alice" frequented the lab. Although the children were not explicitly told that Princess Alice cares about or will punish misbehavior, these children were less likely to cheat in a virtually impossible game than the control group. In other words, the perception of god-like agents with underspecified moral interests can alter behavior in ways that count as "moral." Others call for greater conceptual and methodological precision in disentangling the possible relationships between religion and moral behavior (e.g., Bloom, 2012; McKay & Whitehouse, 2014; Teehan, 2016). For instance, Galen (2012) systematically reviewed the literature on "religious prosociality" across both naturalistic settings and various laboratory experimental paradigms, such as behavioral economic studies and priming. Among a long list of critical remarks, a key conclusion is that, while a body of work does indeed find evidence that religious people are more prosocial (e.g., in terms of cooperation, generosity, and sharing), this effect diminishes drastically – or is reversed – when the focal recipient does not share religious identity with the participants. Therefore, a distinction must be drawn between universal and parochial religious prosociality, at minimum (see also, e.g., Graham & Haidt, 2010; Lang et al., 2019; Norenzayan et al., 2016b), which in turn calls for serious consideration of contextual factors in doing such studies. Galen (2012) also argues that since much work depends on self-report, the literature is confounded by well-known problems with self-report measures such as demand characteristics, social desirability, and stereotype effects. In sum, some researchers maintain that the connection between religion and morality is less straightforward than often thought – or at least, in Galen's case, that much research on this topic is so riddled with contradictions that improving conceptual clarity and consistency is of utmost importance moving forward (see Box 23.1).

23.4 Conclusion In this chapter, we have surveyed some of the current thinking on the possible relationships between religious and moral systems. While some deny that there is a significant causal relationship between the two, others suggest that religious traditions include a host of mechanisms that can contribute to the reduction of immoral behavior. In terms of current debates, we have emphasized the need for greater conceptual clarity and consistency in the study of religion and morality, where central concepts often remain underspecified. For instance, careful consideration should be given to the definition of morality and religion (e.g., etic vs. emic perspectives) and whether the distinction matters for theories positing that moral norms play an important role in the evolution of social behavior, the type of morality in religion (e.g., gods' or doctrines' explicit associations vs. practically relevant moral religions), the cultural and ecological contexts of the religious and moral systems, as well as the religious identity of the focal individuals (e.g., universal vs. parochial religious prosociality). An outstanding challenge for research on this topic is data quality and attention to – and consistency with – differing analytical levels and timescales. For instance, it is an important priority to broaden the diversity of study samples in the social scientific literature (e.g., H. C. Barrett, 2020). However, many cross-cultural studies rely on national- or society-level data, sometimes coded from informal source material. Among other pitfalls (Purzycki & Watts, 2018; Watts et al., 2022), such as questionable coding rubrics or the accuracy of antiquated data culled by nonexperts, analyzing such data runs the risk of committing the ecological fallacy, namely generalizing from one level (e.g., factors of society) to another (e.g., individual psychology). Future research would do well to engage in cross-cultural and individual-level data collection (see, e.g., Lang et al., 2019; Purzycki, Henrich, et al., 2018; Purzycki, Pisor, et al., 2018). Further, longitudinal studies hold much promise, as they would allow researchers to disentangle the possible relationship between religious and moral sentiments across changing developmental, cultural, and ecological contexts rather than presuming a stable longitudinal relationship with only data from a one-shot, cross-sectional study. So, as the central question regarding the relationship between morality and religion is complex, the importance of how we approach the question is difficult to overstress. If moral systems – at their core – are the sociobiological means by which we interact with each other in beneficial and costly ways, much of the literature suggests that through supernatural punishment beliefs, behavioral prescriptions, rituals, social institutions, and so forth, religion consists of mechanisms that facilitate the proliferation of cooperation and the reduction of selfish acts. In this view, religion is undoubtedly associated with morality. If we, however, restrict our view of morality to the mental machinery responsible for cooperation, the details of the mechanics that religion galvanizes remain unclear. In recent decades, there has been considerable progress in addressing these questions in the social sciences. Increased clarity, precision, and consensus building will allow researchers to make even greater strides in the quest to make sense of why humans are as remarkably social as they are.

Acknowledgments We thank the Aarhus University Research Foundation for generous support. We also express our appreciation to Silke Atmaca and the research assistants at the Max Planck Institute for Evolutionary Anthropology’s Department of Human Behavior, Ecology, and Culture for entering the Swanson data used in this chapter. Many thanks to the editors and anonymous reviewers for their feedback.

References Abrams, S. (2022). Moralization of religiosity explains worldwide trends in religious belief [Doctoral dissertation]. The University of North Carolina at Chapel Hill. Alexander, R. D. (1987). The biology of moral systems. Aldine de Gruyter. Andersen, M. (2019). Predictive coding in agency detection. Religion, Brain & Behavior, 9(1), 65–84. Atran, S., & Henrich, J. (2010). The evolution of religion: How cognitive by-products, adaptive learning heuristics, ritual displays, and group competition generate deep commitments to prosocial religions. Biological Theory, 5(1), 18–30. Axelrod, R. (1984). The evolution of cooperation. Basic Books. Baldini, R. (2015). Harsh environments and “fast” human life histories: What does the theory say? bioRxiv, Article 014647. https://doi.org/10.1101/014647 Baron-Cohen, S. (1995). Mindblindness: An essay on autism and theory of mind. MIT Press. Barrett, H. C. (2020). Deciding what to observe: Thoughts for a post-WEIRD generation. Evolution and Human Behavior, 41(5), 445–453. Barrett, H. C., Bolyanatz, A., Crittenden, A. N., Fessler, D. M., Fitzpatrick, S., Gurven, M., Henrich, J., Kanovsky, M., Kushnick, G., Pisor, A., Scelza, B. A., Stich, S., von Rueden, C., Zhao, W., & Laurence, S. (2016). Small-scale societies exhibit fundamental variation in the role of intentions in moral judgment. Proceedings of the National Academy of Sciences, 113(17), 4688–4693. Barrett, J. L. (2000). Exploring the natural foundations of religion. Trends in Cognitive Sciences, 4(1), 29–34.


Barrett, J. L., & Richert, R. A. (2003). Anthropomorphism or preparedness? Exploring children’s God concepts. Review of Religious Research, 44(3), 300–312. Baumard, N. (2016). The origins of fairness: How evolution explains our moral nature. Oxford University Press. Baumard, N., André, J. B., & Sperber, D. (2013). A mutualistic approach to morality: The evolution of fairness by partner choice. Behavioral and Brain Sciences, 36(1), 59–78. Baumard, N., & Boyer, P. (2013). Explaining moral religions. Trends in Cognitive Sciences, 17(6), 272–280. Baumard, N., Hyafil, A., Morris, I., & Boyer, P. (2015). Increased affluence explains the emergence of ascetic wisdoms and moralizing religions. Current Biology, 25(1), 10–15. Beebe, J. R., & Duffy, L. (2020). The memorability of supernatural concepts: Effects of minimal counterintuitiveness, moral valence, and existential anxiety on recall. International Journal for the Psychology of Religion, 30(4), 322–341. Beheim, B., Atkinson, Q., Bulbulia, J., Gervais, W. M., Gray, R., Henrich, J., Lang, M., Monroe, M. W., Muthukrishna, M., Norenzayan, A., Purzycki, B. G., Shariff, A., Slingerland, E., Spicer, R., & Willard, A. K. (2019). Treatment of missing data determined conclusions regarding moralizing gods. Nature, 595(7866), E29–E34. Bendixen, T., Apicella, C. L., Atkinson, Q., Cohen, E., Henrich, J., McNamara, R. A., Norenzayan, A., Willard, A. K., Xygalatas, D., Purzycki, B. G. (2024). Appealing to the minds of gods: Religious beliefs and appeals correspond to features of local social ecologies. Religion, Brain and Behavior, 14(2), 183–205. Bendixen, T., Lightner, A., & Purzycki, B. G. (2023). The cultural evolution of religion and cooperation. In J. Tehrani, J. Kendal, & R. Kendal (Eds.), The Oxford handbook of cultural evolution. Oxford University Press. https://doi.org/10 .1093/oxfordhb/9780198869252.001.0001 Bendixen, T., & Purzycki, B. G. (2020). Peering into the minds of gods: What crosscultural variation in gods’ concerns can tell us about the evolution of religion. Journal for the Cognitive Science of Religion, 5(2), 142–165. Bering, J. M. (2006). The folk psychology of souls. Behavioral and Brain Sciences, 29(5), 453–462; discussion 462–498. Bloom, P. (2012). Religion, morality, evolution. Annual Review of Psychology, 63, 179–199. Boehm, C. (2008). A biocultural evolutionary exploration of supernatural sanctioning. In J. Bulbulia, R. Sosis, E. Harris, R. Genet, & K. Wyman (Eds.), Evolution of religion: Studies, theories, and critiques (pp. 143–152). Collins Foundation Press. Boyer, P. (2001). Religion explained: The evolutionary origins of religious thought. Basic Books. Boyer, P. (2018). Minds make societies: How cognition explains the world humans create. Yale University Press. Call, J., & Tomasello, M. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5), 187–192. Cohen, A. B., Siegel, J. I., & Rozin, P. (2003). Faith versus practice: Different bases for religiosity judgments by Jews and protestants. European Journal of Social Psychology, 33(2), 287–295.


Curry, O. S. (2016). Morality as cooperation: A problem-centred approach. In T. K. Shackelford & R. D. Hansen (Eds.), The evolution of morality (pp. 27–51). Springer. Curry, O., Whitehouse, H., & Mullins, D. (2019). Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies. Current Anthropology, 60(1), 47–69. Dennett, D. C. (2006). Breaking the spell: Religion as a natural phenomenon. Viking. Dunbar, R. I. M., & Shultz, S. (2007). Evolution in the social brain. Science, 317(5843), 1344–1347. Dunbar, R. I. M., & Shultz, S. (2017). Why are there so many explanations for primate brain evolution? Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 372(1727), Article 20160244. Evans, A. M., & Rand, D. G. (2019). Cooperation and decision time. Current Opinion in Psychology, 26, 67–71. Evans-Pritchard, E. E. (1965). Theories of primitive religion. Oxford University Press. Fehr, E., & Fischbacher, U. (2004). Third-party punishment and social norms. Evolution and Human Behavior, 25(2), 63–87. Fehr, E., Fischbacher, U., & Gächter, S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13(1), 1–25. Fischer, R., Callander, R., Reddish, P., & Bulbulia, J. (2013). How do rituals affect cooperation? An experimental field study comparing nine ritual types. Human Nature, 24(2), 115–125. Galen, L. W. (2012). Does religious belief promote prosociality? A critical examination. Psychological Bulletin, 138(5), 876–906. Graham, J., & Haidt, J. (2010). Beyond beliefs: Religions bind individuals into moral communities. Personality and Social Psychology Review, 14(1), 140–150. Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55–130. Greene, J. D. (2013). Moral tribes: Emotion, reason, and the gap between us and them. Penguin. Guthrie, S. E. (1980). A cognitive theory of religion. Current Anthropology, 21(2), 181–203. Guthrie, S. E. (1995). Faces in the clouds: A new theory of religion. Oxford University Press. Harris, M. (1976). History and significance of the emic/etic distinction. Annual Review of Anthropology, 5, 329–350. Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. The American Journal of Psychology, 57(2), 243–259. Jensen, J. S. (2019). What is religion? Routledge. Johnson, D. D. P. (2005). God’s punishment and public goods. Human Nature, 16(4), 410–446. Johnson, D. (2011). Why God is the best punisher. Religion, Brain & Behavior, 1(1), 77–84. Johnson, D. D. P. (2016). God is watching you: How the fear of God makes us human. Oxford University Press. Johnson, D. D., Stopka, P., & Bell, J. (2002). Individual variation evades the prisoner’s dilemma. BMC Evolutionary Biology, 2(1), 1–8.


Kant, I. (1997). Groundwork of the metaphysics of morals. Cambridge University Press. (Original work published 1785) Kavanagh, C. M., & Jong, J. (2020). Is Japan religious? Journal for the Study of Religion, Nature and Culture, 14(1), 152–180. Lane, J. (2018). Strengthening the supernatural punishment hypothesis through computer modeling. Religion, Brain & Behavior, 8(3), 290–300. Lang, M., Chvaja, R., Purzycki, B. G., Václavík, D., & Stanĕk, R. (2022). Advertising cooperative phenotype through costly signals facilitates collective action. Royal Society Open Science, 9(5), Article 202202. Lang, M., Purzycki, B. G., Apicella, C. L., Atkinson, Q. D., Bolyanatz, A., Cohen, E., Handley, C., Kundtová Klocová, E., Lesorogol, C., Mathew, S., McNamara, R. A., Moya, C., Placek, C. D., Soler, M., Vardy, T., Weigel, J. L., Willard, A. K., Xygalatas, D., Norenzayan, A., & Henrich, J. (2019). Moralizing gods, impartiality and religious parochialism across 15 societies. Proceedings of the Royal Society B, 286(1898), Article 20190202. Lightner, A., Bendixen, T., & Purzycki, B. G. (in press). Cross-cultural datasets systematically underestimate the presence of moralizing gods in small-scale societies. Evolution and Human Behavior. Markov, A. V., & Markov, M. A. (2020). Runaway brain-culture coevolution as a reason for larger brains: Exploring the “cultural drive” hypothesis by computer modeling. Ecology and Evolution, 10(12), 6059–6077. Mayr, E. (1961). Cause and effect in biology. Science, 134(3489), 1501–1506. McElreath, R., & Boyd, R. (2008). Mathematical models of social evolution: A guide for the perplexed. University of Chicago Press. McKay, R., Efferson, C., Whitehouse, H., & Fehr, E. (2011). Wrath of god: Religious primes and punishment. Proceedings of the Royal Society B: Biological Sciences, 278(1713), 1858–1863. McKay, R., & Whitehouse, H. (2014). Religion and morality. Psychological Bulletin, 141(2), 447–473. McNamara, R. A., & Henrich, J. (2018). Jesus vs. the ancestors: How specific religious beliefs shape prosociality on Yasawa Island, Fiji. Religion, Brain & Behavior, 8(2), 185–204. McNamara, R. A., Norenzayan, A., & Henrich, J. (2016). Supernatural punishment, in-group biases, and material insecurity: Experiments and ethnography from Yasawa, Fiji. Religion, Brain & Behavior, 6(1), 34–55. Meadows, D. H. (2008). Thinking in systems: A primer. Chelsea Green Publishing. Mesoudi, A., Whiten, A., & Dunbar, R. (2006). A bias for social information in human cultural transmission. British Journal of Psychology, 97(3), 405–423. Muthukrishna, M., Doebeli, M., Chudek, M., & Henrich, J. (2018). The cultural brain hypothesis: How culture drives brain expansion, sociality, and life history. PLOS Computational Biology, 14(11), Article e1006504. Nettle, D., & Frankenhuis, W. E. (2020). Life-history theory in psychology and evolutionary biology: One research programme or two? Philosophical Transactions of the Royal Society B: Biological Sciences, 375(1803), Article 20190490. Norenzayan, A. (2013). Big gods: How religion transformed cooperation and conflict. Princeton University Press.


Norenzayan, A., Shariff, A. F., Gervais, W. M., Willard, A. K., McNamara, R. A., Slingerland, E., & Henrich, J. (2016a). The cultural evolution of prosocial religions. Behavioral and Brain Sciences, 39, Article e1. Norenzayan, A., Shariff, A. F., Gervais, W. M., Willard, A. K., McNamara, R. A., Slingerland, E., & Henrich, J. (2016b). Parochial prosocial religions: Historical and contemporary evidence for a cultural evolutionary process. Behavioral and Brain Sciences, 39, Article e29. Penn, D. C., & Povinelli, D. J. (2007). On the lack of evidence that non-human animals possess anything remotely resembling a ‘theory of mind’. Philosophical Transactions of the Royal Society, B, 362(1480), 731–744. Peoples, H. C., Duda, P., & Marlowe, F. W. (2016). Hunter-gatherers and the origins of religion. Human Nature, 27(3), 261–282. Piazza, J., Bering, J. M., & Ingram, G. (2011). “Princess Alice is watching you”: Children’s belief in an invisible person inhibits cheating. Journal of Experimental Child Psychology, 109(3), 311–320. Pisor, A., & Ross, C. T. (2024). Parochial altruism: What it is and why it varies. Evolution and Human Behavior, 45(1), pp. 2–12. Power, E. A. (2017a). Discerning devotion: Testing the signaling theory of religion. Evolution and Human Behavior, 38(1), 82–91. Power, E. A. (2017b). Social support networks and religiosity in rural south India. Nature Human Behaviour, 1(3), 0057. Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515–526. Purzycki, B. G. (2011). Tyvan cher eezi and the socioecological constraints of supernatural agents’ minds. Religion, Brain & Behavior, 1(1), 31–45. Purzycki, B. G. (2013). The minds of gods: A comparative study of supernatural agency. Cognition, 129(1), 163–179. Purzycki, B. G., Apicella, C., Atkinson, Q. D., Cohen, E., McNamara, R. A., Willard, A. K., Xygalatas, D., Norenzayan, A., & Henrich, J. (2016). Moralistic gods, supernatural punishment and the expansion of human sociality. Nature, 530(7590), 327–330. Purzycki, B. G., & Arakchaa, T. (2013). Ritual behavior and trust in the Tyva Republic. Current Anthropology, 54(3), 381–388. Purzycki, B. G., Bendixen, T., Lightner, A. D., & Sosis, R. (2022). Gods, games, and the socioecological landscape. Current Research in Ecological and Social Psychology, 3, Article 100057. Purzycki, B. G., Finkel, D. N., Shaver, J., Wales, N., Cohen, A. B., & Sosis, R. (2012). What does god know? Supernatural agents’ access to socially strategic and nonstrategic information. Cognitive Science, 36(5), 846–869. Purzycki, B. G., Henrich, J., Apicella, C., Atkinson, Q. D., Baimel, A., Cohen, E., McNamara, R. A., Willard, A. K., Xygalatas, D., & Norenzayan, A. (2018). The evolution of religion and morality: A synthesis of ethnographic and experimental evidence from eight societies. Religion, Brain & Behavior, 8(2), 101–132. Purzycki, B. G., & McKay, R. (2023). Morality, gods, and social complexity. In B. G. Purzycki & T. Bendixen (Eds.), The minds of gods: New horizons in the naturalistic study of religion (pp. 121–132). Bloomsbury Press. Purzycki, B. G., & McNamara, R. A. (2016). An ecological theory of gods’ minds. In H. De Cruz & R. Nichols (Eds.), Cognitive science of religion and its philosophical implications (pp. 143–167). Continuum.


Purzycki, B. G., Pisor, A., Apicella, C., Atkinson, Q. D., Cohen, E., Henrich, J., McNamara, R. A., Norenzayan, A., Willard, A. K., & Xygalatas, D. (2018). The cognitive and cultural foundations of moral behavior. Evolution and Human Behavior, 39(5), 490–501. Purzycki, B. G., Ross, C. T., Apicella, C., Atkinson, Q. D., Cohen, E., McNamara, R. A., Willard, A. K., Xygalatas, D., Norenzayan, A., & Henrich, J. (2018). Material security, life history, and moralistic religions: A cross-cultural examination. PLoS ONE, 13(3), Article e0193856. Purzycki, B. G., & Sosis, R. (2022). Religion evolving: The dynamics of culture, cognition, and ecology. Equinox. Purzycki, B. G., & Watts, J. (2018). Reinvigorating the comparative, cooperative ethnographic sciences of religion. Free Inquiry, 38(3), 26–29. Purzycki, B. G., Willard, A. K., Klocová, E. K., Apicella, C., Atkinson, Q., Bolyanatz, A., Cohen, E., Handley, C., Henrich, J., Lang, M., Lesorogol, C., Mathew, S., McNamara, R. A., Moya, C., Norenzayan, A., Placek, C., Soler, M., Weigel, J., Xygalatas, D., & Ross, C. T. (2022). The moralization bias of gods’ minds: A cross-cultural test. Religion, Brain and Behavior, 12(1–2), 38–60. Rand, D. G., Dreber, A., Haque, O. S., Kane, R. J., Nowak, M. A., & Coakley, S. (2014). Religious motivations for cooperation: An experimental investigation using explicit primes. Religion, Brain & Behavior, 4(1), 31–48. Rappaport, R. A. (1979). Ecology, meaning, and religion. North Atlantic Books. Rappaport, R. A. (1994). On the evolution of morality and religion: A response to Lee Cronk. Zygon, 29(3), 331–349. Rappaport, R. A. (1999). Ritual and religion in the making of humanity. Cambridge University Press. Reddish, P., Bulbulia, J., & Fischer, R. (2014). Does synchrony promote generalized prosociality? Religion, Brain & Behavior, 4(1), 3–19. Richerson, P. J., & Boyd, R. (1999). Complex societies. Human Nature, 10(3), 253– 289. Richerson, P. J., & Boyd, R. (2005). Not by genes alone: How culture transformed human evolution. University of Chicago Press. Schloss, J. P., & Murray, M. J. (2011). Evolutionary accounts of belief in supernatural punishment: A critical review. Religion, Brain & Behavior, 1(1), 46–99. Sear, R. (2020). Do human ‘life history strategies’ exist? Evolution and Human Behavior, 41(6), 513–526. Shariff, A. F., & Norenzayan, A. (2007). God is watching you: Priming god concepts increases prosocial behavior in an anonymous economic game. Psychological Science, 18(9), 803–809. Shariff, A. F., & Norenzayan, A. (2011). Mean gods make good people: Different views of god predict cheating behavior. The International Journal for the Psychology of Religion, 21(2), 85–96. Shaver, J. H., & Sosis, R. (2014). How does male ritual behavior vary across the lifespan? Human Nature, 25(1), 136–160. Singh, M., Kaptchuck, T. J., & Henrich, J. (2021). Small gods, rituals, and cooperation: The Mentawai crocodile spirit Sikaoinan. Evolution and Human Behavior, 42 (1), 61–72. Singh, S., Youssouf, M., Malik, Z. A., & Bussmann, R. W. (2017). Sacred groves: Myths, beliefs, and biodiversity conservation – A case study from Western Himalaya, India. International Journal of Ecology, Article 3828609.


Smith, E. A. (2000). Three styles in the evolutionary analysis of human behavior. In L. Cronk, N. Chagnon, & W. Irons (Eds.), Adaptation and human behavior: An anthropological perspective (pp. 27–46). Routledge. Soler, M. (2012). Costly signaling, ritual and cooperation: Evidence from Candomblé, an Afro-Brazilian religion. Evolution and Human Behavior, 33(4), 346–356. Sosis, R. (2005). Does religion promote trust? The role of signaling, reputation, and punishment. Interdisciplinary Journal of Research on Religion, 1(7), pp. 1–30. Sosis, R., & Bressler, E. R. (2003). Cooperation and commune longevity: A test of the costly signaling theory of religion. Cross-Cultural Research, 37(2), 211–239. Sosis, R., & Bulbulia, J. (2011). The behavioral ecology of religion: The benefits and costs of one evolutionary approach. Religion, 41(3), 341–362. Sosis, R., Kress, H. C., & Boster, J. S. (2007). Scars for war: Evaluating alternative signaling explanations for cross-cultural variance in ritual costs. Evolution and Human Behavior, 28(4), 234–247. Sperber, D. (1996). Explaining culture: A naturalistic approach. Blackwell. Sperber, D. (2018). Cutting culture at the joints? Religion, Brain & Behavior, 8(4), 447–449. Swan, T., & Halberstadt, J. (2019). The Mickey Mouse problem: Distinguishing religious and fictional counterintuitive agents. PLoS ONE, 14(8), Article e0220886. Swan, T., & Halberstadt, J. (2020). The fitness relevance of counterintuitive agents. Journal of Cognition and Culture, 20(3–4), 188–217. Swanson, G. E. (1960). The birth of the gods: The origin of primitive beliefs. University of Michigan Press. Tan, J. H., & Vogel, C. (2008). Religion and trust: An experimental study. Journal of Economic Psychology, 29(6), 832–848. Teehan, J. (2016). Religion and morality: The evolution of the cognitive nexus. In J. R. Liddle & T. K. Shackelford (Eds.), The Oxford handbook of evolutionary psychology and religion (pp. 117–134). Oxford University Press. Tillich, P. (1957). Dynamics of faith. Zondervan. Tinbergen, N. (1963). On aims and methods of ethology. Zeitschrift für Tierpsychologie, 20(4), 410–433. Townsend, C., Aktipis, A., Balliet, D., & Cronk, L. (2020). Generosity among the Ik of Uganda. Evolutionary Human Sciences, 2, Article e23. Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46(1), 35–57. Turchin, P., Currie, T. E., Turner, E. A., & Gavrilets, S. (2013). War, space, and the evolution of old world complex societies. Proceedings of the National Academy of Sciences, 110(41), 16384–16389. Tylor, E. B. (1920). Primitive culture: Researches into the development of mythology, philosophy, religion, language, art, and custom. Murray. van Schaik, C. P., & Burkart, J. M. (2011). Social learning and evolution: The cultural intelligence hypothesis. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1567), 1008–1016. van Schaik, C. P., Isler, K., & Burkart, J. M. (2012). Explaining brain size variation: From social to cultural brain. Trends in Cognitive Sciences, 16(5), 277–284.


Veissière, S. P., Constant, A., Ramstead, M. J., Friston, K. J., & Kirmayer, L. J. (2020). Thinking through other minds: A variational approach to cognition and culture. Behavioral and Brain Sciences, 43, Article e90. von Bertalanffy, L. (1968). General systems theory: Foundations, development, applications. George Braziller. Watts, J., Jackson, J. C., Arnison, C., Hamerslag, E. M., Shaver, J. H., & Purzycki, B. G. (2022). Building quantitative cross-cultural databases from ethnographic records: Promise, problems and principles. Cross-Cultural Research, 56(1), 62–94. Wiltermuth, S. S., & Heath, C. (2009). Synchrony and cooperation. Psychological Science, 20(1), 1–5. Xygalatas, D., Mitkidis, P., Fischer, R., Reddish, P., Skewes, J., Geertz, A. W., Roepstorff, A., & Bulbulia, J. (2013). Extreme rituals promote prosociality. Psychological Science, 24(8), 1602–1605.


24 Lessons from Moral Psychology for Moral Philosophy Paul Rehren and Walter Sinnott-Armstrong

24.1 Historical Background Many traditional philosophers brought psychology to bear on central issues in moral philosophy. Aristotle (2019) filled his writings on ethics with psychology based on observation. David Hume (1739/2007) cited psychological principles of association of ideas to explain moral judgments. William James (1890a/1918, 1890b/1918) was both a philosopher and one of the founders of modern psychology. Similarly, the psychologist Hermann Ebbinghaus received his doctorate in philosophy before performing his ground-breaking experiments on memory. The list goes on (Sorell, 2018). This long-standing friendship between moral philosophy and psychology became strained by two events. Hume (1739/2007) famously announced that it "seems altogether inconceivable, how this new relation [signaled by 'ought'] can be a deduction from others [signaled by 'is'], which are entirely different from it" (Bk. 3, pt. 1, sec. 1, para. 27). Then G. E. Moore (1903/1959) deployed his open question argument to show that it is impossible to define "good" (or any other normative term) in purely natural terms, including the terms of science. The implications of these two claims were vastly overestimated by many philosophers. Hume denied only that "ought" can be deduced from "is" alone. This leaves open the possibility that premises about what is the case can be essential parts of deductively valid arguments for a conclusion about what ought to be the case. The fact that a poison is deadly can show that you ought not to feed it to your children, even if this argument needs another premise (such as that you ought not to kill your children) in order to become deductively valid. Similarly, Moore denied only that normative terms like "good" can be defined in purely naturalistic terms. He did not deny that psychology can be very relevant to moral philosophy in other ways. For example, any hedonistic utilitarian who claims that the morally right act is always the one that maximizes pleasure and minimizes pain will need premises about what causes pleasure and pain in order to argue for conclusions about which acts are morally right. And contractarians will need to know how various moral rules would function in society as well as which rules people would agree to in various conditions. Nonetheless, many moral philosophers were influenced by Hume and Moore, among others, to turn away from psychology in the first half of the twentieth century. The best-known moral philosophers of this period – including intuitionists (such as Prichard and Ross) as well as expressivists (such as Stevenson, Ayer, and Hare) – instead focused on the meanings of moral terms along with the metaphysics and epistemology of morality. When they proposed theories of what moral and other normative terms mean in common language, they did not appeal to empirical work by linguists on how these terms are actually used. Beginning in the 1950s and 1960s, some leading philosophers started to reevaluate the relationship between psychology and moral philosophy. For example, G. E. M. Anscombe (1958, p. 1) argued that "it is not profitable for us at present to do moral philosophy; that should be laid aside at any rate until we have an adequate philosophy of psychology." John Rawls (1971) in the third part of A Theory of Justice wrote extensively about moral psychology. However, few readers focused on that last part of Rawls' work, much less heeded Anscombe's advice. As a result, moral philosophy remained largely isolated from moral psychology. In the words of an influential review, "very little [careful and empirically informed work on the nature or history or function of morality] has been done even by some of those who have recommended it most firmly" (Darwall et al., 1992, p. 188). The tide began to turn in the 1990s, partly stimulated by Flanagan (1991). During the 1990s and growing in the 2000s and 2010s, many philosophers began to deploy more findings from psychology and neuroscience. Some philosophers even got involved in doing empirical research themselves. Of course, many philosophers still remain recalcitrant (e.g., Berker, 2009; Kauppinen, 2007), just as many psychologists still resist using moral philosophy in their studies, often because they think science should remain neutral about moral norms and values. Nonetheless, the mutual respect and support between philosophy and psychology now seem to be growing, returning the fields to their mutually beneficial friendship of earlier ages.

24.2 Topics A detailed discussion of all the ways in which moral psychology has impacted and is continuing to impact moral philosophy in recent decades could fill entire books (Sinnott-Armstrong, 2008b, 2008c, 2008d, 2014; Sinnott-Armstrong & Miller, 2017; Tiberius, 2015). Here we list just a few of the main issues in moral philosophy that have been affected by psychology. Situationism in social psychology has been taken to cast doubt on accounts of virtues and vices in the Aristotelian tradition (Doris, 2002), though not in the Humean tradition (Driver, 2001; Merritt, 2000). Philosophers have been stimulated by psychology to create new views of virtue and vice that do not fit neatly into either traditional mold (Miller, 2013; Chapter 2, this volume). Philosophical theories of value, happiness, and well-being have been influenced by empirical psychology (Alexandrova, 2017; Bishop, 2015; Haybron, 2010; Tiberius, 2018). Widely accepted arguments against hedonism have been criticized as reflecting status quo biases (De Brigard, 2010). Psychological studies of adaptive preferences have challenged philosophical theories of autonomy as well as desire-based theories of happiness (Khader, 2011) and stimulated new views that measure well-being in terms of capabilities (Nussbaum, 2011; Sen, 1993). Another philosophical topic that has felt the impact of psychology and neuroscience is free will. Scientific findings have both raised challenges for traditional views of free will and inspired new philosophical theories about free will as well as responsibility (Maoz & Sinnott-Armstrong, 2022; Sinnott-Armstrong, 2014). Some of the most prominent contemporary philosophical accounts of moral responsibility are deeply informed by empirical insights into the psychology of blame (Chapter 15, this volume) and other reactive attitudes. Philosophers have also based new theories of responsibility on improved understanding of mental illnesses, including addiction (Sripada, 2018), psychopathy (Kiehl & Sinnott-Armstrong, 2013; Chapter 13, this volume), and scrupulosity obsessive-compulsive disorder (Summers & Sinnott-Armstrong, 2019). Moreover, judgments about the self and personal identity, which is necessary for moral responsibility for past actions, have been found by psychologists to depend on prior moral assumptions (Prinz & Nichols, 2016; Strohminger & Nichols, 2014). Many more examples could be given, but perhaps the most prominent and general lessons from psychology and neuroscience for moral philosophy in recent years grow out of empirical research on moral judgment. To give only a few examples, this research has stimulated and informed philosophical discussion of topics as diverse as moral motivation (Chapter 3, this volume), the role of emotion and reasoning in moral judgment (May, 2018), moral progress (Buchanan & Powell, 2015; Sauer, 2019), and the reliability and trustworthiness of moral judgment. We will explore this last topic in more detail for the rest of this chapter. There are three reasons for this focus. First, it concretely illustrates several of the main ways in which empirical results are relevant and useful for philosophical arguments and debates. Second, it has important implications for philosophical methods in general as well as substantive philosophical theories. Third, it is a topic close to our own hearts.

24.3 Case Study: The Trustworthiness of Moral Judgment There are three main types of arguments from empirical premises against the trustworthiness of moral judgments, that is, whether they deserve our trust when we construct theories and make decisions: process debunking (Section 24.3.1), arguments from disagreement (Section 24.3.2), and arguments from irrelevant influences (Section 24.3.3). Notice that trustworthiness does not have to be understood in terms of truth. Most of the problems that we will discuss also affect meta-ethical projects according to which moral judgments need not be seen as true or false in the way moral realists claim. Even nonrealist frameworks typically require moral judgments to be reasonable in the thin sense that we should be less confident in them when they are produced by inadequate processes (process debunking), are the object of hard-to-resolve peer disagreement (arguments from disagreement), or are subject to the influence of morally irrelevant factors (arguments from irrelevant influences).

24.3.1 Process Debunking Process debunking arguments attempt to show that given what we know about the processes underlying moral cognition, we have good reason to think that it will not be trustworthy in many circumstances. The two most prominent examples of this type are evolutionary debunking and psychological process arguments against deontological moral judgments. Evolutionary debunking arguments (for a review, see Vavova, 2015) have two major premises. First, evolution shaped the moral judgments we make today. Most people, for example, believe that harming other people and breaking promises are wrong, and that parents have a greater obligation to their children than to other people. Evolutionary biology provides a powerful explanation of this fact: Individuals with such moral beliefs were more likely to survive and reproduce, for example, because such beliefs enabled them to cooperate more efficiently (Curry, 2016; Machery & Mallon, 2010). Second, there is reason to doubt that evolution would have resulted in our being able to accurately track mind-independent moral facts – that is, moral facts that are true or false independently of what anyone thinks, believes, or feels about them. Standard evolutionary origin stories of human moral judgment appeal to the pressures of natural selection, which favor mental faculties that enhance biological fitness (FitzPatrick, 2016). Yet it is hard to see how the ability to track mind-independent moral facts accurately would have benefited biological fitness. Evolutionary debunkers conclude that our moral judgments will frequently be off-track (i.e., not accurately track moral facts). Some have argued that this undermines versions of moral realism that claim that there are mind-independent moral facts to begin with (Street, 2006). Other evolutionary debunkers think that we should stop trusting our moral judgments altogether (Joyce, 2006). The other kind of debunking argument – which cites current psychological processes to cast doubt on deontological moral judgments – has been most forcefully made by Greene (2008; also Singer, 2005). According to Greene, moral judgments are the output of two distinct types of processing, System 1 and System 2. System 1 processes are unconscious, automatic, fast, and intuitive, while System 2 processes are conscious, controlled, slow, and effortful. The two types of processing often produce conflicting moral judgments. In particular, System 1 tends to output deontological judgments (supported by appeals to rights and duties). In contrast, System 2 tends to output consequentialist judgments (supported by cost-benefit considerations).

Greene thinks that this model has important implications for the trustworthiness of deontological moral judgments. He argues that because System 1 processes are unconscious, fast, and automatic, they are often blunt, inflexible, and unresponsive to relevant evidence. Therefore, we should rely on System 1 outputs only if they have been sufficiently shaped through evolutionary, cultural, or personal experience. However, according to Greene, we have had inadequate experience with many moral problems, especially moral problems that arise in the modern world. Hence, we should not rely on System 1 to make judgments about such problems. And because System 1 tends to output deontological moral judgments, this means that we should not trust our deontological moral judgments in many circumstances. Both of these process debunking arguments have received a lot of attention in the literature (e.g., Nichols, 2014; Sauer, 2018), including considerable pushback (e.g., Berker, 2009; Kahane, 2011; Chapter 5, this volume). Whether or not such arguments succeed remains controversial.

24.3.2 Argument from Disagreement The second type of argument from empirical premises against the trustworthiness of moral judgment is based on cases of moral disagreement (Tersman, 2022). Moral disagreements occur when different individuals or groups of individuals make conflicting moral judgments about an issue, problem, or scenario. Some instances of moral disagreement are easily resolved. For example, when young children disagree with their parents about moral issues, most of us will side with the adult's judgment over that of the child. Many philosophers think that we are justified in doing so, because there are relevant differences between the disagreeing sides. For example, the cognitive abilities of young children are usually less developed than those of their parents. Other differences that can help us determine which side of a moral disagreement is more likely to be correct include differences in background knowledge, the amount and quality of evidence brought to bear on the issue, and psychological biases (for a more complete list, see Frances & Matheson, 2019). Moral disagreements between young children and their parents are easily resolved because it is usually not hard to find epistemically relevant differences between the disagreeing sides. However, there are moral disagreements where this is more difficult. One set of examples of this kind consists of moral disagreements that arise between cultural groups (Graham et al., 2016; Chapter 20, this volume). Other examples are due to demographic differences in moral judgment, including gender, age, socioeconomic status (all in Table 24.1), and religion (Norenzayan, 2013). Doris and Plakias (2008), for instance, have suggested that differences between Southerners and Northerners in the United States in attitudes toward violence in response to violations of honor (Cohen & Nisbett, 1994) cannot easily be explained by one side being less rational or more biased than the other (for other examples, see Fraser & Hauser, 2010; Machery et al., 2005).

Table 24.1 Some demographic differences in moral judgment

Gender
• In sacrificial moral dilemmas, women have been found to approve less of acting than men (Arutyunova et al., 2016; Capraro & Sippel, 2017; Fumagalli et al., 2010). A recent meta-analysis supports this finding (Friesdorf et al., 2015).
• Critiquing Kohlberg's model of moral cognition (Kohlberg, 1971; Levine et al., 1985), Gilligan (1982) argued that women's moral judgments are based more on considerations of care than men's, while men's moral judgments are based more on considerations of justice than women's. This claim has been supported by some studies (e.g., Björklund, 2003; Gump et al., 2000; Pratt et al., 1988) but not by others (e.g., Friedman et al., 1987; Galotti, 1989). A meta-analysis (Jaffee & Hyde, 2000) suggests small differences in the care and justice orientations of women and men.

Age
• In sacrificial moral dilemmas, older adults have been found to approve less of acting than younger adults (Arutyunova et al., 2016; McNair et al., 2019).
• Some studies report that moral judgments become harsher with age: Older adults have been found to judge morally ambiguous behaviors of politicians as more wrong than younger adults (Aldrich & Kage, 2003) and to disapprove more strongly of egoistic behavior than younger adults (Rosen et al., 2016).
• Older adults have been found to rely less on an agent's intentions and more on the outcome of an action when judging moral scenarios than younger adults (Margoni et al., 2018; Moran et al., 2012).

Socioeconomic status (SES)
• High-SES adults have been found to be more morally permissive of harmless taboo violations than low-SES adults (Haidt et al., 1993).
• In sacrificial moral dilemmas, high-SES adults have been found to approve less of acting than low-SES adults (Côté et al., 2013).

If this is right, then for many instances of moral disagreement between demographic and cultural groups, the disagreeing sides are in an equally good epistemic position to judge the issue, problem, or scenario. But many have argued that we should not trust judgments about which such epistemic peers disagree (Frances & Matheson, 2019). And since every one of us belongs to some demographic and cultural groups, this suggests that we should not trust our own moral judgments on these disputed issues, problems, or scenarios, either.

Not everyone has been convinced by this argument. Some dispute that mistrusting all of the judgments involved is the right way to deal with cases of peer disagreement (for an overview, see Frances & Matheson, 2019). For example, some authors have argued that people should in general assign more evidential weight to their own experiences than to the experiences of others (e.g., Huemer, 2011). Others appeal to a notion of self-trust and argue that because people are justified in trusting their own judgments and the mental faculties that produce them, the mere fact of peer disagreement may speak against the epistemic reliability of the person they disagree with (e.g., Enoch, 2010). Both lines of argument have the same upshot: It can sometimes be reasonable for someone to stick to their own moral judgment over that of their epistemic peer.

Others question whether the moral disagreements we have been citing are really about anything moral at all. They suggest that instead, many (perhaps all) such disagreements are really disagreements about (nonmoral) facts: "[C]areful philosophical examination will reveal . . . that agreement on nonmoral issues would eliminate almost all disagreement about the sorts of moral issues which arise in ordinary moral practice" (Boyd, 1988, p. 213). As with process debunking arguments, then, it remains an open question whether and to what extent arguments from moral disagreement succeed.

24.3.3 Arguments from Irrelevant Influences

The third type of argument from empirical premises against the trustworthiness of moral judgment cites irrelevant influences on moral judgments. Many things can influence moral judgments, some of which are not at all problematic. For example, people sometimes change their mind about a moral judgment in light of compelling counterarguments (Bloom, 2010; Paxton et al., 2012). Most would agree that hearing compelling counterarguments is a good reason to change one's mind; that is, such arguments are relevant to the moral judgments a person should make. Hence, their influence is perfectly legitimate. However, other influences are less welcome. In particular, for some of the things that influence moral judgments, it is difficult to see how they could be relevant to whether the issue, problem, or scenario that is being judged is morally right or wrong.

Some proposed examples of this are framing effects (Nadelhoffer & Feltz, 2008; Sinnott-Armstrong, 2008a). A moral judgment about an action (or person or problem) is subject to a framing effect if it changes because of morally irrelevant differences in the way that action (or person or problem) is presented (Demaree-Cotton, 2016). Prominent examples of framing effects are order effects and word effects. Order framing effects result from multiple scenarios being presented in different orders. Word framing effects result from different but morally equivalent language being used to describe the same moral scenario. Table 24.2 samples the literature on these two types of framing effect.

Table 24.2 Order and word framing effects on moral judgment

Order framing effects
• Adults have been found to exhibit action/omission order effects. In one scenario, an outcome is brought about by an action, while in a second scenario, that same outcome is brought about by an omission. For example, in one scenario, an agent might snatch a life vest from a drowning person for themselves (action), while in the other scenario, the agent might fail to offer their own life vest to a drowning person (omission). For this and another pair of dilemmas (Schwitzgebel & Cushman, 2012), as well as for several pairs of nondilemma harmful acts (Haidt & Baron, 1996), adults' moral judgments have been found to differ depending on the order of the two scenarios.
• Adults have been found to exhibit means/side-effect order effects. In one scenario, an outcome is brought about as a means to an end; in another, the same outcome is brought about as a side effect. For example, in one scenario, an agent might have to decide whether to shoot one swimmer in order to lure a dangerous shark away from a group of swimmers (means), while in the other scenario, the agent can make loud noises to redirect the shark away from the group of swimmers, which will result in the shark killing one swimmer in the water between it and the agent (side effect). For this and other moral dilemmas, adults' moral judgments have been found to differ depending on the order of the two scenarios (e.g., Lanteri et al., 2008; Rehren & Sinnott-Armstrong, 2021; Wiegmann & Waldmann, 2014).

Word framing effects
• In sacrificial moral dilemmas, the same adults have been found to judge acting more morally permissible when the action was described in terms of how many people it would save rather than in terms of how many people it would kill, even though they had been told that the action would have both effects (Petrinovich & O'Neill, 1996; Rehren & Sinnott-Armstrong, 2021). This is an example of a broader class called valence framing effects, in which the options of a scenario are described in terms of either their positive or their negative outcomes. A recent meta-analysis (McDonald et al., 2021) finds an overall moderate effect of valence framing on moral judgment.
• In sacrificial moral dilemmas, adults have been found to approve more of acting when the scenario is presented in a nonnative language (e.g., Costa et al., 2014; Geipel et al., 2015; Muda et al., 2020).

To see how framing effects could pose a problem for the trustworthiness of moral judgments, it helps to think of them in terms of disagreement within single individuals. When someone's moral judgment is influenced by framing effects, they are (or would be) making different (incompatible) moral judgments about an issue, problem, or scenario at different times. They are, in other words, disagreeing (or would disagree) with themselves at another time. As we have seen in Section 24.3.2, moral disagreement can pose a challenge to the trustworthiness of the disputed judgments if there are no epistemically relevant differences between the disagreeing sides. Yet plausibly, that a scenario was presented using different wordings or that the same scenarios were presented in different orders (cf. Horne & Livengood, 2017) is not an epistemically relevant difference. If this is right, then the influence of framing effects on moral judgment reveals cases of peer moral disagreement within the same individual. Thus, if we follow the advice of many epistemologists, we should not trust moral judgments to the extent that they are influenced by framing effects.

Another candidate example of irrelevant influences on moral judgment is social conformity (for a review, see Chituc & Sinnott-Armstrong, 2020). A social conformity effect occurs when a moral judgment is influenced by judgments that others have expressed. It would not be a problem if people did not believe the judgments that they stated but uttered them only in order to avoid social conflict. However, that seems unlikely in several experiments, because conformity effects also occur online without any interpersonal interaction (Kelly et al., 2017) and because experimental participants continue to express the same judgments when they are alone after leaving the presence of the others whose judgments influenced them (Aramovich et al., 2012). Social conformity effects thus suggest that people sometimes change from one moral judgment to an incompatible judgment that they hear others endorse. Moral philosophers have traditionally emphasized the autonomy of moral judgment (e.g., Aristotle, 2019; Kant, 1785/1998). If they were right to do so, then social conformity with others' moral judgments falls short of this philosophical ideal of autonomous moral judgment. Moreover, whether the person making a moral judgment happens to hear from one group instead of another group is not relevant to whether the issue, problem, or scenario that is being judged is morally right or wrong (or good or bad, etc.). If Kelly believes an act is wrong while she is surrounded by people who say it is wrong, but she believes the same act is not wrong while she is surrounded by people who say that it is not wrong, then her moral judgments at these different times cannot both be correct. She still might be justified in trusting one group over the other if she has some independent reason to believe that one group is more likely (or that the other group is less likely) to be correct on this matter. In the absence of any such reason to favor one group, however, she has no reason to favor either moral judgment, so it is hard to see how social conformity by itself could track the truth or produce trustworthy moral judgments in such cases.

For a final example, consider the influence on moral judgment of incidental affect. An affective state (mood or emotion) is incidental when it is unrelated to the issue, scenario, or problem about which a (moral) judgment is made. Anger at injustice is not incidental when it is based on the moral wrongness of the injustice. In contrast, suppose Jordan is in a bad mood because she had a frustrating day at work. On her way home, someone stops her and asks her to donate to Save the Children. But for her bad mood, Jordan would have happily donated; today, however, she does not. In this example, Jordan's bad mood (due to her frustrating day at work) is completely unrelated to the moral issue she is considering (whether she should donate to Save the Children). Nevertheless, it influences her moral judgment: Had it not been for her bad mood, Jordan would have believed that she should donate. Many see this kind of influence as problematic (e.g., Cameron et al., 2013; Kumar & May, 2019). Because the source of incidental affective states is completely unrelated to the moral issue, problem, or scenario about which a moral judgment is made, such affective states do not provide any good reason to change one's moral judgment. Instead, their effect on moral judgment is distorting. Table 24.3 samples the literature.

Table 24.3 Some influences of incidental affect on moral judgment

Negative affect
• The largest body of research is on the influence of incidental disgust on moral judgment (for a review, see Chapman & Anderson, 2013). For example, Wheatley and Haidt (2005) hypnotically induced subjects to feel a pang of disgust when reading certain words. They found that participants judged moral transgressions as more morally wrong when they included disgust-eliciting words. Other studies have used ambient odor (Schnall et al., 2008), bitter taste (Eskine et al., 2011), and gross noises (Seidel & Prinz, 2013) to manipulate incidental disgust, with similar results.
• Manipulations of incidental anger have been associated with harsher moral judgments of autonomy (Seidel & Prinz, 2013), fairness (Singh et al., 2018), and purity violations (Ansani et al., 2017).

Positive affect
• In sacrificial dilemmas, incidental mirth has been associated with higher approval of acting, while incidental elevation has been associated with lower approval of acting (Strohminger et al., 2011).
• Incidental positive mood has been associated with higher approval of acting in sacrificial dilemmas (Pastötter et al., 2013; Valdesolo & DeSteno, 2006).

How does incidental emotion threaten the trustworthiness of moral judgment? Again, it helps to think about such cases as disagreements within single individuals over time. When someone's moral judgment is influenced by incidental emotion, they make (or would make) different (even incompatible) moral judgments about an issue, problem, or scenario at different times. Unlike for framing effects (and more like social conformity), we can easily identify one epistemically relevant difference between the two sides. If incidental emotion distorts moral judgment, then we have reason to discount that side's judgment. Such disagreements are not like peer disagreements, and consequently do not threaten the trustworthiness of moral judgment in this way.

Even if most moral disagreements due to the influence of incidental emotion can be resolved, however, this does not mean that moral judgments so affected are always home free. It is not enough that a moral judgment was not in fact distorted by incidental emotion. In addition, individuals need to know, or at least have good enough reason to believe, that a certain moral judgment is not distorted by incidental emotion in order to be justified in trusting that moral judgment. This might not seem too difficult. Yet adults often have limited insight into the psychological processes underlying their judgment and decision making (Nisbett & Wilson, 1977; Wilson & Dunn, 2004), including whether they were influenced by incidental emotion (Schwarz, 2012). There is considerable reason to think that this is true for the domain of moral judgment specifically (Carlsmith, 2008; Greene, 2008; Haidt, 2001; Hall et al., 2012; Hauser et al., 2007). If this is right, then people will often not be able to tell whether their moral judgments were distorted by incidental emotions. In such cases, the influence of incidental emotion will still make those moral judgments untrustworthy.

24.4 Philosophical Implications

We have surveyed three types of argument from empirical premises that attempt to show that we should not trust some, many, or even all moral judgments. Let us suppose for a moment that such arguments succeed. What would this imply for moral philosophy? Here, we discuss two suggested implications. First, this conclusion would spell trouble for moral intuitionism. Second, many moral philosophers would have to reconsider the way they typically do their work.

24.4.1 Moral Intuitionism

A central epistemological tenet of modern moral intuitionism is that at least some moral judgments are justified noninferentially – that is, justified even in the absence of independent confirmation (Audi, 2008; Huemer, 2005; Shafer-Landau, 2003; cf. Tropman, 2011). According to critics, however, if our moral judgments are often enough not trustworthy, then a given moral judgment would seem not to be justified without at least some independent confirmation (Nadelhoffer & Feltz, 2008; Sinnott-Armstrong, 2008a).

For an analogy, consider a box of 100 thermometers. You know that many (say, 50) of the thermometers do not show the temperature accurately. You take one of the thermometers out of the box at random. Are you justified in trusting what this particular thermometer says? The answer seems to be "no." Unless you have some independent reason to think that this particular thermometer is accurate, the fact that many of the thermometers in the box are not accurate should make you skeptical of this one, too. This point applies no matter which thermometer you pick out of the box, so all of them are subject to the same doubts. If this is correct, then, by analogy, if enough moral judgments are untrustworthy, then no moral judgment should be trusted without independent confirmation. However, that conclusion conflicts directly with the central claim made by moral intuitionists that moral judgments are justified noninferentially, since independent confirmation requires inference. Thus, process debunking, the argument from disagreement, and the argument from irrelevant influences pose serious challenges to moral intuitionism, provided they can show that a wide enough variety of moral judgments are untrustworthy.
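The arithmetic behind the thermometer analogy can be made explicit. The following is a minimal sketch in probabilistic terms; the numbers are simply those of the analogy (100 thermometers, 50 of them inaccurate) and are illustrative only, not estimates of how often actual moral judgments go wrong.

% Illustrative sketch: the counts come from the thermometer analogy above,
% not from any empirical estimate of moral error rates.
\[
P(\text{randomly drawn thermometer is accurate})
  = \frac{\text{number of accurate thermometers}}{\text{total number of thermometers}}
  = \frac{50}{100} = 0.5
\]
% Absent independent evidence about the particular thermometer drawn, a 0.5
% chance of accuracy seems too low to justify trusting its reading. By analogy,
% if a large enough share of moral judgments is untrustworthy, no particular
% moral judgment is justified without independent confirmation.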

24.4.2 How Should We Do Moral Philosophy?

It is widely accepted that one of the main ways (perhaps the main way) to do moral philosophy is to use responses to particular moral problems, issues, or scenarios (Deutsch, 2010; Williamson, 2007). Such responses have been called the "data of ethics" (Ross, 1930/2002, p. 41). The most widely used method that relies on judgments of particular cases is reflective equilibrium (Kamm, 1993; Rawls, 1971). It consists in "working back and forth among our considered judgments . . . about particular instances or cases, the principles or rules that we believe govern them, and the theoretical considerations that we believe bear on accepting these considered judgments, principles, or rules, revising any of these elements wherever necessary in order to achieve an acceptable coherence among them" (Daniels, 2020).

The problem that process debunking, the argument from disagreement, and the argument from irrelevant influences pose for this method is then straightforward: If our moral judgments are too often not trustworthy, then they do not make for a solid foundation on which to build moral theory (e.g., Alexander et al., 2014; Horowitz, 1998; Paulo, 2020). As Jesus is reported to have said (Matthew 7:26), only a foolish man would build his house on sand. In order to have a secure foundation, moral philosophers who rely on their moral judgments about particular cases would thus need to reconsider the way they build their houses. Of course, defenders of the reflective equilibrium method can respond in several ways, some of which will be discussed in Section 24.5.

24.5 Discussion

24.5.1 Can We Trust the Research?

The argument from disagreement, arguments from irrelevant influences, and some process debunking arguments all rely heavily on findings from moral psychology. Many of these findings have been criticized. Take, for example, the influence of incidental disgust on moral judgment (Table 24.3). Several high-profile studies in this literature have failed to replicate (Ghelfi et al., 2020; Johnson et al., 2016). Moreover, a meta-analysis (Landy & Goodwin, 2015) found only a small overall effect, which disappeared entirely once publication bias was taken into account. At this point, it is therefore unclear whether incidental disgust has much, if any, influence on moral judgment.

Null results incompatible with other findings have also been published (and because of publication bias, there may be more that have not been published; Song et al., 2010). For example, Gawronski et al. (2018) failed to find any effect of incidental anger on moral judgments of sacrificial dilemmas. Likewise, several studies have failed to find effects of gender, age, and religion on moral judgment, or have found only negligible ones (Banerjee et al., 2010; Gleichgerrcht & Young, 2013; Hauser et al., 2007).

In addition, some of the studies reviewed exhibit methodological weaknesses, such as low power (Stanley et al., 2018), unrepresentative samples (Henrich et al., 2010), and a failure to sufficiently motivate their research questions by theory (Muthukrishna & Henrich, 2019). Moreover, there is some evidence of questionable research practices in experimental moral psychology more broadly (Stuart et al., 2019), and of publication bias for some of the research programs reviewed in Section 24.3 in particular (Landy & Goodwin, 2015; McDonald et al., 2021). Thus, even among results that have not yet been challenged, there is a good chance that some will not hold up in the future (Ioannidis, 2005, 2014).

Proponents of arguments that rely on this research and research like it need to be wary of these scientific issues and should adjust their confidence in their conclusions accordingly. In particular, this means that before we can robustly evaluate the empirical premises of the arguments we surveyed in Section 24.3, we need a lot more – and more rigorous – work in moral psychology (for some recommendations, see Asendorpf et al., 2013; Shrout & Rodgers, 2018).

24.5.2 Does the Research Show Severe Enough Disagreements?

Arguments from disagreement and arguments from irrelevant influences both cite evidence of moral disagreement (interpersonal in the first case, intrapersonal in the second case). Yet disagreements can be more or less severe. Some disagreements involve moral judgments that are polar opposites – one side judges p, the other side judges not-p. Other disagreements involve moral judgments that do not differ in their polarity but only in their strength or confidence. For example, we might disagree because I think that destroying the environment is extremely wrong, while you think that it is only somewhat wrong. We can ask whether peer disagreement to any degree counts against the trustworthiness of our moral judgments (Rehren & Sinnott-Armstrong, 2021). Some have suggested that only disagreements involving moral judgments that are polar opposites pose a problem for the trustworthiness of moral judgments (cf. Demaree-Cotton, 2016; Kumar & May, 2019). The thought seems to be that when people (or the same person at different times) vary in the strength or confidence of a moral judgment, then they are still making the same moral judgment (in some sense), and so they are not really disagreeing. If so, disagreements like this involve no real threat to the trustworthiness of the differing moral judgments. On the other side, Andow (2016) has argued that evidence of moral judgments that differ only in strength or confidence can still cast serious doubt on their trustworthiness.

Which side of this dispute one takes has important implications for which kinds of evidence have force. Most of the empirical studies that we have cited throughout Sections 24.3.2 and 24.3.3 provide evidence of one of two forms. Some studies focus on judgment reversals from a moral judgment to its polar opposite. For example, a study that manipulates the order of two scenarios would find evidence of judgment reversals if its participants judged an act to be morally acceptable in one order but unacceptable in the other (e.g., Lanteri et al., 2008). Other studies instead focus on judgment shifts, which occur whenever participants change (or would change) their moral judgments in any significant way, even if only in strength or confidence. For example, the mean response of participants who read two scenarios in one order might be that an act is definitely wrong, while the other order might lead participants to respond only that the act is somewhat wrong (e.g., Wiegmann & Waldmann, 2014). For those who think that peer disagreement provides reason to mistrust the different moral judgments even if this disagreement is only about confidence or strength, but not polarity, evidence of both judgment shifts and judgment reversals is relevant. In contrast, someone who thinks that only peer disagreements that involve polar opposite moral judgments speak against the trustworthiness of our moral judgments should only appeal to studies that provide evidence of judgment reversals. Their evidential basis will therefore be quite a bit thinner.

24.5.3 How Much Untrustworthiness Is Too Much?

Some process debunking arguments (most prominently evolutionary debunking arguments) are global. That is, if they go through, then they implicate all moral judgments. In contrast, the arguments from disagreement and irrelevant influences are local. All they claim to show is that moral judgments should not be trusted to the extent that they feature in intractable moral disagreements or are susceptible to irrelevant influences. Suppose they succeed: If there is robust evidence that a moral judgment features in intractable moral disagreements or is susceptible to irrelevant influences, then it (and moral judgments like it) is untrustworthy. On its own, this is not enough to discredit moral intuitionism and the methods of moral philosophy. It will not be enough to point to just any amount of untrustworthiness, no matter how small. It is clearly unreasonable to demand that moral judgments never go wrong (Shafer-Landau, 2008). How much untrustworthiness would be too much?

One way to approach this question is in terms of the variety of moral judgments implicated by moral disagreement and irrelevant influences. If this variety is too limited, then the arguments against moral intuitionism and the methods of moral philosophy are weak. Both arguments require that our moral judgments are often untrustworthy. Because people make moral judgments about a wide variety of problems, issues, and scenarios, this requirement can be met only if a large enough fraction of this variety of moral judgments is implicated by moral disagreement or irrelevant influences. Unfortunately, the existing research is of limited help in determining the variety of moral judgments implicated by moral disagreement and irrelevant influences. Most of it is focused on harm-based transgressions, in particular sacrificial dilemmas. Yet there is more to morality than harm and sacrificial dilemmas (Bauman et al., 2014; Graham et al., 2016; Rozin et al., 1999). Much more future research will be needed to figure out the extent of disagreement and irrelevant influences in other areas of morality, including judgments about honesty, fairness, loyalty, or authority. Only then can we even hope to know how much of a threat arguments from disagreement and irrelevant influences could pose for moral intuitionism and the methods of moral philosophy.

Even if it turned out that a large variety of moral judgments were untrustworthy, however, another important question remains: What is the proportion of people whose judgments are implicated? Suppose we found that a wide variety of moral judgments (judgments about harm, honesty, fairness, loyalty, etc.) were influenced by framing effects. If this result was due to only 1 percent of the population, the arguments against moral intuitionism and the methods of moral philosophy would remain unimpressive. In that case, plausibly, our moral judgments would not have been shown to often be untrustworthy – only the moral judgments of 1 percent of the population. In contrast, if the judgments of 50 percent of the population were influenced by framing effects, this would make for a much stronger case against moral intuitionism and the methods of moral philosophy.

Suppose we had a good enough sense of the variety of moral judgments implicated by moral disagreement or irrelevant influences, and of the proportion of people responsible for this, to come up with an estimate of the proportion of moral judgments that are untrustworthy. How much would be too much? In the context of her meta-analysis of moral framing effects, Demaree-Cotton (2016) claims that 20 percent would not constitute "a large probability of error" (p. 19), and even adds, "I might be happy to accept the possibility that my moral judgments are off-track 20% of the time" (p. 17). This sentiment is echoed by May (2019, p. 8) and by Sauer (2018, p. 76). Others disagree and have argued that 20 percent (or even 10 percent) would be a problem. As one reason, McDonald et al. (2019) point to the importance of morality (Chapters 14, 21, and 22, this volume). Mistakes in moral judgments can lead to hurt feelings, antagonism, bad laws, and even war. Therefore, it is crucial that we get them right. Moreover, McDonald et al. argue that we would not accept scientific judgments if they were off-track 20 percent of the time. Yet the stakes in science are frequently much lower than in morality. This makes it hard to see why we should be happy to accept it if 20 percent of moral judgments could not be trusted.

24.5.4 Does the Research Tell Us about the Right Kind of Moral Judgments?

Another question is whether the studies cited throughout this chapter tell us about the right kind of moral judgments. In particular, some (e.g., Bengson, 2013) have argued that these studies were not set up to capture moral intuitions in particular, as opposed to other kinds of moral judgments. If this is right, then, even if the cited studies do show that moral judgments are often not trustworthy, this might not be a problem for moral intuitionism, which is specifically about moral intuitions. However, there still might be a problem if inference is needed to determine that a particular moral judgment is indeed a moral intuition. If so, then moral intuitionists would need to explain how those moral intuitions could be justified independently of inference.

In a similar vein, others have claimed that participants in the kinds of studies that moral psychologists often run tend to give knee-jerk reactions without reflecting deeply, but that the moral judgments whose trustworthiness is relevant to moral theories and the practice of moral philosophy are more considered or reflective moral judgments (e.g., Kauppinen, 2007). It is not clear how this objection can be used to defend moral intuitionism, since reflection involves inference, so that reflective moral judgments will not be justified noninferentially, as moral intuitionists claim. The objection is more promising when applied to the practice of moral philosophy, but various responses to it have been offered (for a sample, see Sytsma & Livengood, 2015). For one thing, it rests on the idea that it is indeed only (or at least mostly) reflective moral judgments that matter for the practice of moral philosophy, which is controversial. Moreover, there is some evidence to suggest that reflective moral judgments may be susceptible to many of the same morally irrelevant influences we mentioned in Section 24.3.3 (Rehren & Sinnott-Armstrong, 2022; Schwitzgebel & Cushman, 2015), which would speak against their trustworthiness.

Some philosophers have offered more conciliatory takes on these objections. Instead of denying that the empirical research can tell us anything of note for the way most moral philosophers currently do their work, they acknowledge that it can pose significant problems, but then argue that there are ways to improve philosophical practice to alleviate them. For example, Tiberius (2013) has proposed an empirically informed kind of reflective equilibrium that she suggests may be used to produce philosophically informative moral judgments not subject to the kinds of influences that make unreflective moral judgments untrustworthy. Still, if this kind of reflective equilibrium requires inference, then this move cannot be used to show that any moral judgments are justified noninferentially, as moral intuitionists claim.

24.5.5 The Expertise Defense

Finally, some philosophers claim that they are experts about what is moral (for a survey, see Nado, 2014). Proponents of this expertise defense argue that although the empirical research may reveal the moral judgments of nonphilosophers to be untrustworthy, the judgments of individuals who have had training and practice in (moral) philosophy will be mostly immune to this problem. They are, in effect, expert moral judges. Yet it is the moral judgments of such experts that feature in moral philosophy and meta-ethics. Thus, the method of moral philosophy is not in danger (and maybe neither is moral intuitionism).

One question about this reply is what philosophers' moral expertise would consist of (Weinberg et al., 2010). Various suggestions have been made. For example, Cameron et al. (2013) have suggested that moral expertise will centrally involve the ability to differentiate the features of one's emotional experiences. Others have argued that moral expertise can be understood by analogy to linguistic expertise (Driver, 2013) or to mastery of skills like driving or playing chess (Dreyfus & Dreyfus, 1991; Ryberg, 2013).

Whether philosophers are moral experts in a way that makes them impervious to the pull of the arguments from disagreement and irrelevant influences is partly an empirical question. If moral philosophers are indeed experts, then they should not be subject to the types of disagreements and influences revealed by the studies we have surveyed. There is some reason to doubt this. For example, Schwitzgebel and Cushman (2012) found evidence of order effects among professional moral philosophers comparable to the order effects exhibited by nonphilosophers. Tobia et al. (2013) found that the moral judgments of both philosophers and nonphilosophers were influenced by an unconscious cleanliness prime. Clearly, though, two studies are not enough to draw any strong conclusions about whether moral philosophers are experts. More research will be needed.

24.6 Conclusion

In the end, nothing is settled, and much work remains to be done. Nonetheless, we hope to have convinced the reader that moral psychology can raise profound and interesting questions for moral philosophy – questions that can be properly answered only by philosophers and psychologists working together. May their friendship continue to grow!

References

Aldrich, D., & Kage, R. (2003). Mars and Venus at twilight: A critical investigation of moralism, age effects, and sex differences. Political Psychology, 24(1), 23–40. Alexander, J., Mallon, R., & Weinberg, J. M. (2014). Accentuate the negative. In J. Knobe & S. Nichols (Eds.), Experimental philosophy (Vol. 2, pp. 31–50). Oxford University Press. Alexandrova, A. (2017). A philosophy for the science of well-being. Oxford University Press. Andow, J. (2016). Qualitative tools and experimental philosophy. Philosophical Psychology, 29(8), 1128–1141. Ansani, A., D'Errico, F., & Poggi, I. (2017). "It sounds wrong. . ." Does music affect moral judgement? In O. Gervasi, B. Murgante, S. Misra, G. Borruso, C. M. Torre, A. M. A. C. Rocha, D. Taniar, B. O. Apduhan, E. Stankova, & A. Cuzzocrea (Eds.), Computational science and its applications – ICCSA 2017, 10409 (pp. 753–760). Springer International Publishing. Anscombe, G. E. M. (1958). Modern moral philosophy. Philosophy, 33(124), 1–19. Aramovich, N. P., Lytle, B. L., & Skitka, L. J. (2012). Opposing torture: Moral conviction and resistance to majority influence. Social Influence, 7(1), 21–34. Aristotle. (2019). Nicomachean ethics (T. Irwin, Trans.; 3rd ed.). Hackett Publishing Company, Inc. Arutyunova, K. R., Alexandrov, Y. I., & Hauser, M. D. (2016). Sociocultural influences on moral judgments: East–west, male–female, and young–old. Frontiers in Psychology, 7, Article 1334. Asendorpf, J. B., Conner, M., Fruyt, F. D., Houwer, J. D., Denissen, J. J. A., Fiedler, K., Fiedler, S., Funder, D. C., Klieg, R., Nosek, B. A., Perugini, M., Roberts, B. W., Schmitt, M., van Aken, M. A. G., Weber, H., & Wicherts, J. M. (2013). Recommendations for increasing replicability in psychology. European Journal of Personality, 27(2), 108–119. Audi, R. (2008). Intuition, inference, and rational disagreement in ethics. Ethical Theory and Moral Practice, 11(5), 475–492.

Banerjee, K., Huebner, B., & Hauser, M. (2010). Intuitive moral judgments are robust across variation in gender, education, politics and religion: A large-scale web-based study. Journal of Cognition and Culture, 10(3–4), 253–281. Bauman, C. W., McGraw, A. P., Bartels, D. M., & Warren, C. (2014). Revisiting external validity: Concerns about trolley problems and other sacrificial dilemmas in moral psychology. Social and Personality Psychology Compass, 8(9), 536–554. Bengson, J. (2013). Experimental attacks on intuitions and answers. Philosophy and Phenomenological Research, 86(3), 495–532. Berker, S. (2009). The normative insignificance of neuroscience. Philosophy & Public Affairs, 37(4), 293–329. Bishop, M. (2015). The good life: Unifying the philosophy and psychology of well-being. Oxford University Press. Björklund, F. (2003). Differences in the justification of choices in moral dilemmas: Effects of gender, time pressure and dilemma seriousness. Scandinavian Journal of Psychology, 44(5), 459–466. Bloom, P. (2010). How do morals change? Nature, 464(7288), 490–490. Boyd, R. (1988). How to be a moral realist. In G. Sayre-McCord (Ed.), Essays on moral realism (pp. 181–228). Cornell University Press. Buchanan, A., & Powell, R. (2015). The limits of evolutionary explanations of morality and their implications for moral progress. Ethics, 126(1), 37–67. Cameron, C. D., Payne, B. K., & Doris, J. M. (2013). Morality in high definition: Emotion differentiation calibrates the influence of incidental disgust on moral judgments. Journal of Experimental Social Psychology, 49(4), 719–725. Capraro, V., & Sippel, J. (2017). Gender differences in moral judgment and the evaluation of gender-specified moral agents. Cognitive Processing, 18(4), 399–405. Carlsmith, K. M. (2008). On justifying punishment: The discrepancy between words and actions. Social Justice Research, 21(2), 119–137. Chapman, H. A., & Anderson, A. K. (2013). Things rank and gross in nature: A review and synthesis of moral disgust. Psychological Bulletin, 139(2), 300–327. Chituc, V., & Sinnott-Armstrong, W. (2020). Moral conformity and its philosophical lessons. Philosophical Psychology, 33(2), 262–282. Cohen, D., & Nisbett, R. E. (1994). Self-protection and the culture of honor: Explaining southern violence. Personality and Social Psychology Bulletin, 20(5), 551–567. Costa, A., Foucart, A., Hayakawa, S., Aparici, M., Apesteguia, J., Heafner, J., & Keysar, B. (2014). Your morals depend on language. M. Sigman (Ed.). PLoS ONE, 9(4), Article e94842. Côté, S., Piff, P. K., & Willer, R. (2013). For whom do the ends justify the means? Social class and utilitarian moral judgment. Journal of Personality and Social Psychology, 104(3), 490–503. Curry, O. S. (2016). Morality as cooperation: A problem-centred approach. In T. K. Shackelford & R. D. Hansen (Eds.), The evolution of morality (pp. 27–51). Springer International Publishing. Daniels, N. (2020). Reflective equilibrium. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2020 ed.). https://plato.stanford.edu/archives/ sum2020/entries/reflective-equilibrium/ Darwall, S., Gibbard, A., & Railton, P. (1992). Toward fin de siecle ethics: Some trends. The Philosophical Review, 101(1), 115–189.

De Brigard, F. (2010). If you like it, does it matter if it’s real? Philosophical Psychology, 23(1), 43–57. Demaree-Cotton, J. (2016). Do framing effects make moral intuitions unreliable? Philosophical Psychology, 29(1), 1–22. Deutsch, M. (2010). Intuitions, counter-examples, and experimental philosophy. Review of Philosophy and Psychology, 1(3), 447–460. Doris, J. M. (2002). Lack of character: Personality and moral behavior. Cambridge University Press. Doris, J. M., & Plakias, A. (2008). How to argue about disagreement: Evaluative diversity and moral realism. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 2. The cognitive science of morality: Intuition and diversity (pp. 303–231). MIT Press. Dreyfus, H. L., & Dreyfus, S. E. (1991). Towards a phenomenology of ethical expertise. Human Studies, 14(4), 229–250. Driver, J. (2001). Uneasy virtue. Cambridge University Press. Driver, J. (2013). Moral expertise: Judgement, practice, and analysis. Social Philosophy and Policy, 30(1–2), 280–296. Enoch, D. (2010). Not just a truthometer: Taking oneself seriously (but not too seriously) in cases of peer disagreement. Mind, 119(476), 953–997. Eskine, K. J., Kacinik, N. A., & Prinz, J. J. (2011). A bad taste in the mouth: Gustatory disgust influences moral judgment. Psychological Science, 22(3), 295–299. FitzPatrick, W. (2016). Morality and evolutionary biology. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2016 ed.). https://plato .stanford.edu/archives/spr2016/entries/morality-biology/ Flanagan, O. (1991). Varieties of moral personality. Harvard University Press. Frances, B., & Matheson, J. (2019). Disagreement. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2019 ed.). Metaphysics Research Lab. Fraser, B., & Hauser, M. (2010). The argument from disagreement and the role of cross-cultural empirical data. Mind & Language, 25(5), 541–560. Friedman, W. J., Robinson, A. B., & Friedman, B. L. (1987). Sex differences in moral judgments? A test of Gilligan’s theory. Psychology of Women Quarterly, 11(1), 37–46. Friesdorf, R., Conway, P., & Gawronski, B. (2015). Gender differences in responses to moral dilemmas: A process dissociation analysis. Personality and Social Psychology Bulletin, 41(5), 696–713. Fumagalli, M., Ferrucci, R., Mameli, F., Marceglia, S., Mrakic-Sposta, S., Zago, S., Lucchiari, C., Consonni, D., Nordio, F., Pravettoni, G., Cappa, S., & Prioriet, A. (2010). Gender-related differences in moral judgments. Cognitive Processing, 11(3), 219–226. Galotti, K. M. (1989). Gender differences in self-reported moral reasoning: A review and new evidence. Journal of Youth and Adolescence, 18(5), 475–488. Gawronski, B., Conway, P., Armstrong, J., Friesdorf, R., & Hütter, M. (2018). Effects of incidental emotions on moral dilemma judgments: An analysis using the CNI model. Emotion, 18(7), 989–1008. Geipel, J., Hadjichristidis, C., & Surian, L. (2015). How foreign language shapes moral judgment. Journal of Experimental Social Psychology, 59, 8–17. Ghelfi, E., Christopherson, C. D., Urry, H. L., Lenne, R. L., Legate, N., Fischer, M. A., Wagemans, F. M. A., Wiggins, B., Barrett, T., Bornstein, M., de Haan, B.,
Guberman, J., Issa, N., Kim, J., Na, E., O’Brien, J., Paulk, A., Peck, T., Sashihara, M., . . ., Sullivan, D. (2020). Reexamining the effect of gustatory disgust on moral judgment: A multilab direct replication of Eskine, Kacinik, and Prinz (2011). Advances in Methods and Practices in Psychological Science, 3(1), 3–23. Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Harvard University Press. Gleichgerrcht, E., & Young, L. (2013). Low levels of empathic concern predict utilitarian moral judgment. PLoS ONE, 8(4), Article e60418. Graham, J., Meindl, P., Beall, E., Johnson, K. M., & Zhang, L. (2016). Cultural differences in moral judgment and behavior, across and within societies. Current Opinion in Psychology, 8, 125–130. Greene, J. D. (2008). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 3. The neuroscience of morality: Emotion, brain disorders, and development (pp. 35–79). MIT Press. Gump, L. S., Baker, R. C., & Roll, S. (2000). Cultural and gender differences in moral judgment: A study of Mexican Americans and Anglo-Americans. Hispanic Journal of Behavioral Sciences, 22(1), 78–93. Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834. Haidt, J., & Baron, J. (1996). Social roles and the moral judgement of acts and omissions. European Journal of Social Psychology, 26(2), 201–218. Haidt, J., Koller, S. K., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65(4), 613–628. Hall, L., Johansson, P., & Strandberg, T. (2012). Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey. PLoS ONE, 7 (9), Article e45457. Hauser, M., Cushman, F., Young, L., Kang-Xing Jin, R., & Mikhail, J. (2007). A dissociation between moral judgments and justifications. Mind & Language, 22(1), 1–21. Haybron, D. M. (2010). The pursuit of unhappiness: The elusive psychology of well-being. Oxford University Press. Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. Horne, Z., & Livengood, J. (2017). Ordering effects, updating effects, and the specter of global skepticism. Synthese, 194(4), 1189–1218. Horowitz, T. (1998). Philosophical intuitions and psychological theory. Ethics, 108(2), 367–385. Huemer, M. (2005). Ethical intuitionism. Palgrave Macmillan. Huemer, M. (2011). Epistemological egoism and agent-centered norms. In T. Dougherty (Ed.), Evidentialism and its discontents (pp. 17–33). Oxford University Press. Hume, D. (2007). A treatise of human nature (D. F. Norton & M. J. Norton, Eds.) (Vol. 1). Oxford University Press. (Original work published 1739) Ioannidis, J. P. A. (2005). Why most published research findings are false. PLOS Medicine, 2(8), Article e124. Ioannidis, J. P. A. (2014). How to make more published research true. PLOS Medicine, 11(10), Article e1001747. Jaffee, S., & Hyde J. S. (2000). Gender differences in moral orientation: A meta-analysis. Psychological Bulletin, 126(5), 703–726.

James, W. (1918). The principles of psychology (Vol. 1). Henry Holt and Company. (Original work published 1890a) James, W. (1918). The principles of psychology (Vol. 2). Henry Holt and Company. (Original work published 1890b) Johnson, D. J., Wortman, J., Cheung, F., Hein, M., Lucas, R. E., Donnellan, M. B., Ebersole, C. R., & Narr, R. K. (2016). The effects of disgust on moral judgments: Testing moderators. Social Psychological and Personality Science, 7(7), 640–647. Joyce, R. (2006). Metaethics and the empirical sciences. Philosophical Explorations, 9(1), 133–148. Kahane, G. (2011). Evolutionary debunking arguments. Noûs, 45(1), 103–125. Kamm, F. M. (1993). Morality, mortality: Death and whom to save from it (Vol. 1). Oxford University Press. Kant, I. (1998). Grundlegung zur Metaphysik der Sitten (F. Hansen, Ed.). directmedia. (Original work published 1785) Kauppinen, A. (2007). The rise and fall of experimental philosophy. Philosophical Explorations, 10(2), 95–118. Kelly, M., Ngo, L., Chituc, V., Huettel, S., & Sinnott-Armstrong, W. (2017). Moral conformity in online interactions: Rational justifications increase influence of peer opinions on moral judgments. Social Influence, 12(2–3), 57–68. Khader, S. J. (2011). Adaptive preferences and women’s empowerment. Oxford University Press. Kiehl, K. A., & Sinnott-Armstrong, W. (Eds.). (2013). Handbook on psychopathy and law. Oxford University Press. Kohlberg, L. (1971). From is to ought: How to commit the naturalistic fallacy and get away with it in the study of moral development. In T. Mischel (Ed.), Cognitive development and epistemology (pp. 151–235). Academic Press. Kumar, V., & May, J. (2019). How to debunk moral beliefs. In J. Suikkanen & A. Kauppinen (Eds.), Methodology and moral philosophy (pp. 25–48). Routledge. Landy, J. F., & Goodwin, G. P. (2015). Does incidental disgust amplify moral judgment? A meta-analytic review of experimental evidence. Perspectives on Psychological Science, 10(4), 518–536. Lanteri, A., Chelini, C., & Rizzello, S. (2008). An experimental investigation of emotions and reasoning in the trolley problem. Journal of Business Ethics, 83(4), 789–804. Levine, C., Kohlberg, L., & Hewer, A. (1985). The current formulation of Kohlberg’s theory and a response to critics. Human Development, 28(2), 94–100. Machery, E., Kelly, D., & Stich, S. P. (2005). Moral realism and cross-cultural normative diversity. Behavioral and Brain Sciences, 28(6), 830. Machery, E., & Mallon, R. (2010). Evolution of morality. In J. M. Doris (Ed.), The moral psychology handbook (pp. 3–46). Oxford University Press. Maoz, U., & Sinnott-Armstrong, W. (Eds.). (2022). Free will: Philosophers and neuroscientists in conversation. Oxford University Press. Margoni, F., Geipel, J., Hadjichristidis, C., & Surian, L. (2018). Moral judgment in old age: Evidence for an intent-to-outcome shift. Experimental Psychology, 65(2), 105–114. May, J. (2018). Regard for reason in the moral mind. Oxford University Press. May, J. (2019). Précis of Regard for reason in the moral mind. Behavioral and Brain Sciences, 42(e146), 1–60.

McDonald, K., Graves, R., Yin, S., Weese, T., & Sinnott-Armstrong, W. (2021). Valence framing effects on moral judgments: A meta-analysis. Cognition, 212, Article 104703. McDonald, K., Yin, S., Weese, T., & Sinnott-Armstrong, W. (2019). Do framing effects debunk moral beliefs? Behavioral and Brain Sciences, 42(e162), 35–36. McNair, S., Okan, Y., Hadjichristidis, C., & Bruine de Bruin, W. (2019). Age differences in moral judgment: Older adults are more deontological than younger adults. Journal of Behavioral Decision Making, 32(1), 47–60. Merritt, M. (2000). Virtue ethics and situationist personality psychology. Ethical Theory and Moral Practice, 3(4), 365–383. Miller, C. (2013). Moral character: An empirical theory. Oxford University Press. Moore, G. E. (1959). Principia ethica. Cambridge University Press. (Original work published 1903) Moran, J. M., Jolly, E., & Mitchell, J. P. (2012). Social-cognitive deficits in normal aging. Journal of Neuroscience, 32(16), 5553–5561. Muda, R., Pieńkosz, D., Francis, K. B., & Białek, M. (2020). The moral foreign language effect is stable across presentation modalities. Quarterly Journal of Experimental Psychology, 73(11), 1930–1938. Muthukrishna, M., & Henrich, J. (2019). A problem in theory. Nature Human Behaviour, 3(3), 221–229. Nadelhoffer, T., & Feltz, A. (2008). The actor–observer bias and moral intuitions: Adding fuel to Sinnott-Armstrong’s fire. Neuroethics, 1(2), 133–144. Nado, J. (2014). Philosophical expertise. Philosophy Compass, 9(9), 631–641. Nichols, S. (2014). Process debunking and ethics. Ethics, 124(4), 727–749. Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259. Norenzayan, A. (2013). Big gods: How religion transformed cooperation and conflict. Princeton University Press. Nussbaum, M. (2011). Creating capabilities: The human development approach. Harvard University Press. Pastötter, B., Gleixner, S., Neuhauser, T., & Bäuml, K. T. (2013). To push or not to push? Affective influences on moral judgment depend on decision frame. Cognition, 126(3), 373–377. Paulo, N. (2020). The unreliable intuitions objection against reflective equilibrium. The Journal of Ethics, 24(3), 333–353. Paxton, J. M., Ungar, L., & Greene, J. D. (2012). Reflection and reasoning in moral judgment. Cognitive Science, 36(1), 163–177. Petrinovich, L., & O’Neill, P. (1996). Influence of wording and framing effects on moral intuitions. Ethology and Sociobiology, 17(3), 145–171. Pratt, M. W., Golding, G., Hunter, W., & Sampson, R. (1988). Sex differences in adult moral orientations. Journal of Personality, 56(2), 373–391. Prinz, J., & Nichols, S. (2016). Diachronic identity and the moral self. In J. Kiverstein (Ed.), The Routledge handbook of philosophy of the social mind (pp. 449–464). Routledge. Rawls, J. (1971). A theory of justice. Harvard University Press. Rehren, P., & Sinnott-Armstrong, W. (2021). Moral framing effects within subjects. Philosophical Psychology, 34(5), 611–636.

Rehren, P., & Sinnott-Armstrong, W. (2022). How stable are moral judgments? Review of Philosophy and Psychology, 14, 1377–1403. Rosen, J. B., Brand, M., & Kalbe, E. (2016). Empathy mediates the effects of age and sex on altruistic moral decision making. Frontiers in Behavioral Neuroscience, 10, Article 67. Ross, D. (2002). The right and the good (P. Stratton-Lake, Ed.). Clarendon Press. (Original work published 1930) Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The CAD triad hypothesis: A mapping between three moral emotions (contempt, anger, disgust) and three moral codes (community, autonomy, divinity). Journal of Personality and Social Psychology, 76(4), 574–586. Ryberg, J. (2013). Moral intuitions and the expertise defence. Analysis, 73(1), 3–9. Sauer, H. (2018). Debunking arguments in ethics. Cambridge University Press. Sauer, H. (2019). Butchering benevolence moral progress beyond the expanding circle. Ethical Theory and Moral Practice, 22(1), 153–167. Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34(8), 1096–1109. Schwarz, N. (2012). Feelings-as-information theory. In P. A. M. Van Lange, A. W. Kruglanski, & E. T. Higgins (Eds.), Handbook of theories of social psychology (Vol. 1, pp. 289–308). SAGE Publications Ltd. Schwitzgebel, E., & Cushman, F. (2012). Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non-philosophers. Mind & Language, 27(2), 135–153. Schwitzgebel, E., & Cushman, F. (2015). Philosophers’ biased judgments persist despite training, expertise and reflection. Cognition, 141, 127–137. Seidel, A., & Prinz, J. (2013). Sound morality: Irritating and icky noises amplify judgments in divergent moral domains. Cognition, 127(1), 1–5. Sen, A. (1993). Capability and well-being. In M. Nussbaum & A. Sen (Eds.), The quality of life (pp. 30–53). Clarendon Press. Shafer-Landau, R. (2003). Moral realism: A defence. Oxford University Press. Shafer-Landau, R. (2008). Defending ethical intuitionism. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 2. The cognitive science of morality: Intuition and diversity (pp. 83–95). MIT Press. Shrout, P. E., & Rodgers, J. L. (2018). Psychology, science, and knowledge construction: Broadening perspectives from the replication crisis. Annual Review of Psychology, 69(1), 487–510. Singer, P. (2005). Ethics and intuitions. The Journal of Ethics, 9(3–4), 331–352. Singh, J. J., Garg, N., Govind, R., & Vitell, S. J. (2018). Anger strays, fear refrains: The differential effect of negative emotions on consumers’ ethical judgments. Journal of Business Ethics, 151(1), 235–248. Sinnott-Armstrong, W. (2008a). Framing moral intuitions. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 2. The cognitive science of morality: Intuition and diversity (pp. 47–76). MIT Press. Sinnott-Armstrong, W. (Ed.). (2008b). Moral psychology: Vol. 1. The evolution of morality: Adaptations and innateness. MIT Press. Sinnott-Armstrong, W. (Ed.). (2008c). Moral psychology: Vol. 2. The cognitive science of morality: Intuition and diversity. MIT Press.

Sinnott-Armstrong, W. (Ed.). (2008d). Moral psychology: Vol. 3. The neuroscience of morality: Emotion, brain disorders, and development. MIT Press.
Sinnott-Armstrong, W. (Ed.). (2014). Moral psychology: Vol. 4. Free will and moral responsibility. MIT Press.
Sinnott-Armstrong, W., & Miller, C. B. (Eds.). (2017). Moral psychology: Vol. 5. Virtue and character. MIT Press.
Song, F., Parekh, S., Hooper, L., Loke, Y. K., Ford, J. K., Sutton, A. J., Hing, C., Kwok, C. S., Pang, C., & Harvey, I. (2010). Dissemination and publication of research findings: An updated review of related biases. Health Technology Assessment, 14(8), iii, ix–xi, 1–193.
Sorell, T. (2018). Experimental philosophy and the history of philosophy. British Journal for the History of Philosophy, 26(5), 829–849.
Sripada, C. (2018). Addiction and fallibility. Journal of Philosophy, 115(11), 569–587.
Stanley, T. D., Carter, E. C., & Doucouliagos, H. (2018). What meta-analyses reveal about the replicability of psychological research. Psychological Bulletin, 144(12), 1325–1346.
Street, S. (2006). A Darwinian dilemma for realist theories of value. Philosophical Studies, 127(1), 109–166.
Strohminger, N., Lewis, R. L., & Meyer, D. E. (2011). Divergent effects of different positive emotions on moral judgment. Cognition, 119(2), 295–300.
Strohminger, N., & Nichols, S. (2014). The essential moral self. Cognition, 131(1), 159–171.
Stuart, M. T., Colaço, D., & Machery, E. (2019). P-curving x-phi: Does experimental philosophy have evidential value? Analysis, 79(4), 669–684.
Summers, J. S., & Sinnott-Armstrong, W. (2019). Clean hands: Philosophical lessons from scrupulosity. Oxford University Press.
Sytsma, J., & Livengood, J. (2015). The theory and practice of experimental philosophy. Broadview Press.
Tersman, F. (2022). Moral disagreement. In E. N. Zalta & U. Nodelman (Eds.), The Stanford encyclopedia of philosophy (Fall 2022 ed.). https://plato.stanford.edu/archives/fall2022/entries/disagreement-moral/
Tiberius, V. (2013). Well-being, wisdom, and thick theorizing: On the division of labor between moral philosophy and positive psychology. In S. Kirchin (Ed.), Thick concepts (pp. 217–233). Oxford University Press.
Tiberius, V. (2015). Moral psychology: A contemporary introduction. Routledge.
Tiberius, V. (2018). Well-being as value fulfillment: How we can help each other to live well. Oxford University Press.
Tobia, K. P., Chapman, G. B., & Stich, S. (2013). Cleanliness is next to morality, even for philosophers. Journal of Consciousness Studies, 20(11–12), 195–204.
Tropman, E. (2011). Non-inferential moral knowledge. Acta Analytica, 26(4), 355–366.
Valdesolo, P., & DeSteno, D. (2006). Manipulations of emotional context shape moral judgment. Psychological Science, 17(6), 476–477.
Vavova, K. (2015). Evolutionary debunking of moral realism. Philosophy Compass, 10(2), 104–116.
Weinberg, J. M., Gonnerman, C., Buckner, C., & Alexander, J. (2010). Are philosophers expert intuiters? Philosophical Psychology, 23(3), 331–355.
Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10), 780–784.

Wiegmann, A., & Waldmann, M. R. (2014). Transfer effects between moral dilemmas: A causal model theory. Cognition, 131(1), 28–43.
Williamson, T. (2007). The philosophy of philosophy (The Blackwell/Brown Lectures in Philosophy 2). Blackwell Publishing.
Wilson, T. D., & Dunn, E. W. (2004). Self-knowledge: Its limits, value, and potential for improvement. Annual Review of Psychology, 55, 493–518.

Index

Abrams, Samantha, 13–14, 17 absolute dehumanization, 341–342 action-based EUT, see expected utility theory (EUT) actus reus imposition of blame for bad thoughts precluded, 526–527 overview, 524 proximity test, 527–528 substantial step test, 528 act utilitarianism, 1, 155, 177 admiration, 237 adolescence abstract thinking during, 465 attachment during with caregivers, 473–474 implicit memory system and, 473 with mother, 472 oxytocin and, 472–473 brain development during amygdala and, 467, 470, 471–472 anterior cingulate cortex (ACC) and, 467, 469, 470 dorsal anterior cingulate cortex and, 469 insula and, 467, 470 prefrontal cortex and, 468–469, 471–472 ventromedial prefrontal cortex (vmPFC) and, 470 cognition and, 466–467 defined, 463–464 ecological systems theory and, 482 empathy and, 466 fight or flight response and, 467 function of, 465 future research, 482 gender, importance of adult interaction and, 477 care ethics and, 478–479 differences in moral behavior, 478 moral development, relation to, 478–479 multidimensional and flexible nature of, 479–480 overview, 482 reinforcement of behavior, 478

secondary sex characteristics, 477–478 stage-based theory of moral development and, 478–479 guilt and, 467 gut feelings, 481 heteronomous moral behavior during, 467–468 high-reactive individuals, 19 integration of processes, 468 lived experience, importance of, 480–481, 482 low-reactive individuals, 19 moral development during, 462–463 moral dilemmas during, 476–477 moral emotions and, 466–467 multidisciplinary approach to, 482 overview, 19, 463, 481 parents, importance of behavioral regulation, 475 moral reasoning and, 476 peers, change in focus to, 475 peers, importance of coercive power of, 477 moral dilemmas and, 476–477 overview, 474–475, 482 parents, change in focus from, 475 puberty and, 463–464 second-order representations, 469 self-conscious emotions during, 465–466 sex and, 475 sociocultural processing during, 464–465 somatic marker hypothesis and, 469–470 temperament, importance of low-reactive versus high-reactive, 471–472 overview, 471, 482 “use it or lose it” process, 468–469 affective empathy, 236 Affective Harm Account (AHA), 552–553 affective startle eye blink response, 313–314 affective states, prosociality and, 284–285 affect–moral and mental regulation–reality interaction model, 203, 204 affect sharing, see empathy Affordable Care Act, 565 age anterior cingulate cortex (ACC) and, 467, 469, 470

Index age (cont.) punishment and, 400 trustworthiness of moral judgments and, 601 agency moral agency (see moral agency) psychological agency, 208, 212, 214 agency–communion model, 200, 203 agency detection, 576–577, 581 agentialism, 211–212 agents artificial agents, 7–8 intentional agents, 261, 553 moral agents, 281, 417 social agents, 444 spiritual agents, 578, 581, 586 aggression anger, relation to, 227 dehumanization as enabling, 333 proactive aggression, 320 reactive aggression, 320 Aguilar-Pardo, D., 190 Aharoni, E., 64, 308 alexithymia, 286 Algonquin people, moral development and, 421–422 Alicke, M. D., 532 Alison, L., 109 alloparenting, 257–258 altruism altruistic behavior, 259, 275 altruistic motivation, 252 biological altruism, 279–280 psychological altruism, 73–74, 279–280 reciprocal altruism, 578 American Psychiatric Association (APA), 304 amoralism externalism and, 63, 66–67 internalism and, 63, 66–67 psychopathy as, 63, 304 ventromedial prefrontal cortex (vmPFC) lesions and, 65 amygdala adolescence and, 467, 470, 471–472 moral decision making and, 316 perspective taking and, 260–261 prosociality and, 288–289 psychopathy and, 315, 316–317 Anderson, N. E., 314 Andow, James, 608 Andrighetto, Giulia, 13 anger aggression, relation to, 227 autonomy, association with, 225–226 blame, role in, 368 CAD triad hypothesis and, 225–226 common triggers of, 227 consequences of, 227–228

as contextually or situationally dependent, 228 elicitors of, 227 guilt, relation to, 233 interpersonal relationships and, 228 as moral emotion, 240 moral judgments, relation to, 607 overview, 240 positive versus negative consequences, 223 as primary emotion, 225 social accountability and, 228 angular gyrus moral decision making and, 316 psychopathy and, 315, 316–317 animalistic dehumanization, 337 animals animal behavior, 11, 15–16 dehumanization, animal metaphors and, 333, 341, 342 moral patiency and, 208, 209 prosociality in capuchin monkeys, 280 chimpanzees, 279–280 evolutionary basis of, 280–281 kin selection, 280 other mammals, 280 overview, 274 psychological altruism versus biological altruism, 279–280 rhesus monkeys, 279 shared motives with humans, 280 punishment of, 399 Anscombe, G. E. M., 597 anterior cingulate cortex (ACC) adolescence and, 467, 469, 470 affect sharing and, 253, 261–262 perspective taking and, 261 prosociality and, 288 psychopathy and, 315 anthropology, see also cultural perspectives affect sharing and, 255 anthropological theories, 514 anthropological work, 133 conflation of moral with social, 512, 514 cultural differences, importance of, 132 ethical turn in, 512 illiberal societies, studying, 495 moralistic religions and, 584–585 morality and, 505–506, 516 moral realism and, 496–497, 512–513 power in, 513 reductivist approach to morality, 513 science of unfreedom and, 513 transcendence and, 512 anti-Humeans, 62, 69–70

Index antisocial personality disorder (APD) classification of, 304 conduct disorder (CD), 304 developmental progression of, 304 diagnosis of, 304 features of, 304 historical background, 303–304 oppositional defiant disorder (ODD), 304 psychopathy (see psychopathy) use of terms, 303–304 APD, see antisocial personality disorder (APD) Appiah, A., 348 approach–avoidance task (AAT), 314–315 Aristotle, 409, 596 Aronow, P. M., 92 artificial agents, 7–8 artificial intelligence, moral categorization and, 216 Ascent of Man Scale, 341, 343 Asch, S., 36 Asian Disease Problem individual choices versus moral decisions and, 174–175 overview, 174 strong preferences and, 174–175 attribute substitution allocation and, 185–186 criminal law and, 186, 187 insurance and, 186–187 overview, 185 tort law and, 186 attribution dispositional attributions, 539 of mental capacities and traits, 198, 212 of mindedness, 214–215 of moral agency, 15, 211–216 of moral patiency, 204–210, 214–216 of moral status, 198, 214–216 situational attributions, 539 Australia, dehumanization in intergroup conflict in, 331–332 automatic aversion, 109–111 automatic behaviors, 473 automaticity, 132 automatic processes, 341 automatic responses, 470 autonomous morality, 419 autonomy, ethics of, 133, 503 awe consequences of, 238–239 elevation compared, 239 elicitors of, 238 flavors of, 238 as “hive switch,” 237–238 overview, 240 Ayer, A. J., 596–597

Bago, B., 112 Bailey, A. H., 40, 41–42 Baillargeon, R., 438 Baird, Abigail, 19 Banaji, M. R., 132 Bandura, A., 333–335, 345 Bargh, J. A., 132 Baron, Jonathan framing effects and, 178 on isolation effects, 188 on moral absolutes, 20 on moral dilemmas, 115 on moral judgments, 187 moral reasoning generally, 14–15, 598 omission bias and, 180 on protected values, 183 on response times (RT), 189, 190 on utilitarianism, 155 vaccines and, 175, 181–182 Bar-Tal, D., 333, 334, 345 Bartels, D. M., 309 basic emotions theory, 138 Baskin-Sommers, A. R., 321 Batson, C. D., 286, 466 Bauman, Z., 513 Bayesian probability theory, 176 behavioral approach system (BAS), 233 behavioral economics empathy and, 251 overview, 11 punishment and, 392 behavioral inhibition system (BIS), 233 behaviorism, 127–128 Beldo, Les, 11, 19–20 belief beliefs and desires, 383 metaphysical beliefs, 495 moral beliefs versus moral behaviors, 577 motivational mental states, relation to background beliefs, 56 political beliefs as causing moral beliefs, 553–555 religious beliefs, 577–578 shared beliefs, 256 transgressors, influence of beliefs and attitudes toward, 366–367 belief manipulation protocols, 90–91 Bendixen, Theiss, 11, 21 Benedict, Ruth, 494 Bentham, Jeremy, 206, 393 Berkowitz, A. D., 92 Berlin, Isaiah, 502 Bermúdez, J. P., 190 Bian, L., 448 Bicchieri, C., 79, 81, 82, 87, 89, 90–91 Big Five model of personality, 35 “Big Three” of morality, 503–504 biological altruism, 279–280

Index Bjornsson, G., 67–69 Blackburn, R., 312 Blackburn, S., 60 Black Lives Matter, 556 Blader, S. L., 263 Blair, R. J. R., 64, 307, 310, 312, 313 blame criminal blame (see criminal law) cultural history hierarchical social structure and, 355, 357 human settlement, after, 355, 356–357 in hunter–gatherer communities, 355–356 institutionalization of, 355, 356 intergroup conflict and, 357 overview, 355, 371 population growth and, 355, 357 property and, 356 honor and, 366 intuitive blame (see intuitive blame) judgments anger, role of, 368 causality and, 364, 367, 368 dumbfounding hypothesis, 367–368 information processing model, 364 intentionality and, 364, 367, 368 intuitions and, 367–368 moral emotions, role of, 368 motivated bias and, 364–367 norms and values, influence of, 366 racial discrimination and, 366 reasons for acting, 364, 366–367, 368 transgressors, influence of beliefs and attitudes toward, 366–367 as moral criticism community response, 369 in-group versus out-group status, 370 norms of, 369 online discourse, 370–371 overblaming, 369 overview, 5, 369, 371 second-person blaming, 370 social media and, 370–371 subtle forms of, 369 third-person blaming, 370 transgressor response, 369 underblaming, 369 moral intuitions and, 367 as moral judgment, 5 moral norms and, 390–391 moral philosophy and, 598 overview, 17–18, 354, 363–364 Path Model of Blame, 364, 366 punishment distinguished, 354, 359–360 reprobative blame, 391 standing to blame, 198 blatant dehumanization, 340–341 body–heart–mind model, 203, 204 Bollich, K. L., 34–35

Bosnia, dehumanization in intergroup conflict in, 345 Bowlby, John, 472 brain impairment criminal behavior and, 317, 320–321 prosociality and, 289–290 psychopathy and, 315–316, 320–321 Braver Angels, 566 Briggs, J. L., 505 Bronfenbrenner, Urie, 482 Brown, R., 345 Bruneau, E., 341, 343 Buffy the Vampire Slayer (television program), 462 Buganda, punishment in, 358 Buyukozer Dawkins, M., 447 Cacioppo, J., 132 CAD triad hypothesis, 225–226 Cameron, C. D., 611 Campbell, M. A., 307 cancel culture, 370 Caplan, B., 188 capuchin monkeys, prosociality in, 280 care ethics, 478–479 Cassaniti, J. L., 505 Castano, E., 345 caudate nucleus, prosociality and, 288, 332 causal determinism, 10–11 CD (conduct disorder), 304 Cehajic, S., 345 Chae, J. J. K., 439 Chapman, G. B., 611–612 Chapman, H. A., 230 Chavez, A., 89 child raising, see also infants and toddlers community orientation of, 418 cooperative caregiving, 418 cooperative child raising, 412, 418 evolved development niche (EDN) and, 418 feminist perspectives on, 417–418 moral development and (see moral development) morality, as central to, 417–418 moral sense and, 412–413 overview, 415–417 stress response and, 416–417 vagus nerve and, 416 chimpanzees moral communication, humans compared, 385–386 prosociality in, 279–280 China, punishment in, 358 Cholbi, M., 66 Choy, O., 319–320 Cialdini, R. B., 84 Cidam, A., 234–235

Index cingulate cortex, moral decision making and, 316 Cislaghi, B., 79, 92 civil rights, psychological constructionism and, 139 Clark, A. E., 113 Clark, F., 307 climate change prosociality and, 278 social norms and, 85 Clore, G. L., 231 cognitive empathy, 236 Cognitive Reflection Test, 116 cognitivism “Hume’s problem” and, 12–13, 59–62 moral emotion and, 72 moral judgments and, 72–73 noncognitivism versus, 59–60, 69 Cohen, A. B., 35 Cohen-Chen, S., 223 Colombia, improving constructiveness of contact and dialogue in, 565–566 community, ethics of, 133, 503 community service, 398 compassion, see also empathy consequences of, 236, 237 defined, 236 empathy compared, 236–237 moral character, relation to, 47 as moral emotion, 240 overview, 236–237 sympathy and, 236 competence versus morality, 38–40, 48 conditional internalism, 61, 65, 67–69 conduct disorder (CD), 304 consciousness generally, 11 artificial agents and, 8 mind perception and, 199, 202 consensus and morality, 492 contempt CAD triad hypothesis and, 225–226 community, association with, 225–226 consequences of, 226 as contextually or situationally dependent, 228 as distinct emotion, debate regarding, 226 elicitors of, 226 measurement of, 226 overview, 240 predictive value of, 226–227 as primary emotion, 225 Conway, P., 114 cooperation affect sharing, intragroup cooperation versus intergroup competition, 255 cheating and, 433–434 evolutionary perspective, 412, 433–434

mind perception and, 199–200 moral dilemmas and, 183–184 punishment as fostering, 362 religious systems and evolutionary foundations of, 580 ritual, 583 supernatural punishment hypothesis, 582 cooperative behavior, 282, 578 cooperative breeders, 257–258 cooperative caregiving, 418 cooperative child raising, 412, 418 cooperative groups, 133, 433 cooperative impulses, 112 cooperative individuals, 282 cooperative morality, 11 cooperativeness, 199–200 Coovert, M. D., 45–46 Copp, D., 60 core goodness traits, 42–44 cosmopolitanism, 184 COVID-19 moral injury and, 118 moral reasoning and, 172 social norms and, 83, 85, 87 vaccine, 83, 85, 87, 172 criminal law actus reus imposition of blame for bad thoughts precluded, 526–527 overview, 524 proximity test, 527–528 substantial step test, 528 attempt, 524, 527 attribute substitution and, 186, 187 brain impairment, criminal behavior and, 317, 320–321 culpable control model, 532 cultural considerations in criminal blame, 544 deprivation of liberty, 525 deterrence in, 186, 187, 525–526 discrimination in, 363 elements of criminal blame, 523 evidence of prior crimes and bad acts Federal Rules of Evidence 404(b), 540, 541, 542, 543 identity, admission to show, 541 inadmissibility generally, 525, 537 intent, admission to show, 541–542 knowledge, admission to show, 541–542 lack of accident or mistake, admission to show, 541 moral character, role of, 539–540 motive, admission to show, 540–541 “nonpropensity” purposes, admission for, 540–542 opportunity, admission to show, 541 plan, admission to show, 541

Index criminal law (cont.) policy basis of inadmissibility, 542–543 preparation, admission to show, 541 split in judicial authority regarding, 542 excuse, 544 failure to act, imposition of blame for, 527, 543 free choice and, 524 influence of intuitive blame on criminal blame, 524, 535 injustice in, effect on compliance, 526 intuitive blame (see intuitive blame) justification for punishment in, 525, 544 knowledge in, 543 legality principle and, 525 legitimacy of, effect on compliance, 526 marijuana and, 526 mens rea failure to act, imposition of blame for, 527, 543 hierarchy of culpability, 530, 543–544 knowledge, 543 moral character, role of, 535, 537 negligence, 543 overview, 524 requirement of, 529 strict liability offenses distinguished, 530, 543 mitigation of blame, 544 Model Penal Code (MPC) overview, 524 recklessness and, 533, 537 moral character, role of cognition versus, 537–539 dog attack examples, 534, 535, 536, 537–539 drugs example, 532, 533, 535 evidence and, 539–540, 542 explosion example, 532, 533, 536 fire example, 532, 533 inference of mental state, 535, 539 intuitive blame and, 532, 534–535 mens rea and, 535, 537 as proxy for mental state, 535–536 recklessness and, 532, 534–536 skiing accident example, 534, 535, 536 moral intuitions and, 524 moral reasoning and, 186, 187 murder, culpability for, 530 negligence in, 543 overview, 20, 523, 524–525 prior crimes and bad acts, evidence of, 537 psychopathy and as predictor of criminal behavior, 304–305 rational deficit hypothesis, legal culpability and, 305 recklessness in

conscious risk taking as, 530–531, 533–534, 535–536 inference of mental state, 531 intuitive blame and, 533 moral character, role of, 532, 534–536 nature and purpose of conduct standard, 533–534 overview, 524 proof of mental state, 531 reckless homicide, culpability for, 530–531 reckless manslaughter, culpability for, 530, 531 sentencing versus judgment, 363 shared moral culture and, 526 strict liability offenses, 530, 543 Critcher, C. R., 46–47 Croes, M., 346–347 cross-cultural differences, 131, 132, 142, 454, 455 cross-cultural research, 131 “Crying Baby” thought experiment, 106 Cuddy, A. J. C., 37 culpable control model, 532 cultural perspectives on blame (see blame) Brahman widows eating fish example, 498–501, 506, 511 consensus and morality, 492 on criminal law, 544 Hmong ritual practice versus Christian sanctity, 509–511 illiberal societies, studying, 495 moral absolutes (see moral absolutes) moral foundations theory (MFT) and, 132, 135 moral realism (see moral realism) moral reasoning, 191–192 Native American whaling example, 507–509, 511 overview, 11, 409, 492 punishment (see punishment) theory of dyadic morality (TDM) and, 142 trustworthiness of moral judgments and, 600 Cushman, F. A., 106, 110, 178, 611 Damasio, A. R., 65–66, 469, 470 “Dark Factor,” 35 Darley, J. M., 34, 286 Darwall, S., 60 Darwin, Charles, 411, 412–413 Das, V., 514, 516 data of ethics, 606 debunking deontological moral judgments, arguments against, 599–600 evolutionary debunking, 599, 600 Decety, Jean, 11, 15–16, 260–261 decremental nature of moral psychology, 137

Index defining issues test (DIT), 306 defining moral psychology, 1 dehumanization aggression, as enabling, 333 animalistic dehumanization, 337 Ascent of Man Scale and, 341, 343 critiques of exaggeration of dehumanization, 347–348 valence effect as better explanation, 347, 349 violence only compatible with perception of victims as human, 347, 348–349 dehumanization paradox, 348 discrimination and, 344 epistemic authority engendering, 338–339 ethnocentrism and, 335 extreme nature of, 333 forms of absolute dehumanization, 341–342 animal metaphors, 333, 341, 342 blatant dehumanization, 340–341 meta-dehumanization, 343 objectification, 342 overview, 332, 340 reciprocal dehumanization, 343 relative dehumanization, 341–342 self-dehumanization, 342–343, 347 simple dehumanization, 342 subtle dehumanization, 333, 340–341 genocide and infrahumanization and, 346 moral disengagement and, 347 moral exclusion and, 347 overview, 332, 346–347 self-dehumanization and, 347 stage model, 346–347 stereotypes, role of, 346 humaneness and, 336–337 implicit association task and, 341 infrahumanization criticism of, 339 genocide and, 346 intergroup conflict and, 344 overview, 334–336 as relative dehumanization, 342 intergroup conflict, role in after harm, 345–346 consequences of, 343–344 delegitimization and, 334, 345 dual model and, 344–345 during harm, 345 infrahumanization and, 344 mind perception and, 345 moral disengagement and, 334–335, 345 moral exclusion and, 334–335, 347 overview, 343–344 prior to harm, 344–345 stereotypes and, 344–345

in interpersonal relationships, 332 mechanistic dehumanization, 337 moral agency and, 213, 337–338 moral disengagement genocide and, 347 intergroup conflict and, 334–335, 345 overview, 333–334 moral exclusion intergroup conflict and, 334–335, 347 overview, 334 moral patiency and, 337–338 overview, 17, 331, 332, 349–350 psychological essentialism and, 339–340 scholarly literature on, 340 social psychological focus of, 332 spectrum of, 333 theoretical accounts of delegitimization, 334, 345 dual model, 336–337, 344–345 mind perception and, 337–338, 345 overview, 331–333 delegitimization, intergroup conflict and, 334, 345 Del Gaizo, A. L., 313 Demaree-Cotton, Joanna, 13, 610 democratic norms, 556 De Neys, W., 112 deontology consequences and, 176 defined, 176 deontological moral judgments, arguments against, 599–600 deontological moral realism, 514 moral dilemmas and, 111–113 moral patiency and, 206, 215 moral rules and, 515–516 omission bias and, 181 overview, 1–2 punishment and, 392–393 descriptive moral realism, 495, 496–497, 512–513, 516–517 descriptive norms, 87 desires beliefs and, 383 first-nature desires, 424–425 goals and, 204 in Humean theory of motivation, 57–58, 59 practical norms and, 57–58 second-order desires, 426 ultimate desires, 57–58 detachment from intimacy, 423 deterrence in criminal law, 186, 187, 525–526 general deterrence, 393 lack of, 358–359, 360, 371 retribution versus, 393–394 specific deterrence, 393 in tort law, 186

Index developmental changes, 465–466, 468 developmental differences, 550 developmental ethics, 411 developmental evolutionary biology theory, 411–413 developmental evolutionary theory, 413 developmental moral realism, 501–502 developmental neuroscience, 415–417 developmental niche, 413 developmental processes, 190, 466, 467–468 developmental psychology, 433 developmental research, 452 developmental stages, 191, 465 developmental work, 130, 274, 282 diachronic identity, 9 Diagnostic and Statistical Manual of Mental Disorders (DSM-I), 304 Diagnostic and Statistical Manual of Mental Disorders (DSM-V), 304 dictator game (DG), 89, 90–91, 275, 278 discontinuity hypothesis, 202–203 discrimination generally, 127, 129 in criminal law, 363 dehumanization and, 344 gender discrimination, 139, 370, 555 racial discrimination, 139, 363, 366 unintentional, 366 disgust bodily norm position, 230–232 CAD triad hypothesis and, 225–226 consequences of, 230 as contextually or situationally dependent, 228 as distinct emotion, debate regarding, 228–229 divinity, association with, 225–226 elicitors of, 229, 230 general morality or character position, 229–230 metaphorical use position, 229 as moral emotion, 240 moral judgments, relation to, 231–232, 607 overview, 240 predictive value of, 226–227 as primary emotion, 225 purity position, 229 shame, relation to, 233 as socially constructed, 225 sympathetic magic and, 230–231 Diskutier Mit Mir, 566 dispositional attributions, 539 distress, prosociality and, 285–286 divinity, ethics of, 133, 503 Doris, J. M., 611 dorsal anterior cingulate cortex, adolescence and, 469 dorsal lateral prefrontal cortex, prosociality and, 289

dorsal raphe nucleus, empathic concern and, 258 Dreier, J., 60 Dresher, Melvin, 274 dual-inheritance theory, 580–581 Duff, Anthony, 391–392 dumbfounding hypothesis, 367–368 Durkheim, Emile, 492, 512, 514 dyadic completion, 140 Ebbinghaus, Hermann, 596 ecological attachment, 421 ecological systems theory, 482 economic games, see prosociality economics behavioral economics empathy and, 251 overview, 11 punishment and, 392 expected consequences and, 172–173 wealth maximization versus utility maximization, 172–173 welfare economics, utilitarianism and, 172–173 elevation awe compared, 239 consequences of, 239 elicitors of, 238 as “hive switch,” 237–238 overview, 240 Elster, J., 79 emergent view of mind, 138 emotional contagion, 15–16, 249, 252–253, 263 emotional deficits, 309 emotional distress, 116, 252, 258–259 emotional empathy, 249, 253, 257 emotional expressions, 254–255 emotion processing, psychopathy and, see psychopathy emotion regulation, psychopathy and, 321 empathic concern alloparenting and, 257–258 dorsal raphe nucleus and, 258 fMRI studies, 258–259 infant cues and, 258 inputs and stimuli, 249 kin recognition and, 259 mechanisms of, 258–259 neotenous cues and, 258 neurobiology and, 258 nucleus accumbens and, 258 overview, 249, 260 oxytocin and, 258 parental nurturing and, 257 perspective taking, relation to, 262 prosociality, relation to, 259–260 somatosensory cortex and, 259 stria terminalis and, 258

Index ventral pallidum and, 258 ventral striatum and, 259 ventromedial prefrontal cortex (vmPFC) and, 258, 259 empathy adolescence and, 466 affective empathy, 236 affect sharing AIDS example, 255–256 anterior cingulate cortex (ACC) and, 253, 261–262 as communication of emotional state between individuals, 251 competitive interaction and, 254–255 evolutionary perspectives on, 255 fMRI studies, 253–256 influence on responses, 252 in-groups versus outsiders, 256 insula and, 261–262 intragroup cooperation versus intergroup competition, 255 limitations of, 251 mechanisms of, 252–254 neurobiology and, 252–253 overview, 249 oxytocin and, 253 perspective taking, relation to, 251 in prairie voles, 253 a priori attitudes and, 255–256 prisoner’s dilemma and, 255 prosociality, relation to, 253, 256–257 in rats, 252–253 shared beliefs and, 256 as socially modulated, 249, 254–257 avoidance of, 236–237 behavioral economics and, 251 cognitive empathy, 236 compassion compared, 236–237 complex relation with morality, 248, 249–250 consequences, 236, 237 costs versus benefits, 248–249 difficulty in defining, 236 empathy failures, 236–237 heuristics and, 250–251 information processing and, 249 interdisciplinary approach to, 250 as moral emotion, 240 moral intuitions and, 263 multidimensional nature of, 250 overview, 15–16, 264 phenomenological conceptualization of, 250 prosociality and, 236, 248, 286–287 proximate explanations, 250 reasoning versus, 262–263 traditional/Indigenous versus Western societies, 411 ultimate explanations, 250 energy usage, social norms and, 87

engagement-centered ethic, 18 epistemic norms, 56–57 Eskine, K. J., 231 ethical considerations, 309 ethical decisions, 479 ethical naturalism, 417 ethical reasons, 256 ethical views, 115–116 ethics of autonomy, 133, 503 care ethics, 478–479 of community, 133, 503 data of ethics, 606 developmental ethics, 411 of divinity, 133, 503 moral realism and, 514–515, 516 virtue ethics moral character and, 33 moral dilemmas and, 115–116 moral philosophy, 2 overview, 2 ethnocentrism, 335 evaluative self, 8, 9 Evans-Pritchard, E. E., 585 event segmentation theory, 165 evidentialism, 56 evolutionary adaptation, 412 evolutionary debunking, 599, 600 evolutionary perspectives on affect sharing, 255 animals, prosociality in, 280–281 biological linkages with other species, 413 cooperation and, 412, 433–434 developmental evolutionary biology theory, 411–413 developmental evolutionary theory, 413 egalitarian social structures, rise of, 412 functional versus evolutionary adaptation, 412 genetic adaptation, 411–412 hypersociality, rise of, 412 infants and toddlers, moral core in, 455 on moral development (see moral development) “moral sense” and, 412–413 natural selection, 411–412 nongenetic inheritances, 413 overview, 11, 18–19, 409 prosociality, 280–281 on punishment, 392 on religion (see religion) “survival of the fittest,” 412 evolved development niche (EDN), 415, 418 executive self, 8–9 expected utility theory (EUT) action-based EUT assigning utility to actions, 157 ball and bucket example, 158

Index expected utility theory (EUT) (cont.) coin flip example, 157–158 etiquette example, 158 fairness and, 161–162 implementation issues, 164–166 lying, low utility assigned to, 157–158, 159, 160 moral decision making and, 160–161 naive rule following and, 158 outcome-based EUT compared, 164–166 overview, 167 retrospective protocol analysis and, 159 rule-breaking, low utility assigned to, 157–158 truth-telling, high utility assigned to, 159, 160 betting examples, 154 event segmentation theory and, 165 expectations, 153 human behavior not conforming to, 153 as normative model, 174 options, 153 outcome-based EUT action-based EUT compared, 164–166 act/omission distinction and, 157 ambulance example, 155–156 implementation issues, 164–166 moral decision making diverging from, 156–157 overview, 154–155 protected values and, 157 status quo bias and, 157 trolley dilemmas and, 156–157 umbrella example, 154–155 utilitarianism and, 155 variations in moral decision making and, 156 overview, 14, 167, 176 utilitarianism versus, 174 utilities, 153, 157 experience–agency model mind perception and, 15, 200–203, 337 moral agency and, 212–213 two-dimensional models compared, 203–204 experientialism, 207–209, 211–212 experimental philosophers, 11, 66–67 experimental philosophy, 67, 69 expertise defense, 611–612 expressivism, 596–597 externalism amoralism and, 63, 66–67 internalism versus, 60, 61, 69 Facebook, 558–559 facial affect recognition deficits, 312–313 Falkenbach, D. M., 313 false consensus bias, 84–85 Fariborz, A., 473 fatalism, 10–11 Feinberg, J., 390

FeldmanHall, Oriel, 11, 16 feminist perspectives on child raising, 417–418 Figner, B., 104 first-nature desires, 424–425 Firth, Raymond, 493 Fiske, S. T., 36–37, 213, 338, 345, 505 Fitness, J., 229 Flanagan, O., 597 Flood, Merrill M., 274 footbridge dilemma, see trolley dilemmas Forgas, J., 132 framing effects moral reasoning and, 174, 178–179 order framing effects, 602, 603 trustworthiness of moral judgments and, 602–603 word framing effects, 602, 603 France, Jesuit missionaries from, 421–422 Frederick, S., 185 freedom defined, 513 liberal free choice distinguished, 513 liberationist ideals distinguished, 513 morality and, 512, 513–514 power compared, 513–514 science of unfreedom, 513 free will artificial agents and, 8 causal determinism and, 10–11 consciousness and, 8 incompatibilism and, 10–11 moral philosophy and, 598 moral responsibility and, 10–11 overview, 10–11 Friedman, W. J., 479 functional adaptation, 412 Funk, Friederike, 18 Galen, L. W., 586–587 Gamez-Djokic, M., 112–113 Gawronski, B., 607 Geertz, Clifford, 496 gender importance in adolescence (see adolescence) trustworthiness of moral judgments and, 601 gender discrimination, 139, 370, 555 genetic adaptation, 411–412 genocide dehumanization generally, 332, 346–347 infrahumanization and, 346 moral disengagement and, 347 moral exclusion and, 347 self-dehumanization and, 347 stage model, 346–347 stereotypes, role of, 346 Geraci, A., 448, 449 Gergen, Kenneth, 512 Germany

Index Diskutier Mit Mir, 566 Nazi Germany, capital punishment in, 358 Gewirth, Alan, 128 Gibbard, A., 60 Gilligan, Carol, 478–479 Giner-Sorolla, R., 230, 345 Glenn, A. L., 310, 311 Goff, P., 346 Goldenberg, A., 223 Goldstein-Greenwood, J., 117 Gonzalez, R., 345 Goodwin, Geoff, 12, 71, 116 Graham, J., 110, 310 Gray, Kurt agentialism and, 212 on dehumanization, 17, 337–338 discontinuity hypothesis and, 202 experience–agency model and, 201–202, 203, 212–213 on mind perception, 199, 206–207, 337–338, 342 moral domains generally, 13–14 Greene, Joshua D. dual-process model, 111, 190 on moral dilemmas, 106–107 on moral judgments, 187, 189 on moral reasoning, 178 omission bias and, 182 process debunking and, 599–600 on sacrificial moral dilemmas, 308 Greenwald, A. G., 132 Gromet, D. M., 47 group dynamics affect sharing and, 255, 256 blame, in-group versus out-group status, 370 human nature, in-group versus out-group status and, 415–416 infants and toddlers, in-group status and expectations regarding fairness, 448 intergroup conflict dehumanization in (see dehumanization) negative perceptions in, 331 moral communication, in-group versus outgroup status, 398–399 moral decision making, in-groups and, 163–164 overview, 12 perspective taking and, 261–262 Guan, Kate, 20–21 guilt adolescence and, 467 anger, relation to, 233 anticipation of, 233 as approach emotion, 234 beneficial, perceived as, 232, 233–234 breastfeeding example, 233 as complex emotion, 232 consequences of, 234–235

elicitors of, 233–234 as moral emotion, 240 moral norms and, 79 overview, 240 prescriptive morality and, 233 prosociality and, 287–288 psychopathy, lack of in, 304 as secondary emotion, 232 self as focus, 232 self-failure as triggering, 232 shame compared, 232–235 similarities with shame, 232 Gürçay, B., 115, 189, 190 Güth, W., 275 Haagensen, L., 346–347 Hagan, J. P., 43–44 Haidt, Jonathan on moral domains, 125 on moral emotions, 223, 229, 231, 238 moral foundations theory (MFT) (see moral foundations theory (MFT)) on moral judgments, 134 on moral motivation, 70–71, 72 Hallpike, C. R., 191 Hamilton, R. H., 319–320 Hamlin, J. K., 439, 441–443, 444, 452–453, 454 Hamlin, Kiley, 19 happiness, 597–598 Hare, R. M., 176, 178, 188–189, 596–597 Harris, L., 338, 345 Harris, R. J., 179 Hartshorne, H., 34 Haslam, Nick, 17, 213, 332, 336–337, 340, 342, 610 hate speech, social norms and, 87 hedonism, 597–598 Heereken, H. R., 70 Held, Virginia, 417–418 Heltzel, Gordon, 20–21 Helzer, E. G., 35 Henle, M., 173 Herder, Johann, 502 heteronomous morality, 419 heuristics empathy and, 250–251 moral heuristics, 175–176 Hickman, Jacob R., 11, 19–20 Hinduism, morality of Brahman widows eating fish in, 498–501, 506, 511 hippocampus, psychopathy and, 316–317 Hmong people, ritual practice of, 509–511 Hobbes, Thomas, 423 Hodson, G., 343 Holocaust, 333, 343 human nature capacities required for proper species members, 414

Index human nature (cont.) evolved development niche (EDN) and, 415 influences on development, 415 in-group versus out-group status and, 415–416 moral development and, 415–417 qualities needed for life fulfillment, 414 species-specific features, 414 human nature–human uniqueness model, 200, 203 Hume, David on moral judgments, 6 moral philosophy and, 596 “ought” versus “is,” 596 prosociality and, 284 Humean theory of motivation anti-Humeans versus, 62, 69–70 desires and, 57–58, 59 “Hume’s problem” and, 12–13, 59, 61–62 “Hume’s problem” cognitivism and, 12–13, 59–62 formulation of, 59, 61–62 Humean theory of motivation and, 12–13, 59, 61–62 internalism and, 12–13, 59, 60–62 overview, 12–13, 58 Hungary, dehumanization in intergroup conflict in, 331–332, 341 hunter–gatherer communities blame in, 355–356 punishment in, 355–356 hypothalamic-pituitary-adrenocortical axis, 252 hypothalamus, empathic concern and, 258 identity criminal law, admission of prior crimes and bad acts to show, 541 diachronic identity, 9 moral change, effect of, 44 moral character, relation to, 44–45, 48 moral philosophy and, 598 neurodegenerative disease, effect of, 44–45 overview, 8 political attitudes and behavior and, 556 synchronic identity, 9 “true self,” 9 image shame, 235 implicit association task, 341 incidental affect negative incidental affect, 605 positive incidental affect, 605 trustworthiness of moral judgments and, 604–605 incompatibilism, 10–11 India Brahman widows eating fish, morality of, 498–501, 506, 511 collective violence in, 516

infant cues, 258 infants and toddlers child raising (see child raising) differentiating between help versus harm, 436–437 evaluation generally, 435, 454 expectations generally, 435, 454 fairness, expectations regarding contextual expectations, 448 equal distribution of resources, 447–448 individual response, regarding, 448–449 in-group status and, 448 overview, 435–436, 447 preference of fair over unfair distributors, 449 third-party observer response, regarding, 449 help versus harm, evaluation of box scenario and, 443, 444 contextual influences on, 445–447 hill scenario and, 441, 442–443 intent, consideration of, 445 overview, 435–436, 441 preference of helpers over hinderers, 441–442 preferential looking scenario and, 441–442 social basis of, 442–444 help versus harm, expectations regarding contextual influences on, 438 hill scenario and, 440 individual response, regarding, 438–439 intent, consideration of, 439–440 overview, 435–436, 437 third-party observer response, regarding, 440–441 moral core in current issues, 453 determining who is “good,” 450–451 evolutionary perspective, 455 family and parental variation, impact of, 455 generalizations, 451 ManyBabies Project, 453–454 nature of, 449–450 ontogenetic perspective, 454–455 other explanations, 436 outstanding questions, 455 overview, 434–435 replication crisis, 452–453 research methods, 435–436 reward and punishment, 451–452 self-interest and, 451 WEIRD (Western, educated, industrialized, rich, democratic) problem, 452, 453 moral intuitions in, 503 morality plays, 435 overview, 19 prosociality in, 281–282, 440, 452

Index violation-of-expectation (VoE) paradigms, 435 inferior frontal gyrus, moral decision making and, 317 infrahumanization criticism of, 339 genocide and, 346 intergroup conflict and, 344 overview, 334–336 as relative dehumanization, 342 inhibitive moral agency, 8–9 insula adolescence and, 467, 470 affect sharing and, 261–262 psychopathy and, 315 insurance, 186–187 intentional agents, 261, 553 intentionality, 364, 367, 368 intergroup conflict blame and, 357 dehumanization in (see dehumanization) negative perceptions in, 331 punishment and, 357 internalism amoralism and, 63, 66–67 conditional internalism, 61, 65, 67–69 externalism versus, 60, 61, 69 “Hume’s problem” and, 12–13, 59, 60–62 psychopathy, ramifications of, 63–65 rational agents and, 61 unconditional internalism, 61 ventromedial prefrontal cortex (vmPFC) lesions, ramifications of, 65–66 interpersonal orientation task, 162–163 intuitions blame and, 367–368 in children, 503 descriptive models of moral psychology and, 177 moral foundations theory (MFT), role in, 132, 135, 550–551 moral intuitions blame and, 367 criminal law and, 524 empathy and, 263 harm perceptions and, 139 as heuristics, 175 in infants and toddlers, 503 in MFT, 136, 512 moral intuitionism, 606, 609–611 moral judgments and, 132, 134 overview, 596–597 as third-person rules of reason, 497–498 trustworthiness of moral judgments and, 610–611 utilitarianism and, 176, 177–178, 191 normative models of moral psychology and, 176

theory of dyadic morality (TDM), role in, 141 traditional/Indigenous versus Western societies, 411 utilitarianism and, 176 intuitive blame anti-Muslim bias example, 529 criminal blame, influence on, 524, 535 cultural considerations, 544 for incomplete wrongdoing, 528–529 mitigation of, 544 moral character, role of evidence and, 542 overview, 524 recklessness framework compared, 532, 534–536 proximity test and, 528–529 recklessness and, 533 social function of, 523–524 Isen, A. M., 284 Islam, alcohol consumption in, 584 isolation effects overview, 188 rent control example, 188 taxation example, 188 Israel–Palestine conflict, dehumanization in, 347 Iyer, R., 310 James, William, 469, 596 Jensen, Lene, 504 Jesuit missionaries, 421–422 Jin, K.-S., 438 Johnson, B. D., 258 Jones, A., 229 Jones, L., 307 Jordan, A. H., 231 Joseph, C., 504–505 journals, 1–2 Joyce, M. A., 179 Jurney, J., 187 justification for disengagement, 561 justified reasons, 8, 368 posthoc justification, 139 for punishment in criminal law, 525, 544 self-serving justification, 9 Kagan, Jerome, 471, 472 Kahane, Guy, 13 Kahn, E., 303 Kahneman, D., 173–174, 185, 189 kama muta, 237 Kanakogi, Y., 440, 445 Kane, M. J., 113 Kant, Immanuel on animals, 206 on child raising, 418–419, 420 deontological moral realism and, 514

Index Kant, Immanuel (cont.) on indirect duty, 205 on moral absolutes, 493–494 on moral judgments, 6 moral realism and, 515 on moral reasoning, 128 on punishment, 392–393 Karantzas, G. C., 332 Keizer, K., 84 Kelly, D., 390 Kelman, H., 333 Keltner, D., 238, 505 Kervyn, N., 37 Kiehl, K. A., 308 King, R. D., 258 kin selection in animals, 280 religious systems and, 578 Knighten, K. R., 114 Knobe, J., 202–203 Knoller, Marjorie, 537–539 Koenigs, M., 65, 309 Kogut, T., 252 Kohlberg, Lawrence on adolescence, 475 critiques of, 503 developmental moral realism and, 501–502 liberal bias of, 502 on moral development, 419 moral realism and, 511–512 on moral reasoning, 65, 128, 191, 192 stage-based theory of moral development (see stage-based theory of moral development) test of morality, 305, 306–307, 310 theory of moral development, 306 Koleva, S., 310 Köster, M., 438 Kraepelin, E., 303 Kronfelder, M., 332, 348–349 Krosch, A., 104 Kruepke, M., 309 Krupka, E. L., 89 Kteily, N., 340, 341, 343, 347, 348 !Kung people, punishment among, 361 Laidlaw, James, 512–513 Lambek, M., 514 Landry, A. P., 340, 348 Landy, Justin, 12, 40, 71, 113, 116 Lang, J., 347 Latané, B., 34 Laurin, Kristin, 20–21, 610 law criminal law (see criminal law) moral categorization and, 215–216 tort law deterrence in, 186

moral reasoning and, 186 victim compensation, 186 Leach, C. W., 234–235 Leary, S., 66 Lee, R. B., 361 Leeman, R. F., 116 left middle gyrus, moral decision making and, 316 legality principle, 525 Legros, S., 79 Le Jeune, Paul, 421–422 Leshner, S., 183 Levin, P. F., 284 Leyens, J.-P., 335, 336, 339–340, 344 life history theory, 584 Lilienfeld, S. O., 310 Lindenberg, S., 84 Link, N. F., 307 local norms, 507 Lorenz, Konrad, 258 Loughnan, S., 332, 340 Mackie, J. L., 60 Mahapatra, Manamohan, 128 Makah people, whaling and, 507–509, 511 Malle, Bertram F., 11, 17–18, 79, 203, 204, 544, 598 Manne, K., 348 ManyBabies Project, 453–454 Maoz, I., 347 Marsh, A. A., 312, 313 Marshall, J., 310, 311 Mashek, D. J., 222–223 Matthews, Margaret, 19 May, J., 610 May, M. A., 34 McCaffrey, E. J., 178, 188 McCauley, C. R., 229, 347 McDonald, K., 610 McGeer, Victoria, 18 McGregor, Edgar, 33, 36, 49 McPherson, D. H., 421 mechanistic dehumanization, 337 medial frontal gyrus, moral decision making and, 316 medial prefrontal cortex (mPFC), prosociality and, 288, 289 Meindl, P., 34 Melinkoff, D. E., 40, 41–42 Mencius, 409 mens rea, see criminal law mental illness, 598 meta-dehumanization, 343 metaphysical beliefs, 495 MeToo Movement, 370 MFT, see moral foundations theory (MFT) microaggressions, 366, 370 Milgram, Stanley, 34, 273–274

Index military veterans, moral injury and, 118 Mill, J. S., 183 Miller, Joan, 503 mind perception, see also theory of mind affect–moral and mental regulation–reality interaction model, 203, 204 agency–communion model, 200, 203 body–heart–mind model, 203, 204 capacity-based models, 199–203 consciousness and, 199, 202 cooperation and, 199–200 dehumanization and, 337–338, 345 discontinuity hypothesis, 202–203 experience–agency model, 15, 200–203, 337 human nature–human uniqueness model, 200, 203 intentional mental states, 199 moral categorization, relation to, 198–199, 214–216 moral patiency, influence of, 209–210 overview, 15, 198, 199 phenomenal mental states, 199 three-dimensional models, 203–204 trait-based models, 199–200, 203 two-dimensional models, 199–203 warmth–competence model, 15, 200, 203 weaknesses of standard models, 214–215 mindreading, 412, 578, 581, see also theory of mind model of moral motives (MMM), 552, 553 Model Penal Code (MPC) overview, 524 recklessness and, 533, 537 Molden, D., 112–113 Molendijk, T., 118 Molho, C., 359–360 Moll, J., 70 Moore, C., 610 Moore, G. E., 596 Moore, S. A., 113 moral absolutes abstract nature of, 494, 515 Brahman widows eating fish example, 498–501, 506, 511 care as, 507 comparative research and, 493 competing moral absolutes, 509–511 criticisms of, 494–495 defined, 493 existential issues in, 497 Hmong ritual practice versus Christian sanctity, 509–511 insufficiency of moral principles alone, 498–501, 506, 511 local application of, 498, 500, 506 metaphysics and, 494

multiple interpretations of single moral absolute, 507–509, 511 Native American whaling example, 507–509, 511 objective values, 498 oversimplification, avoiding, 515 overview, 493 as self-evident truths, 494 universalization and, 493–494 universally binding abstract rules, 497–498 moral agency agentialism, 211–212 attribution of, 15, 211–216 dehumanization and, 213, 337–338 experience–agency model and, 212–213 moral agents, 281, 417 moral character and, 213 moral patiency compared, 206–207, 211, 214–215 overview, 198 psychological agency and, 212, 214 warmth–competence model and, 213 moral agents, 281, 417 moral behavior moral decision making and, 3–4 moral sense and, 3 overview, 3 unintentional behavior, 4 moral categorization artificial intelligence and, 216 law and, 215–216 mind perception, relation to, 198–199, 214–216 moral agency (see moral agency) moral patiency (see moral patiency) moral character bravery, relation to, 47 compassion, relation to, 47 competence versus morality, 38–40, 48 criminal law, role in (see criminal law) cross-situational stability of, 34–35 defined, 33 existence of, 34–35, 48 fairness, relation to, 47 hard working, relation to, 47 honesty, relation to, 47 identity, relation to, 44–45, 48 inferences of behavior, from, 45 decision making process, from, 46–47 facial structure, from, 47 mental state, from, 46 motivation, from, 46 post-action mental state, from, 47 suffering, from, 47 intuitive blame, role of evidence and, 542 overview, 524

Index moral character (cont.) recklessness framework compared, 532, 534–536 loyalty, relation to, 47 moral agency and, 213 moral dilemmas and, 113–114 morality dependence hypothesis and, 39 morality dominance hypothesis and (see morality dominance hypothesis) moral judgments, relation to, 48–49 multifaceted evaluations of, 48 negative versus positive character, 45–46 overview, 12, 33–34 ratings of, 35 self-control, relation to, 47 skepticism regarding, 34, 48 sociability versus morality, 38–40, 48 trustworthiness, relation to, 47 valence effect and, 45–46 virtue ethics and, 33 warmth/coldness dichotomy and morality versus, 36–38 overview, 35–36 moral communication overview, 7, 18, 382, 401 punishment as in absence of audience, 394–395 animals, punishment of, 399 as attitude-focused activity, 401 consequentialism and, 395 content of communication, 397 as dialogical activity, 401 future research, 399–400 hedonic effects and, 396 in-group versus out-group status, 398–399 material nature of punishment and, 396 means of communication, 398 as nonlinguistic form of communication, 401 overview, 382, 401 power of, 401 research on, 396 retribution and, 394, 395 as specifically moral response to wrongdoing, 395 systematic approach generally, 396–397 target of communication, 397–398 social norms, as facilitating as attitude-focused activity, 386–387, 401 attitudinal response and, 387 chimpanzees versus humans, 385–386 as dialogical activity, 386–387, 401 overview, 382, 385 pointing gesture, 385–386 shared intentionality and, 386–387 moral conventional task (MCT), 307 moral decision making

amygdala and, 316 angular gyrus and, 316 cingulate cortex and, 316 event segmentation theory and, 165 expected utility theory (EUT) (see expected utility theory (EUT)) fairness and, 161–162 in-groups and, 163–164 interpersonal moral decision making, 162–163 interpersonal orientation task and, 162–163 left middle gyrus and, 316 medial frontal gyrus and, 316 moral behavior and, 3–4 moral character, inferences of from, 46–47 orbitofrontal cortex and, 316 overview, 14, 153, 167 posterior cingulate and, 316 prefrontal cortex and, 316 psychopathy and, 163, 316–317 superior temporal gyrus and, 316 temporal cortex and, 316, 317 temporal pole and, 316 temporoparietal junction (TPJ) and, 316 transformation of actions, 166–167 transformation of options, 166–167 moral development during adolescence, 462–463 capacities required for proper species members, 414 child raising (see child raising) developmental evolutionary theory, 413 embeddedness, 424 embodiment, 424 enactedness, 424 evolved development niche (EDN) and, 415 extendedness, 424 first-nature desires, 424–425 human nature capacities required for proper species members, 414 evolved development niche (EDN) and, 415 influences on development, 415 in-group versus out-group status and, 415–416 moral development and, 415–417 qualities needed for life fulfillment, 414 species-specific features, 414 influences on development, 415 in-group versus out-group status and, 415–416 moral development and, 415–417 optimal morality overview, 409–411 traditional/Indigenous versus Western societies, 411 qualities needed for life fulfillment, 414

Index recommendations, 426 species-specific features, 414 stage-based theory of (see stage-based theory of moral development) traditional/Indigenous approach child-centered nature of, 422 community and, 420 ecological attachment and, 421 honoring of children in, 420–421 overview, 420 personhood in, 420 stories, role of, 421 vision quests, 421 triune ethics metatheory (TEM) and, 415 undercare, 423 unnestedness, consequences of, 422–424 Western approach autonomous morality, 419 autonomy and, 418–419 failures of, 425–426 heteronomous morality, 419 heteronomy and, 419 submission of children to adults, 418 moral dilemmas adolescence, during, 476–477 affective regret versus cognitive regret, 117 conflict model, 115 cooperation and, 183–184 “Crying Baby” thought experiment, 106 defined, 101, 102–103 descriptive markers, 103 genuine moral dilemmas, 102–103 liver transplant example, 107–108 moral foundations theory (MFT) and, 110 moral injury resulting from, 117–118 normative nature of definition, 103 overview, 13, 101, 118–119 personal experience of, 103 psychological research on, 103–104 rational deficit hypothesis and, 305, 306, 308–309 resolution of Cognitive Reflection Test and, 116 conflict model, 115–116 content, 111–114 deontology versus utilitarianism, 111–113 dual-process model, 111, 190 emotional response and, 116 moral character and, 113–114 moral myopia model, 113 overview, 111 process, 114–116 religion and, 113 role-based obligations and, 113 values brought to bear, 111–114 verdict, 116–117 virtue ethics and, 115–116 weighing competing values, 114–116

response times (RT), 189–190 sacrificial moral dilemmas, 305, 306, 308–309 “Sophie’s Choice” dilemma, 105 sources of absolute constraints, 109 automatic aversion, 109–111 emotion versus reason, 106–107 overview, 105, 111 protected values, 107–108 tragic trade-offs, 107–108 value commensurability, 108–109 taboo trade-offs, 107 trivial moral trade-offs, 101–102 trolley dilemmas, 101, 105 ultimate moral dilemmas, 104–105 utilitarianism and, 102, 111–113, 117 utilitarian moral dilemmas, 181 moral disengagement genocide and, 347 intergroup conflict and, 334–335, 345 overview, 333–334 moral domains conventional domains versus, 127, 129–130, 384 Haidt and (see moral foundations theory (MFT)) how moral judgments made, 124, 125 (see also Turiel, Elliot ) Kohlberg, critique of, 503 modern evidence, 125–126 moral foundations theory (MFT) (see moral foundations theory (MFT)) moral mind and, 124, 125 moral realism and, 503 moral world and, 124, 125 overview, 13–14 paradigms, 125, 143–144 temporal context, 124 theory of dyadic morality (TDM)(see theory of dyadic morality (TDM)) Turiel and (see Turiel, Elliot) what acts deemed immoral, 124, 125 moral dumbfounding, 367–368 moral emotions adolescence and, 466–467 blame, role in, 368 CAD triad hypothesis, 225–226 cognitivism and, 72 consequences of, 224–225 disinterested elicitors of, 223 elicitors of, 224–225 emotion processing, psychopathy and (see psychopathy) emotion regulation, psychopathy and, 321 families of, 223 “feel good” versus “do good,” 223 functions of, 6–7 identification of, 6

Index moral emotions (cont.) moral dilemmas and resolution of, emotional response and, 116 sources of, emotion versus reason, 106–107 moral foundations theory (MFT), role in, 132–133, 135 moral judgments, relation to, 70–72, 222 noncognitivism and, 69, 71 other-condemning emotions generally, 225, 240 other-praising emotions generally, 237, 240 other-suffering emotions generally, 235, 240 overview, 15, 222 prosociality, role in affective states, 284 distress, 285–286 empathy, 286–287 guilt, 287–288 overview, 223, 284 self-conscious emotions, 19, 240, 465–466 social functioning and, 222–223 moral exclusion intergroup conflict and, 334–335, 347 overview, 334 Moral Foundations Questionnaire (MFQ), 310 moral foundations theory (MFT) affect and, 131–132 affective modules, 134 alternative approaches, 505 automatic aversion and, 110 “Big Three” of morality and, 504 comparison with other approaches, 126 concurrent perspectives, 133 cultural differences and, 132, 135 cultural diversity, failure to account for, 135 deconstruction of, 133–134 intuitions, role of, 132, 135, 550–551 modern evidence, in light of, 134–137 modular moral mind, 136 moral emotions, role of, 132–133, 135 moral foundations, 133–134, 136–137 moral intuitions in, 136, 512 moral mind and, 134 moral realism and, 504–505, 512 moral relativity and, 131–132 moral world and, 133–134 overview, 13–14, 131, 132, 143 political attitudes and behavior and (see political attitudes and behavior) rational deficit hypothesis and, 305–306, 309–310, 311 reasoning and, 134–135 understanding paradigm of, 143 moral heuristics, 175–176 moral injury COVID-19 and, 118 military veterans and, 118 moral dilemmas, resulting from, 117–118

moral intuitions blame and, 367 criminal law and, 524 empathy and, 263 harm perceptions and, 139 as heuristics, 175 in infants and toddlers, 503 in MFT, 136, 512 moral intuitionism, 606, 609–611 moral judgments and, 132, 134 overview, 596–597 as third-person rules of reason, 497–498 trustworthiness of moral judgments and, 610–611 utilitarianism and, 176, 177–178, 191 moralistic religions life history theory and, 584 overview, 584 reproductive strategies and, 584 social complexity and, 584–585 morality dependence hypothesis, 39 morality dominance hypothesis conditional valuation of moral traits and, 40–41 core goodness traits and, 42–44 dependent variables in, 41–42 framing of, 42 overview, 39 preferences and, 41–42 self-righteousness and, 40 value commitment traits and, 42–44 moral judgments anger, relation to, 607 blame (see blame) cognitivism and, 72–73 defined, 5 disgust, relation to, 231–232, 607 evolution of, 410 functions of, 5 good–bad evaluations, 5 internalism (see internalism) kinds of, 5 moral character, relation to, 48–49 moral emotions, relation to, 70–72, 222 moral heuristics and, 175–176 moral intuitions and, 132, 134 moral judgment internalism (see internalism) moral motivation, relation to, 55 moral reasoning (see moral reasoning) negative emotions and, 72 noncognitivism and, 71 overview, 5 political judgments as, 191 praise as, 5 in processing hierarchy, 5 theory of dyadic morality (TDM) and, 142 traditional/Indigenous versus Western societies, 411

Index trustworthiness of (see trustworthiness of moral judgments) wrongness judgments, 5, 13–14, 321, 367 moral learning, 12 moral licensing, 9 moral motivation cognitivism (see cognitivism) conditional internalism and, 67–69 externalism (see externalism) “Hume’s problem” and (see “Hume’s problem”) internalism (see internalism) moral character, inferences of from, 46 moral judgments, relation to, 55 motivational mental states action, relation to, 55–56 background beliefs, relation to, 56 different norms, as subject to, 56–57 epistemic norms and, 57 functional role of, 56 overview, 58 practical norms and, 57 noncognitivism (see noncognitivism) overview, 12–13, 55, 73–74 traditional/Indigenous versus Western societies, 411 moral myopia model, 113 moral norms conventional norms versus, 387 guilt and, 79 overview, 382, 401 psychological attitudes toward blame and, 390–391 independent normativity, 389–390 as intrinsically motivating, 389 overview, 387, 389 punitive reactions, 390–392 social norms, relation to, 79–80 substantive content of monist approach, 388 overview, 387 pluralist approach, 388–389 reductive monist approach, 388 variations in, 387–388 moral patiency alien species and, 208 animals and, 208, 209 attribution of, 204–210, 214–216 binary conception of, 205–206 dehumanization and, 337–338 deontology and, 206, 215 experientialism and, 207–209 harmful disposition and, 208–209 indirect duty and, 205 measurement of, 206–207 mindedness and, 204 mind perception, influence on, 209–210 moral agency compared, 206–207, 211, 214–215

moral considerability contrasted, 204 moral standing and, 204 normative nature of, 205 overview, 198 perception of, 205 psychological agency and, 208 psychological patiency and, 205, 207 psychological traits and, 208–209 utilitarianism and, 206, 215 moral philosophy act utilitarianism perspective in, 1 blame and, 598 deontology perspective in (see deontology) free will and, 598 happiness and, 597–598 hedonism and, 597–598 historical background, 596–597 identity and, 598 isolation from psychology, 596–597 mental illness and, 598 moral judgments generally, 598 “ought” versus “is,” 596 overview, 1, 21–22, 612 reconnection with psychology, 597 rule utilitarianism perspective in, 2 situationism and, 597 trustworthiness of moral judgments, impact of, 606–607 (see also trustworthiness of moral judgments) utilitarianism (see utilitarianism) virtue ethics perspective in, 2 moral realism anthropology and, 496–497, 512–513 anti-realist approaches, 512, 514–515 autonomy, ethics of, 503 “Big Three” of morality and, 503–504 community, ethics of, 503 concepts for understanding moral behavior of others, 492 cultural–developmental approach, 504 defined, 492 deontological moral realism, 514 descriptive moral realism, 495, 496–497, 512–513, 516–517 developmental moral realism, 501–502 different reasoning and, 500–501 divinity, ethics of, 503 ethical approaches, 514–515 ethics and, 514–515, 516 Kohlberg and, 511–512 moral absolutes (see moral absolutes) moral domains and, 503 moral foundations theory (MFT) and, 504–505, 512 moral prescriptions, 495–496 objective right and, 493 overview, 19–20 phenomenological approaches, 514–515

Index moral realism (cont.) pluralistic moral realism, 495, 496–497, 502–503, 506, 512–513, 515 relational models theory and, 505 relativism, criticism of, 496, 506 stage-based theory of moral development and, 501–502 ubiquity of, 492, 493 moral reasoning attribute substitution allocation and, 185–186 criminal law and, 186, 187 insurance and, 186–187 overview, 185 tort law and, 186 child surcharge/bonus example, 174 consequential versus moral judgments, 187 COVID-19 and, 172 cultural factors, 191–192 descriptive models of moral psychology biases and, 178 intuitions and, 177 normative models compared, 173 evolution of, 410 framing effects, 174, 178–179 intuitions and dual systems, 189–190 overview, 188–189 isolation effects overview, 188 rent control example, 188 taxation example, 188 moral experience and, 516 nonutilitarian versus utilitarian decisions business profits/expenses example, 179 marriage tax example, 179 old-growth forest example, 183 omission bias, 180–182 (see also omission bias) omissions and, 180 overview, 14–15, 173, 178 parochialism and, 183–185 protected values and, 182–183 trolley dilemmas, 180 voting example, 184 normative models of moral psychology Asian Disease Problem and, 174–175 biases and, 176–177, 178 descriptive models compared, 173 expected utility theory (EUT) as, 174 framing effects, 174, 178–179 individual choices versus moral decisions, 174–175 intuitions and, 176 utilitarianism as, 175 overview, 14–15

parents and, 476 pertussis vaccine example, 175 political judgments, 191 prescriptive models of moral psychology biases and, 178 decision analysis and, 177 division example, 177 utilitarianism as, 177–178 teaching moral thinking, 192 traditional/Indigenous versus Western societies, 411 utilitarianism and, 14–15 whooping cough vaccine example, 175 moral residue, 117–118 moral responsibility, 10–11 moral sanctions, 5–6 moral sense, 3 moral shame, 235 moral standards, 3 moral typecasting, 205, 208, 214 motivational mental states action, relation to, 55–56 background beliefs, relation to, 56 different norms, as subject to, 56–57 epistemic norms and, 57 functional role of, 56 overview, 58 practical norms and, 57 Murphy, S., 72–73 My Lai massacre, 263 Nadler, Janice, 18, 20, 610 Narvaez, Darcia, 11, 18–19 Native Americans, whaling and, 507–509, 511 natural selection, 411–412 negative incidental affect, 605 negligence in criminal law, 543 Nelson, C., 36 neotenous cues, 258 neurobiological perspectives affect sharing and, 252–253 brain impairment criminal behavior and, 317, 320–321 prosociality and, 289–290 psychopathy and, 315–316, 320–321 developmental neuroscience, 415–417 empathic concern and, 258 overview, 11, 18–19, 409 perspective taking and, 260–261 prosociality and brain impairment, studies of, 289–290 emotional involvement and, 288–289 functional localization, 289 insular cortex, 470 overview, 274, 288 perspective taking and, 288–289

Index psychopathy and (see psychopathy) theory of dyadic morality (TDM), consistency with neurobiology, 141–142 neurodegenerative disease, effect on identity, 44–45 neuromoral theory, see psychopathy Newman, J. P., 309 Nichols, Shaun, 11, 14, 44, 45, 64, 67–69, 598 Niemi, Laura, 11, 14 Nietzsche, Friedrich, 492 Noel, Robert, 537–539 noncognitivism cognitivism versus, 59–60, 69 moral emotions and, 69, 71 moral judgments and, 71 normative beliefs, 69–70, 79–80, 84, 89–90, 91 normative decision theory, 58 normative expectations, 5, 81, 82, 85, 86, 89, 90, 91–92 normative judgments, 70, 495 normative standards, 262, 365–366 norms democratic norms, 556 descriptive norms, 87 epistemic norms, 56–57 local norms, 507 moral norms (see moral norms) practical norms, 57–58 social norms (see social norms) Northern Ireland, dehumanization in intergroup conflict in, 346 nucleus accumbens empathic concern and, 258 prosociality and, 288 Nussbaum, M. C., 69 ODD (oppositional defiant disorder), 304 Oliner, P., 262 Oliner, S., 262 Olshan, K., 36 omission bias “but for” causality and, 181 deontology and, 181 direct causality and, 181–182 vaccines and, 181 ontogenetic origins of morality, 433 Opotow, S., 333, 334 oppositional defiant disorder (ODD), 304 orbitofrontal cortex, moral decision making and, 316 order framing effects, 602, 603 organizational morality, 12 origins of morality, 463 outcome-based EUT, see expected utility theory (EUT) Over, H., 349

over-imitation, 384–385 oxytocin adolescence and, 472–473 affect sharing and, 253 empathic concern and, 258 Pain versus Gain (PvG) paradigm, 277–278, 286, 287 Paluck, E. L., 92 parochialism, 183–185 Path Model of Blame, 364, 366 Payne, B. K., 611 Peabody, D., 36 personal identity, see identity perspective taking affect sharing, relation to, 251 amygdala and, 260–261 anterior cingulate cortex (ACC) and, 261 empathic concern, relation to, 262 “imagine-other” perspective, 261 “imagine-self” perspective, 261 intergroup relations and, 261–262 as mental simulation, 260–261 neurobiology and, 260–261 overview, 249, 260 temporoparietal junction (TPJ) and, 260–261 ventromedial prefrontal cortex (vmPFC) and, 261 pertussis vaccine, 175 Petty, R., 132 Phelan, M., 202, 203 phenomenology appearances versus experiences, 514 moral realism and, 514 phronesis, 426 phylogenetic origins of morality, 433–434 Piaget, Jean, 128, 419, 465, 475, 501 Pinel, Phillipe, 303 Pizarro, D. A., 49, 309 Pizzirani, B., 332 Plato, 514 Pliskin, R., 223 pluralistic ignorance, 85 pluralistic moral realism, 495, 496–497, 502–503, 506, 512–513, 515 pointing gesture, 385–386 political attitudes and behavior Affective Harm Account (AHA) and, 552–553 constructiveness of contact and dialogue, improving inclusion, importance of, 565–566 opportunity to be heard, importance of, 565–566 overview, 565, 566–567 respect, importance of, 565–566 shared moral values, importance of, 565

Index political attitudes and behavior (cont.) encouragement to act in support of policies democratic and constructive political behavior, 555 identity concerns, 556 individual action, 555 inspiring others, 555 less democratic political behavior, 556 overview, 555 violence, 556 interventions to improve motivation for engagement, 562 contact interventions, 564–565 exaggerated perceptions, correcting, 564–565 first-order misconceptions, 563 group boundaries, deemphasizing, 565 individual attitudes, targeting, 562–564 interpersonal relationships, targeting, 564–565 misperceptions, correcting, 563–564 perspective-taking interventions, 563 second-order misconceptions, 563–564 self-fulfilling prophecies, avoiding, 562 showing versus telling, 564 stereotypes, correcting, 564–565 justifications for disengagement blaming opponents, 561 lack of intent or knowledge, 561 moral righteousness, 561 model of moral motives (MMM) and, 552, 553 moral foundations theory (MFT) and binding foundations, 551 care foundation, 551 conservatives, 551 criticisms of, 551–552 individualizing foundations, 551 intuitions, role of, 550–551 liberals, 551 overview, 550 predictive value of, 551 purity foundation, 551 morality, role of liberals versus conservatives, 550 as motivating political action, 554–555 overview, 550, 566 political polarization, 558 moral judgments and, 191 moral reasoning and, 191 overcoming barriers to engagement, 562 overview, 20–21, 549 political beliefs as causing moral beliefs, 553–555 political polarization morality, role of, 558 overview, 549, 566 psychological constructionism and, 139, 143

psychological disposition, role of, 550 roots of, 550 undermining motivation to engage constructively with opponents active forces pushing people away from dissimilar others, 559–561 belonging and social connectedness, need for, 559 emotions, preserving, 559 encouraging engagement in principle but obstructing engagement in practice, 557–558 energy, preserving, 559 liberals versus conservatives, 559–561 overview, 556–557 passive forces pulling people toward similar others, 558–559 political polarization and, 558 on social media, 558–559 stereotypes and, 559–560 understanding world, psychological need for, 558–559 Poon, Kean, 16–17, 598 population growth blame and, 355, 357 punishment and, 355, 357 positive incidental affect, 605 posterior cingulate moral decision making and, 316 psychopathy and, 316–317 posthoc justification, 139 power in anthropology, 513 freedom compared, 513–514 practical norms, 57–58 prairie voles, affect sharing in, 253 praise as moral judgment, 5 prefrontal cortex adolescence and, 468–469, 471–472 moral decision making and, 316 psychopathy and, 315, 316–317 transcranial direct current stimulation of, 319–320 Premack, A. J., 436–437 Premack, D., 436–437 Prichard, H. A., 596–597 Prichard, J. C., 303 Prinz, J., 63, 64, 202–203 prisoner’s dilemma affect sharing and, 255 prosociality and, 274–275 religious systems and, 579 proactive aggression, 320 process debunking deontological moral judgments, arguments against, 599–600 evolutionary debunking, 599, 600

Index property blame and, 356 punishment and, 356 prosociality affective states and, 284–285 affect sharing, relation to, 252–253, 256–257, 263 amygdala and, 288–289 in animals (see animals) anterior cingulate cortex (ACC) and, 288 attachment and, 473 caudate nucleus and, 288, 332 in children, 281–282 climate change and, 278 cooperativeness and, 200 developmental trajectory of, 281–282 dorsal lateral prefrontal cortex and, 289 economic games, studies involving dictator game (DG), 275, 278 distributive justice, limited to, 276 limitations of, 276–277, 290 money allocation tasks, 276 options, limiting of, 276–277 overview, 274, 279 Pain versus Gain (PvG) paradigm, 277–278, 286, 287 prisoner’s dilemma, 274–275 punishment and, 277 trust game (TG), 275–276 ultimatum game (UG), 275, 276–277 uncertainty and, 278 empathic concern, relation to, 259–260, 262 empathy and, 236, 248 evaluative self and, 9 evolutionary perspective, 280–281 in infants and toddlers, 281–282, 440, 452 laboratory study of, 274–279 medial prefrontal cortex (mPFC) and, 288, 289 money, research involving, 290 moral emotions, role of affective states, 284 distress, 285–286 empathy, 286–287 guilt, 287–288 overview, 223, 284 motivations of other-regarding motives, 283–284 overview, 274, 282 reputation, 283 risk perspective, 282–283 self-interest, 282 neurobiology and brain impairment, studies of, 289–290 emotional involvement and, 288–289 functional localization, 289 insular cortex, 470 overview, 274, 288 perspective taking and, 288–289

nucleus accumbens and, 288 overview, 15–16, 274, 290–291 peer influence and, 19, 475 punishment and, 277 religion and, 586–587 rise of, 273–274 subgenual anterior cingulate and, 332 ventral striatum and, 288 protected values outcome-based EUT and, 157 as source of moral dilemmas, 107–108 psychological agency, 208, 212, 214 psychological altruism, 73–74, 279–280 psychological constructionism, 138–139, 143 psychological egoism, 73–74 psychological essentialism, 339–340 psychological patiency, 205, 207 psychopathy affective impairment as cause of, 305 as amoralism, 63, 304 amygdala and, 315, 316–317 angular gyrus and, 315, 316–317 anterior cingulate cortex (ACC) and, 315 brain impairment and, 315–316, 320–321 cognitive control and, 321 criminal behavior, as predictor of, 304–305 emotion processing and affective startle eye blink response, 313–314 approach–avoidance task (AAT), 314–315 facial affect recognition deficits, 312–313 future research, 320 overview, 311 physiological evidence of emotion deficits, 313–315 shallow emotional experience, 311–312 skin conductance, 313 emotion regulation and, 321 future research, 320–321 guilt, lack of, 304 insula and, 315 internalism, ramifications for, 63–65 interventions brain stimulation techniques, 319–320 overview, 303 psychosocial interventions, 319 transcranial direct current stimulation, 319–320 moral decision making and, 163, 316–317 neuromoral theory brain impairment and, 315–316 causality and, 317 moral decision making and, 316–317 neuroimaging, 317 normal individuals, moral decision making compared, 316 overview, 303, 315 temporal cortex and, 316–317

Index psychopathy (cont.) temporal lobe and, 316 temporal pole and, 316–317 unresolved issues, 317–319 overview, 16–17, 303 posterior cingulate and, 316–317 prefrontal cortex and, 315, 316–317 rational deficit hypothesis and (see rational deficit hypothesis) superior temporal sulcus and, 316–317 understanding of morality and, 63–65, 68 Psychopathy Checklist–Revised (PCL-R), 304–305 puberty, 463–464 publication bias, 607 punishment as adaptive mechanism, 392 age and, 400 of animals, 399 behavioral economics and, 392 blame distinguished, 354, 359–360 capital punishment, 358 community service, 398 consequentialist approach, 392–393, 395 as context-related, 395 cooperation, as fostering, 362 in criminal law (see criminal law) cultural history escalation of, 358 hierarchical social structure and, 355, 357, 360 human settlement, after, 355, 356–357 in hunter–gatherer communities, 355–356 institutionalization of, 355, 356 intergroup conflict and, 357 overview, 355, 371 population growth and, 355, 357 property and, 356 delegation and, 361–362 deontological approach, 392–393 deterrence general deterrence, 393 lack of, 358–359, 360, 371 retribution versus, 393–394 specific deterrence, 393 escalation of cultural history, 358 defense against political threats and, 358 deterrence, lack of, 358–359, 360 in schools, 359 war and, 358 evolutionary psychology and, 392 “hidden punishment,” 393–394 judgments, 363 as moral communication (see moral communication) overview, 17–18, 354, 360 penalty distinguished, 390–391

proportionality in, 398 prosociality and, 277 proximate mechanisms of, 392 racial discrimination in, 363 restorative justice and, 371, 398 retribution deterrence versus, 393–394 as factor supporting punishment, 360–361 moral communication and, 394, 395 sentencing versus judgment, 363 social norms and interventions, 91–92 peer punishment, enforcement through, 85–86 supernatural punishment hypothesis cooperation and, 582 self-interest versus, 581–582 Purzycki, Benjamin Grant, 11, 21 Rabb, J. D., 421 racial discrimination generally, 139 blame and, 366 in punishment, 363 Rai, T., 505 Raine, Adrian, 16–17, 319–320, 598 Rappaport, Roy, 582–583 rational agents, 61 rational deficit hypothesis criminal law, legal culpability in, 305 Defining Issues Test (DIT), 306 Kohlberg test, 305, 306–307, 310 measurement tools, 305–306 meta-analysis, 310–311 moral–conventional distinction, 305, 306, 307–308 moral conventional task (MCT), 307 moral foundations questionnaire (MFQ), 310 moral foundations theory (MFT) and, 305–306, 309–310, 311 overview, 305 sacrificial moral dilemmas and, 305, 306, 308–309 trolley dilemmas and, 308–309 Ratoff, William, 12–13 rats, affect sharing in, 252–253 Rawls, John, 128, 597 reactive aggression, 320 reciprocal altruism, 578 reciprocal dehumanization, 343 recklessness, see criminal law Reeder, G. D., 45–46 Rehren, Paul, 21–22 relational models theory, 505 relative dehumanization, 341–342 religion alcohol consumption and, 584 cooperation and

Index evolutionary foundations of, 580 ritual, 583 supernatural punishment hypothesis, 582 evolutionary foundations of agency detection and, 577–578 cognitive machinery, morality as, 578–580 cooperation and, 580 kin selection and, 578 overview, 576–577 prisoner’s dilemma and, 579 reciprocal altruism and, 578 ritual and, 583 social life and, 580–581 theory of mind and, 577–578 future research, 587–588 intertwining of morality and religion, 576, 587 moral dilemmas and, 113 moralistic religions “life history theory” and, 584 overview, 584 reproductive strategies and, 584 social complexity and, 584–585 moral systems analytical level of, 577 context of, 577 defined, 576, 587 emic versus etic perspectives, 577, 587 intertwining of morality and religion, 576, 587 local models, 577 moral beliefs versus moral behaviors, 577 scope or breadth of, 577 skepticism regarding relation of morality and religion, 586–587 time-dependence of, 577 overexploitation of resources and, 584 overview, 21, 575 prosociality and, 586–587 religious beliefs, 577–578 religious systems analytical level of, 577 defined, 575–576, 587 emic versus etic perspectives, 577, 587 intertwining of morality and religion, 576, 587 local models, 577 skepticism regarding relationship between morality and religion, 586–587 research challenges, 587–588 research questions, 576 ritual, 582–583 skepticism regarding relation of morality and religion, 586–587 socially strategic information and, 586 supernatural punishment hypothesis cooperation and, 582

self-interest versus, 581–582 replication crisis, 452–453 reprobative blame, 391 restorative justice, 371, 398 retribution deterrence versus, 393–394 as factor maintaining public support for punishment, 360–361 moral communication and, 394, 395 Reynolds, C. J., 114 rhesus monkeys, prosociality in, 279 Ritov, I., 175, 180, 181, 182, 187, 252 Robbins, Joel, 512–513 Robbins, Philip, 15, 17 Roisman, G. I., 332 Roma people, dehumanization in intergroup conflict and, 331–332, 341 Rome (Ancient), punishment in, 358 Romero-Martinez, Á., 319 Rosas, A., 190 Rosenberg, S., 36 Roskies, Adina, 12–13, 65, 66, 598 Ross, D., 596–597 Ross, W. D., 115 Roth, A. E., 182 Royzman, E. B., 43–44, 73, 113, 116, 182 Rozin, P., 229, 230–231 Ruby, P., 260–261 rule utilitarianism, 2, 177 runaway trolley scenarios, see trolley dilemmas Russell, Pascale Sophie, 6, 15, 598 Rwanda, dehumanization in intergroup conflict in, 333, 343 sacrificial moral dilemmas, 305, 306, 308–309 Salvadori, E., 453 same-sex marriage, social norms and, 87 Samuelson, P., 192 Sankaran, K., 610 Sauer, H., 610 Savage, L. J., 174 Scarf, D., 442–443, 453 Schmittberger, R., 275 Schnall, S., 71, 72, 231 Schneider, F., 303 Schultz, P. W., 87 Schwarze, B., 275 Schwitzgebel, E., 611 second-order desires, 426 self evaluative self, 8, 9 executive self, 8–9 inhibitive moral agency and, 8–9 moral licensing and, 9 overview, 8 “true self,” 9 self-awareness, 15, 201, 232 self-conception, 114

Index self-conscious emotions, 19, 232–235, 240, 465–466 self-control, 15, 303, 336, 337, 416, 469 self-dehumanization, 342–343, 347 self-esteem, 9, 234, 367 self-interest in infants and toddlers, 451 prosociality, as motivation of, 282 supernatural punishment hypothesis versus, 581–582 self-protection-centered ethic, 18 self-protectionism, 410, 415–416, 423 self-regulation, 8–9, 414, 416 self-serving justification, 9 shame anticipation of, 233 as avoidance emotion, 234 breastfeeding example, 233 as complex emotion, 232 consequences of, 234–235 detrimental, perceived as, 232, 233–234 disgust, relation to, 233 elicitors of, 233–234 guilt compared, 232–235 image shame, 235 as moral emotion, 240 moral shame, 235 overview, 240 proscriptive morality and, 233 as secondary emotion, 232 self as focus, 232 self-failure as triggering, 232 similarities with guilt, 232 social norms and, 79 shared beliefs, 256 shared intentionality, 386–387 Shepherd, H. S., 92 Sherif, M., 273–274 Shoemaker, D., 598 Shortland, N., 109 Shweder, Richard A., 11, 19–20, 128, 137, 502, 503, 505 Sidgwick, Henry, 493, 496, 497 Siegel, J. Z., 46 simple dehumanization, 342 Simpson, J. A., 332 Sinnott-Armstrong, Walter, 21–22, 308 situational attributions, 539 situationism, 597 skin conductance, 313 skin conductance response (SCR), 285 Slovic, P., 175 Smith, Adam, 282 Smith, David Livingstone, 332, 338–340, 348 Smith, M., 58, 60, 66, 67, 307 smoking, social norms and, 87 Snarey, J., 192 sociability versus morality, 38–40, 48

social agents, 444 social media blame and, 370–371 undermining motivation to engage constructively with political opponents, 558–559 social norms climate change example, 85 COVID-19 example, 83, 85, 87 defined, 78, 80–81 descriptive component, 81 diagnosis of defining norm, 88 identifying aspects to be targeted by interventions, 90 overview, 88 rule-following task and, 90 social expectations, extracting, 89–90 social reference category, identifying, 89 energy usage example, 87 enforcement of direct information, by providing, 87 normative information, by providing, 86 peer punishment, through, 85–86 social institutions, role of, 86–87 social sanctions, through, 86 epistemic norms, 56–57 hate speech example, 87 individual versus collective construct, 78–79 inference of conditional inference, 82 environmental cues and, 84 inaccuracies in, 84–85 macro-level norms, 82 obtaining social information and, 83 overview, 13, 85 pluralistic ignorance and, 85 reference population and, 82, 83 social proof and, 83 social referents and, 83–84 injunctive component, 81 interventions belief manipulation protocols and, 90–91 identifying aspects to be targeted by, 90 information provided in, 91 multiple or conflicting norms, 92 overview, 88 punishment and, 91–92 social referents and, 92 weak norms, 92 legal norms, relation to, 79 littering example, 84 moral communication as facilitating as attitude-focused activity, 386–387, 401 attitudinal response and, 387 chimpanzees versus humans, 385–386 as dialogical activity, 386–387, 401 overview, 382, 385

Index pointing gesture, 385–386 shared intentionality and, 386–387 moral norms, relation to, 79–80 motivational mental states as subject to different norms, 56–57 norm-governed behavior examples of, 383 over-imitation, 384–385 overview, 382, 383 psychology of, 383–384 variation in, 383 overview, 13, 78, 80, 92–93 peer punishment, enforcement through, 85–86 practical norms, 57–58 same-sex marriage example, 87 shame and, 79 smoking example, 87 substance abuse example, 87 sustainable consumption example, 82 taxation example, 87 Trump example, 85 ventromedial prefrontal cortex (vmPFC) lesions and, 65–66 social referents, 83–84, 92 socioeconomic status, trustworthiness of moral judgments and, 601 Solomon, Robert, 69 somatic marker hypothesis, 469–470 somatosensory cortex, empathic concern and, 259 Sommers, T., 598 Sommerville, J., 455 Song, H., 439 “Sophie’s Choice” dilemma, 105 Sorenson, E. Richard, 420 spiritual agents, 578, 581, 586 Spores, J. M., 45–46 Sripada, C., 388, 389 stage-based theory of moral development adolescence and, 478–479 conventional level, 502 moral realism and, 501–502 postconventional level, 502 preconventional level, 501–502 standing to blame, 198 Stanford, M. S., 314 Stanford University Center for Deliberative Democracy, 566 Stanton, G. H., 346 Staub, E., 333 Steg, L., 84 Stevenson, C. L., 596–597 Stich, S., 388, 389, 390, 611–612 stress response, 416–417 stria terminalis, empathic concern and, 258 Strohminger, N., 44, 45 Stuewig, J., 222–223

subgenual anterior cingulate, prosociality and, 332 substance abuse, social norms and, 87 subtle dehumanization, 333, 340–341 superior temporal gyrus, moral decision making and, 316 superior temporal sulcus, psychopathy and, 316–317 supernatural agents, see spiritual agents supernatural punishment hypothesis cooperation and, 582 self-interest versus, 581–582 Surian, L., 448, 449 “survival of the fittest,” 412 Swanson, G. E., 585 sympathetic magic, 230–231 sympathy, compassion and, 236, see also empathy synchronic identity, 9 Szekely, A., 89 Tahiti, punishment in, 358 Tam, T., 346 Tangney, J. P., 222–223, 232, 234, 505 Tannenbaum, D., 49 Tasimi, A., 451 Tassy, S., 309 taxation, social norms and, 87 TDM, see theory of dyadic morality (TDM) Teehan, J., 583 temporal cortex moral decision making and, 316, 317 psychopathy and, 316–317 temporal lobe, psychopathy and, 316 temporal pole moral decision making and, 316 psychopathy and, 316–317 temporoparietal junction (TPJ) moral decision making and, 316 perspective taking and, 260–261 Ten Commandments, 177 Tetlock, P. E., 182 theory of dyadic morality (TDM) advantages of, 143 affect and, 141 Affective Harm Account (AHA) and, 552–553 “agent causing harm to patient” schema, 139, 140 comparison with other approaches, 126 cultural differences and, 142 dyadic completion, 140 emotional damage and, 140 harm, centrality of, 137, 141 harmless wrongs and, 140 intuitions, role of, 141 modern evidence, in light of, 141–142 moral judgments and, 142

Index theory of dyadic morality (TDM) (cont.) moral mind in, 139–140 moral typecasting, 205, 208, 214 moral world in, 140–141 neurobiology, consistency with, 141–142 overview, 125–126, 143 perception of harm in, 138, 141 psychological constructionism in, 138–139, 143 victimization and, 140–141 theory of mind, 253–254, 260, 261, 317–319, 338, 578, see also mindreading Thompson, Hugh, 263 Tiberius, V., 611 Tikopia people, moral realism and, 493 Tobia, K. P., 611–612 toddlers, see infants and toddlers Tomasello, M., 386 tort law deterrence in, 186 moral reasoning and, 186 victim compensation, 186 transcendence, morality and, 512 triune ethics metatheory (TEM), 415 trolley dilemmas as moral dilemmas, 101, 105 nonutilitarian versus utilitarian decisions, 180 outcome-based EUT and, 156–157 rational deficit hypothesis and, 308–309 “true self,” 9 Trump, Donald, 85 trust, 12 trust game (TG), 275–276 trustworthiness of moral judgments acceptable levels of untrustworthiness, 609–610 data of ethics and, 606 disagreement, argument from age and, 601 criticism of, 601–602 cultural differences and, 600 demographic differences and, 600–601 gender and, 601 nonmoral disagreements and, 602 overview, 600 parent–child disagreements, 600 peer disagreements, 608–609 socioeconomic status and, 601 sufficient severity of disagreements, 608–609 excessive levels of untrustworthiness, 609–610 expertise defense, 611–612 irrelevant influences, argument from framing effects, 602–603 incidental affect, 604–605 negative incidental affect, 605 order framing effects, 602, 603

overview, 602 positive incidental affect, 605 social conformity, 603–604 word framing effects, 602, 603 methodological weaknesses, 607 moral intuitions and, 606, 610–611 moral philosophy, impact on, 606–607 overview, 598–599 process debunking and deontological moral judgments, arguments against, 599–600 evolutionary debunking, 599, 600 publication bias and, 607 research, trustworthiness of, 607–608 types of moral judgments studies, 610–611 Tsarnaev, Dzhokhar, 258 Turiel, Elliot analytical philosophy and, 128 behaviorism and, 127–128 children, studies of, 126–127 comparison with other approaches, 126 on conscious reasoning, 130 conventional perspectives, 128 deconstruction of, 128–129 harm-based theory of morality, 127, 130 informational assumptions and, 129 on intuitions, 503 Kohlberg, critique of, 503 modern evidence, in light of, 129–131 on moral domains, 13–14, 125, 126, 142–143 on moral mind, 128–129 moral versus conventional domains, 127, 129–130, 384 on moral world, 129 on norms, 79 on rules, 126–129 sociocultural differences, failure to consider, 130–131 understanding paradigm of, 143 Tversky, A., 173–174 Twitter, 558 Tyler, T. R., 263 ultimate desires, 57–58 ultimatum game (UG), 89, 275, 276–277 unconditional internalism, 61 uncooperativeness, 304 undercare, 423 unnestedness, 422–424 utilitarianism act utilitarianism, 1, 155, 177 biases in, 176–177 criticism of, 181 distribution of goods and, 185 expected consequences and, 172–173 expected utility theory (EUT) versus, 174 intuitions and, 176 moral dilemmas and, 102, 111–113, 117

Index moral intuitions and, 176, 177–178, 191 moral patiency and, 206, 215 moral reasoning and, 14–15 nonutilitarian versus utilitarian decisions (see moral reasoning) as normative model, 175 outcome-based EUT and, 155 as prescriptive model, 177–178 rule utilitarianism, 2, 177 utilitarian moral dilemmas, 181 welfare economics and, 172–173 vaccines COVID-19, 83, 85, 87, 172 omission bias and, 181 pertussis, 175 whooping cough, 175 vagus nerve, 416 valence effect dehumanization, as better explanation than, 347, 349 moral character and, 45–46 Vallacher, R. R., 166 value commitment traits, 42–44 ventral pallidum, empathic concern and, 258 ventral striatum empathic concern and, 259 prosociality and, 288 ventral tegmental area, prosociality and, 287 ventromedial prefrontal cortex (vmPFC) adolescence and, 470 empathic concern and, 258, 259 lesions in amoralism and, 65 cognitive normality of persons with, 65 internalism, ramifications for, 65–66 social norms and, 65–66 understanding of morality and, 65–66 moral decision making and, 317 perspective taking and, 261 Vietnam, Hmong ritual practice in, 509–511 violation-of-expectation (VoE) paradigms, 435 virtue ethics moral character and, 33 moral dilemmas and, 115–116 moral philosophy, 2 overview, 2 vision quests, 421

Vivekananthan, P.S., 36 Vives, Marc-Lluís, 11, 16 von Borries, A. K. L., 314–315 Vriens, Eva, 13 Wan, L., 314 warmth/coldness dichotomy morality versus, 36–38 overview, 35–36 warmth–competence model mind perception and, 15, 200, 203 moral agency and, 213 Waschbusch, D., 313 Watson, John, 419 Watts, A. L., 310 Weber, E. U., 104 Weber, R. A., 89 Wegner, D. M., 166 WEIRD (Western, educated, industrialized, rich, democratic) problem, 452, 453 Weisman, K., 201, 202, 203, 204 welfare economics, utilitarianism and, 172–173 whaling, 507–509, 511 Wheatley, T., 70–71, 72, 231 whooping cough vaccine, 175 Williams, Bernard, 103 Wilson, K., 313 Wojciszke, B., 37, 39 Woo, B. M., 444–445 Woodworth, M., 313 word framing effects, 602, 603 wrongness judgments, 5, 13–14, 321, 367 wu-wei, 410 Wynn, K., 451, 452–453 Xiao, E., 90–91 Xwb Fwb (Hmong Christian pastor), 509–511 Young, K. A., 314 Young, L., 178 Yuen, Francis, 19 yu-wei, 410 Zeier, J., 309 Zigon, J., 514 Zimbardo, Philip G., 273–274 Ziv, T., 452, 455
