Empirically Engaged Evolutionary Ethics (ISBN 3030688011, 9783030688011)


English Pages 234 [226] Year 2021



Table of contents:
Acknowledgments
Contents
About the Editors
Contributors
Chapter 1: Situating Empirically Engaged Evolutionary Ethics
1.1 What Does It Mean to Naturalize Ethics?
1.2 Empirically Engaged Ethics Before Darwin
1.3 Evolutionary Ethics: Some Historical Notes
1.4 Evolutionary Ethics Today
1.5 The Present Volume
References
Part I: The Nuts and Bolts of Evolutionary Ethics
Chapter 2: Dual-Process Theories, Cognitive Decoupling and the Outcome-to-Intent Shift: A Developmental Perspective on Evolutionary Ethics
2.1 Introduction
2.2 The Outcome-to-Intent Shift
2.3 Using Dual-Process Models to Explain the Outcome-to-Intent Shift
2.4 Cognitive Decoupling and the Articulation of Divergent Intuitions
2.5 Cultural Variation in the Weight Placed on Intentions
2.6 Discussion
2.7 Conclusion
References
Chapter 3: Not So Hypocritical After All: Belief Revision Is Adaptive and Often Unnoticed
3.1 Introduction
3.2 Hypocrisy in Social Psychology
3.3 Evolutionary Considerations
3.4 Cultural Mechanisms for Belief Acquisition
3.5 The Sparseness of Belief
3.6 Belief Revision in Response to Social Cues
3.7 Conclusion: Hypocrisy Unmasked
References
Chapter 4: The Chimpanzee Stone Accumulation Ritual and the Evolution of Moral Behavior
4.1 Introduction
4.1.1 The Question
4.1.2 Definition of Morality and Its Components
4.1.3 Background – Chimpanzee Cognitive, Affective, Moral and Communicative Capacities
4.2 Summary of Kühl et al. (2016)
4.2.1 Hurl, Bang, Toss (Cache, Heap) and Stone Accumulations
4.2.2 Flowchart
4.2.3 Problematic Explanatory Hypotheses
4.3 Method
4.3.1 Explanation
4.3.2 Stone Accumulation Behaviors as Ethological Ritualization and Ritual
4.3.3 Semiotic Interpretation
4.4 Results
4.4.1 Relevant Chimpanzee Aggression Display Variables
4.4.2 Relevant Chimpanzee Vocal Communication Variables
4.4.3 Moral Context for the Chimpanzee Stone Throwing/Caching Ritual
4.4.4 Contextualizing the Stages of the Ritual
4.5 Discussion
4.6 Conclusion
References
Part II: The Evolution of Moral Cognition
Chapter 5: Morality as an Evolutionary Exaptation
5.1 How to Do Evolutionary Ethics: Four Sequential Questions
5.2 Morality as Prudential Risk-Aversion
5.3 The Elements of Moral Cognition
5.4 The Diverse Evolutionary Advantages of Our Moral Capacities
5.5 Conclusion
References
Chapter 6: Social Animals and the Potential for Morality: On the Cultural Exaptation of Behavioral Capacities Required for Normativity
6.1 Introduction
6.2 Morality and Biology: Evolution, Natural Selection, and Adaptation
6.3 An Inclusive Understanding of Social Behaviors
6.3.1 Normativity, Culture, and Conformity (Among Others)
6.3.2 The Relationship Between Various Types of Norms
6.3.3 Complexity of Animal Social Behaviors: Prerequisites of Morality Fulfilled
6.4 From Normativity to Morality
6.4.1 Morality as Secondary Adaptation (Exaptation)
6.4.2 Cultural Evolution in Animals
6.4.3 Implications for Moral Universality and Diversity
6.4.4 Animals and Moral Behavior
6.5 Conclusion
References
Chapter 7: Against the Evolutionary Debunking of Morality: Deconstructing a Philosophical Myth
7.1 Mechanistic Explanation vs. Cognitive Capture of Objective Values
7.2 Moral Objectivity and Its Sequels in the Philosophical Tradition
7.2.1 Hume and the Projection Mechanism
7.2.2 Mackie and the Claim to Objectivity in Moral Language
7.3 Evolutionary Debunking: Selection for Illusory Objectivity
7.4 An Alternative Mechanistic Explanation
References
Part III: The Cultural Evolution of Morality
Chapter 8: The Cultural Evolution of Extended Benevolence
8.1 Extended Benevolence in Darwin’s Descent of Man
8.2 Darwin on the “Moral Sense”
8.3 Extended Benevolence: Behaviors, Institutions, and Attitudes
8.4 Extended Benevolence Evolving
8.4.1 Transmission Biases and Human Rights
8.4.2 Transmission Biases and Animal Welfare
8.5 The Moral Sense as an Assemblage of Adapted Transmission Biases
8.6 How Extended Benevolence Emerged
8.7 The Proliferation of Extended Benevolence
8.8 Conclusion: An Evolutionary Foundation for Extended Benevolence
References
Chapter 9: The Contingency of the Cultural Evolution of Morality, Debunking, and Theism vs. Naturalism
9.1 Introduction
9.2 The Contingency Question
9.3 The Contingency Thesis: The Contingency of Cultural Evolution
9.4 The Philosophical Implications: Debunking Arguments and Theism vs. Naturalism
9.5 Empirical Support for the Contingency of Cultural Evolution
9.6 Conclusion: Contingency and Its Philosophical Implications
References
Chapter 10: Morality as Cognitive Scaffolding in the Nucleus of the Mesoamerican Cosmovision
10.1 Introduction
10.2 Cosmovision, a Cognitive and Social Phenomenon
10.3 Cosmovision and Scaffolding in Niche Construction
10.4 The Epistemological Challenge
10.5 Conclusions
References
Index


Synthese Library 437 Studies in Epistemology, Logic, Methodology, and Philosophy of Science

Johan De Smedt Helen De Cruz  Editors

Empirically Engaged Evolutionary Ethics


Editor-in-Chief
Otávio Bueno, Department of Philosophy, University of Miami, USA

Editorial Board
Berit Brogaard, University of Miami, USA
Anjan Chakravartty, University of Notre Dame, USA
Steven French, University of Leeds, UK
Catarina Dutilh Novaes, VU Amsterdam, The Netherlands
Darrell P. Rowbottom, Lingnan University, Hong Kong
Emma Ruttkamp, University of South Africa, South Africa
Kristie Miller, University of Sydney, Australia

The aim of Synthese Library is to provide a forum for the best current work in the methodology and philosophy of science and in epistemology. A wide variety of different approaches have traditionally been represented in the Library, and every effort is made to maintain this variety, not for its own sake, but because we believe that there are many fruitful and illuminating approaches to the philosophy of science and related disciplines. Special attention is paid to methodological studies which illustrate the interplay of empirical and philosophical viewpoints and to contributions to the formal (logical, set-theoretical, mathematical, information-theoretical, decision-theoretical, etc.) methodology of empirical sciences. Likewise, the applications of logical methods to epistemology as well as philosophically and methodologically relevant studies in logic are strongly encouraged. The emphasis on logic will be tempered by interest in the psychological, historical, and sociological aspects of science. Besides monographs Synthese Library publishes thematically unified anthologies and edited volumes with a well-defined topical focus inside the aim and scope of the book series. The contributions in the volumes are expected to be focused and structurally organized in accordance with the central theme(s), and should be tied together by an extensive editorial introduction or set of introductions if the volume is divided into parts. An extensive bibliography and index are mandatory. More information about this series at http://www.springer.com/series/6607


Editors

Johan De Smedt
Independent Scholar
St Louis, MO, USA

Helen De Cruz
Danforth Chair in the Humanities
Saint Louis University
St. Louis, MO, USA

ISSN 0166-6991  ISSN 2542-8292 (electronic)
Synthese Library
ISBN 978-3-030-68801-1  ISBN 978-3-030-68802-8 (eBook)
https://doi.org/10.1007/978-3-030-68802-8

© Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Acknowledgments

We wish to thank the John Templeton Foundation for their funding of our grant Evolution, ethics, and human origins: A deep-time perspective on human morality (2017–2020). As part of this grant, we organized the conference Evolutionary ethics: The nuts and bolts approach (Oxford Brookes University, July 18–21, 2018). A selection of the papers of this conference appears in this volume. The views expressed in this volume do not necessarily represent the views of the John Templeton Foundation. We especially wish to thank our authors for writing up their papers during very challenging times (the coronavirus pandemic). We also wish to thank our reviewers, including Mark Alfano, Sarah Brosnan, Filipe Nobre Faria, Jonathan Jong, David Kaspar, Dustin Locke, Richard Sosis, and Kevin Tobia.


Contents

1 Situating Empirically Engaged Evolutionary Ethics  1
Johan De Smedt and Helen De Cruz

Part I The Nuts and Bolts of Evolutionary Ethics

2 Dual-Process Theories, Cognitive Decoupling and the Outcome-to-Intent Shift: A Developmental Perspective on Evolutionary Ethics  17
Gordon P. D. Ingram and Camilo Moreno-Romero

3 Not So Hypocritical After All: Belief Revision Is Adaptive and Often Unnoticed  41
Neil Levy

4 The Chimpanzee Stone Accumulation Ritual and the Evolution of Moral Behavior  63
James B. Harrod

Part II The Evolution of Moral Cognition

5 Morality as an Evolutionary Exaptation  89
Marcus Arvan

6 Social Animals and the Potential for Morality: On the Cultural Exaptation of Behavioral Capacities Required for Normativity  111
Estelle Palao

7 Against the Evolutionary Debunking of Morality: Deconstructing a Philosophical Myth  135
Alejandro Rosas


Part III The Cultural Evolution of Morality

8 The Cultural Evolution of Extended Benevolence  153
Andrés Luco

9 The Contingency of the Cultural Evolution of Morality, Debunking, and Theism vs. Naturalism  179
Matthew Braddock

10 Morality as Cognitive Scaffolding in the Nucleus of the Mesoamerican Cosmovision  203
J. Alfredo Robles-Zamora

Index  221

About the Editors

Johan De Smedt has co-authored A natural history of natural theology: The cognitive science of theology and philosophy of religion (MIT Press, 2015) and The Challenge of Evolution to Religion (Cambridge University Press, 2020), and has published work in empirically informed philosophy of science, religion, and art.

Helen De Cruz is holder of the Danforth Chair in the Humanities at Saint Louis University, Missouri, USA. She publishes in empirically informed philosophy of cognitive science, philosophy of religion, social epistemology, and metaphilosophy. Her recent books include Religious Disagreement (Cambridge University Press, 2020), and she is co-editor of Philosophy through Science Fiction Stories: Exploring the Boundaries of the Possible (Bloomsbury, 2021).


Contributors

Marcus Arvan, Department of Philosophy, University of Tampa, Tampa, FL, USA
Matthew Braddock, Department of History and Philosophy, University of Tennessee at Martin, Martin, TN, USA
James B. Harrod, Center for Research on the Origins of Art and Religion, Portland, ME, USA
Gordon P. D. Ingram, Department of Psychology, Universidad de los Andes, Bogotá, Colombia
Neil Levy, Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK; Department of Philosophy, Macquarie University, Sydney, NSW, Australia
Andrés Luco, School of Humanities, Nanyang Technological University, Singapore, Singapore
Camilo Moreno-Romero, Department of Psychology, Universidad de los Andes, Bogotá, Colombia
Estelle Palao, Department of Philosophy, Osgoode Hall Law School, York University, Toronto, Canada
J. Alfredo Robles-Zamora, National Autonomous University of Mexico (UNAM), Mexico City, Mexico; Interdisciplinary Professional Unit in Energy and Mobility (UPIEM), National Polytechnic Institute (IPN), Mexico City, Mexico
Alejandro Rosas, Department of Philosophy, Universidad Nacional de Colombia, Bogotá, Colombia


Chapter 1

Situating Empirically Engaged Evolutionary Ethics

Johan De Smedt and Helen De Cruz

Abstract  This introductory essay provides a historical and cross-cultural overview of evolutionary ethics, and how it can be situated within naturalized ethics. We also situate the contributions to this volume.

Keywords  Charles Darwin · Pyotr Kropotkin · Paul Rée · Arthur Schopenhauer · Naturalistic ethics · Evolutionary ethics · Mozi · Mengzi · Wang Yangming · Immanuel Kant · Moral foundations theory · Henry Sidgwick · G.E. Moore · Competition · Mutual aid · Experimental philosophy

1.1  What Does It Mean to Naturalize Ethics?

Empirically engaged evolutionary ethics refers to the study of the evolution of morality, and of its philosophical implications, with the help of one or more empirical sciences. Since the nineteenth century, philosophers and scientists have examined ways to bring evolutionary theory into conversation with ethics, looking at the broad implications of descriptive evolutionary ethics for normative ethics, metaethics, and applied ethics. However, the quest to naturalize ethics preceded evolutionary theory. In 1840 Arthur Schopenhauer wrote a polemical essay in which he pushed back against deontological ethics, with its focus on what we ought to do, as expressed in particular in Kant’s Groundwork of the Metaphysics of Morals (1785 [1998]). Schopenhauer instead proposed that ethics should not focus on what ought to be, but on what actually is the case.

J. De Smedt
Independent Scholar, St Louis, MO, USA

H. De Cruz (*)
Danforth Chair in the Humanities, Saint Louis University, St Louis, MO, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
J. De Smedt, H. De Cruz (eds.), Empirically Engaged Evolutionary Ethics, Synthese Library 437, https://doi.org/10.1007/978-3-030-68802-8_1


The end which I place before Ethical Science is to point out all the varied moral lines of human conduct; to explain them; and to trace them to their ultimate source. Consequently there remains no way of discovering the basis of Ethics except the empirical. (Schopenhauer 1840 [1903], 148)

This concise statement by Schopenhauer provides a useful summary for naturalistic projects in ethics: the empirical sciences serve both as a grounding for ethics (i.e., its ultimate source) and as a methodology (i.e., the empirical sciences are the only or best way to discover the basis of morality). Moreover, naturalistic approaches to ethics often aim to determine whether an ethical project is in line with human nature, however perceived. Naturalizing ethics, then, contains an acknowledgment that ethical life is grounded in physical, embodied interactions with others and our environment. In other words, the formulation of ethical principles ought to be constrained by empirical findings about the biological, social, and other constraints and possibilities to which moral agents (principally, human beings) are subject. In the decades following Schopenhauer (1840), the publication of Darwin’s Descent of Man (1871) gave a new impetus to naturalistic ethics. A vigorous discussion ensued on how the Darwinian project could be integrated into philosophy – a discussion that went on with various interruptions until the present day, and of which this book is a part. This volume presents nine original essays by authors from various disciplines, including philosophy, anthropology, developmental psychology, and primatology, who write in conversation with neuroscience, sociology, and cognitive psychology.

1.2  Empirically Engaged Ethics Before Darwin

Empirically engaged ethics is not a radically new project that only emerged in conversation with evolutionary theory. Throughout history, ethicists have considered the practical constraints and ramifications of their theories. As Ruse and Wilson (1986) point out, evolutionary ethics can be traced back to a broader tendency to naturalize ethics which was prominent in the nineteenth century, but can also be found in pre-Darwinian authors such as Robert Chambers (1844 [1994]), who saw precursors of human morality in animals, and envisaged a gradual moral progress in human societies over time, the result of early socialization in children and cultural evolution. However, if one focuses on western philosophy written in the twentieth century, one may get the impression that naturalizing ethics is a fringe project. For example, Flanagan et al. (2008, 2) observe that “ethical naturalism has a fair number of philosophical advocates, but most people reject it – including many in the academy.” Be that as it may, many ethical traditions show naturalistic tendencies. Traditions from large-scale historical societies such as in ancient Greece (e.g., Stoicism), ancient China (e.g., pre-Qin Confucianism, in particular Mengzi and Xunzi), and the Indian subcontinent (e.g., the hedonistic ethics of the Cārvāka philosophical school) have extensive and well-developed theories on normative and meta-ethics,
as do traditions in small-scale societies, such as Native American philosophies (see e.g., Marshall 2001, Waters 2004). With few exceptions, these ethical theories are also naturalized: they are firmly rooted in the practicalities of human life, and they consider limitations such as weakness of will, as well as the role of emotions such as anger and empathy, as important constraints on morality. To give but one example, the neo-Confucian philosopher Wang Yangming (1472–1529) addressed questions of a hypothetical student in his Questions on the Great Learning (1527 [2014]). He held that compassion and benevolence, important components of morality, are part of our innate human nature, something that all human beings share, including noble, broadminded (“great people”) and narrow-minded (“petty”) people. The reason that we are all able to feel compassion and benevolence is that we are in fact all part of the same universe: we share its abstract structure 理, lǐ, and its primordial stuff (matter and mind) 氣, qì. Because of this intimate metaphysical connectedness, we cannot but feel concern for creatures and things that we share the planet with:

The ability great people have to form one body with Heaven, Earth, and the myriad creatures is not something they intentionally strive to do; the benevolence of their minds is fundamentally like this. […] Even the minds of petty people are like this. […] This is why, when they see a child [about to] fall into a well, they cannot avoid having a mind of alarm and compassion for the child. This is because their benevolence forms one body with the child. Someone might object that this response is because the child belongs to the same species. But when they hear the anguished cries or see the frightened appearance of birds or beasts, they cannot avoid a sense of being unable to bear it. This is because their benevolence forms one body with birds and beasts. Someone might object that this response is because birds and beasts are sentient creatures. But when they see grass or trees uprooted and torn apart, they cannot avoid feeling a sense of sympathy and distress. This is because their benevolence forms one body with grass and trees. Someone might object that this response is because grass and trees have life and vitality. But when they see tiles and stones broken and destroyed, they cannot avoid feeling a sense of concern and regret. This is because their benevolence forms one body with tiles and stones. (Wang 1527 [2014], pp. 241–242)

Wang drew inspiration from the earlier, pre-Qin philosopher Mengzi (4th c. BCE [2008], Book 2A6) who came up with the thought experiment of the child teetering at the rim of a well to illustrate that people innately have incipient moral tendencies, including compassion, that can grow into a fully-fledged morality. To explain the origin of our innate morality Wang suggested a connection between our ontological status as parts of a larger whole and our ethical concerns for the other parts of the universe. Naturalism in ancient ethical traditions is not only expressed in how philosophers situate ethics as part of human nature; it is also an important part of how ethical claims are or could in principle be tested. Perceived testability has been an important measure for ethical theories throughout history, even though empirical testing was not done in a systematic way, and occurred often in the form of thought experiments where the reader has to imagine what she would do in a given situation. Consider the arguments by Mozi (5th–3rd c. BCE [2009], part 16) in favor of impartialist ethics. Mozi was a pre-Qin philosopher who advocated an ethics of
impartiality, where one should not treat close family and friends more favorably than strangers. He offered two empirical arguments, both in the form of thought experiments. The first has the reader imagine that they are going on a long trip with an uncertain outcome, and need to entrust their family to a caretaker. Mozi holds that you would rather have your family taken care of by an impartialist caretaker than by a partialist one, since the former will treat your family and his own in the same way. The second imagines a country where a severe pestilence is causing economic havoc and deprivation. If you lived in this country, would you rather it were ruled by an impartialist who tries to institute policies that benefit everyone without distinction, or would you prefer a partialist ruler who puts the wellbeing of his own family and friends above that of other citizens? To Mozi, the answer is clear: everyone would prefer an impartial caretaker and an impartial ruler, and this preference (which even a partialist would have, under these uncertain circumstances) vindicates impartiality. The history of ethics abounds with such thought experiments. Though they are not controlled empirical studies, they show a concern of philosophers for the empirical limitations and strengths of their ethical theories. These considerations lead us to conclude that ethical naturalism is continuous with the way philosophers have examined ethics over the past three millennia. Ethical naturalism is not something radically new that came to the fore in the nineteenth century. Nevertheless, as we review in the next section, the publication of evolutionary theories did have a significant and long-lasting impact on ethical naturalism.

1.3  Evolutionary Ethics: Some Historical Notes

The publication of evolutionary theories gave rise to new developments in ethics. One of the main catalysts was Darwin (1871), which addresses the emergence of the moral faculty in humans as a result of natural selection. However, this was not the first work to address evolutionary ethics; pre-Darwinian evolutionary thinkers such as Herbert Spencer were inspired in their thinking about human behavior and psychology by earlier authors such as Jean-Baptiste Lamarck and Robert Chambers. Spencer situated psychology, ethics, and sociology within a broader evolutionary framework. His synthetic philosophy saw evolution as something governing the whole universe: not just biological evolution, but also how galaxies came about, and how human societies changed. His Principles of Psychology (1855), the earliest articulation of this generalized principle, predates the publication of the Origin of Species (Darwin, 1859); it was based on Lamarck’s principle of inheritance of acquired characteristics. The importance of The Descent of Man (Darwin, 1871) lies in its detailed account of the origins of human morality through a process of group selection that was entirely naturalistic, thus presenting an alternative to the then-popular view that morality originates from God. Rather than present an exhaustive review of this
historical period, we will highlight a few examples of how Darwin’s theory influenced theorizing about ethics in the late nineteenth and early twentieth centuries. The German philosopher Paul Rée (1877) wrote an early evolutionary account of ethics that was clearly inspired by Darwin. He argued that humans possess two kinds of innate drives, self-regard and other-regard (Rée 1877, 1–7). Our other-regard is expressed in emotions such as pity (when things go badly for others) or happiness (when things go well for them). At times, our self-regard overshadows our other-regarding drive; for example, we might feel jealous when things go well for others, or schadenfreude when things go badly. The other-regarding sentiments explain why humans sometimes behave altruistically, but do not explain why we regard unselfish actions as good and selfish actions as bad. In Rée’s view, morality results from an interaction of these evolved sentiments with culture. Rejecting moral realism, Rée argued that our judgments of good and evil can be ultimately traced back to judgments about what is good or bad for individuals, and these judgments are accorded a fundamental normative status. Through a process of cultural group selection (following Darwin, 1871), this gave rise to moral conceptions of good and evil which are acquired through learning (Rée 1877, 24). Groups where people thought that unselfishness is good and selfishness bad enjoyed a selective advantage over groups that did not hold this view, because members of the former could cooperate better (Rée 1877, 9). Rée’s account foreshadows later error theory views on morality, notably by Richard Joyce (2006) and Michael Ruse (2010). Like these later error theorists, Rée (1877, 49) claimed that the emergence of morality was ultimately the result of errors: we erroneously perceive ethical judgments as mind-independent, though they do not exist independently of us or our experience.
But it is also a helpful illusion: the illusion of good and evil helps us to cooperate better, and gives groups who have it an evolutionary advantage.

Russian scientists and philosophers were likewise intrigued by Darwinism and its implications for political theory and ethics. One sticking point for Russian intellectuals was the large influence of Malthusianism in Darwinism, notably the idea that evolution is propelled by competition and a struggle for scarce resources. Russian scientists from 1860 to the early 1900s, including Karl Kessler, Modest Bogdanov, Andrey Beketov, and Sergei Korzhinskii, criticized the Malthusian struggle for existence, arguing that this concept of struggle was confused, for instance, in its lack of distinction between different forms of competition, such as direct versus indirect and intraspecific versus interspecific competition. They also maintained that this Malthusian influence came from a socially insidious, faulty view of the bad effects of overpopulation among poor people, alongside an unhealthy focus on competition in English society (see Todes 1989 for an overview). The Russian naturalist, economist, and anarchist political philosopher Pyotr Kropotkin (1902 [1989], 1924) outlined his own evolutionary ethics, stressing that humans have evolved dispositions that push them in two directions. On the one hand, we have a tendency that inclines us to be part of a community and to offer mutual aid; on the other hand, we have a propensity toward individual self-realization and freedom. Kropotkin did not think we need to achieve a compromise between these or to sacrifice one for the other; rather, societies ought to strive for a synthesis
between these two tendencies. He anticipated a theory akin to moral foundations theory (e.g., Graham et  al. 2013), stipulating evolved tendencies as the basis for moral evaluative judgments and behaviors. These moral foundations consist of sociality (an innate sympathy, or tendency to see others as fundamentally like ourselves), magnanimity (which pushes us to help others, even at the expense of ourselves), and a desire for justice. Kropotkin saw sociality and magnanimity in self-sacrificial behavior that people sometimes display, e.g., “[T]he impulse of a man who plunges into a river (even though unable to swim) in order to save another… cannot be explained in any other way than by the recognition of one’s equality with all others” (Kropotkin 1924, 245). An important aspect of Kropotkin’s ethics is its thoroughgoing naturalism. He agreed with Spencer (1855) that ethics constitutes “one of the divisions of the general philosophy of nature” (Kropotkin 1924, 289), and that it is a specialized domain of science. Like his compatriots, he disavowed Spencer’s and Darwin’s focus on the struggle for existence. Kropotkin posed the following challenge: if we agree that evolution selects only for those tendencies that are advantageous, we should expect that we get most gratification out of being selfish. However, this is not what we observe. Doing well for others gives us a sense of gratification, and this sense needs an evolutionary explanation: “do not the feelings of sociality and of mutual aid, from which gradually and inevitably our moral conceptions had to develop, – do not they constitute just as fundamental a property of human or even of animal nature, as the need of nourishment?” (Kropotkin 1924, 295, emphasis in original) Put differently, Kropotkin saw our altruistic tendencies as foundational for ethical life, something that he thought evolutionary theory (with its emphasis on struggle) could not sufficiently explain. 
His Mutual aid (1902 [1989]) argues for the central role of altruism in evolution: altruism and cooperation, rather than competition, drive evolution. Kropotkin (1902 [1989], chapter 2) gave many examples of mutual aid in nonhuman animals, for example, social birds mobbing predators, sentry-posting in social mammals and birds, and large nesting colonies. He also sketched how mutual aid is an important feature of human life, notably in cooperation in small-scale societies and in the medieval free city (Kropotkin, 1902 [1989], chapters 5 and 6). In his posthumously published Ethics (1924), he integrated this idea into his picture of evolutionary ethics. In this way, Kropotkin prefigured later discussions on the importance of non-zero-sum games in evolutionary ethics and evolutionary biology more broadly (e.g., Cronk and Leech 2013).

The main work that introduced late Qing dynasty Chinese intellectuals to evolutionary theory was not Darwin’s Origin of Species (1859), nor his Descent of Man (1871), but On Natural Evolution (Tianyan lun, 天演論, On Natural/Heavenly Evolution), a compilation of writings by Herbert Spencer and Thomas Huxley, translated by Yan Fu and published in 1898. This work drew an intimate connection between evolutionary theory and social Darwinism, the idea that mechanisms of biological evolution also operate at a human societal level, and that this is desirable. Chinese intellectuals saw this play out among the western colonial powers competing with each other for influence in a struggle for existence, and they saw their own empire (China under the Qing dynasty) under threat and divided by more powerful

1  Situating Empirically Engaged Evolutionary Ethics


foreign nations. The initial preoccupation of Yan’s work was not to distinguish between Darwin, Huxley, Lamarck, and Spencer, but to search for a therapy to secure the survival of the Chinese empire and later republic, which had been threatened in the aftermath of a series of military and political catastrophes at the hands of western countries (Jin 2019, 124).

We will here focus on the reception of evolutionary theory by Chinese Buddhists of the period. Unlike Christianity, Buddhism has no problem with the continuity between humans and other animals that evolution presupposes, and no problem with complexity arising out of natural processes, as it does not posit souls or a creator God. But Chinese Buddhists saw a serious incompatibility between Buddhist ethics and the ethics of social Darwinism, which they had come to see as roughly synonymous with evolutionary theory. The struggle for existence was perceived as deeply incompatible with the Buddhist striving not to cling to the self or possessions. In the 1920s and 1930s, Chinese Buddhists warmed to Kropotkin’s version of evolutionary theory, with its emphasis on mutual aid and cooperation, which was a better fit with Buddhist ethics. However, they did not think it went far enough, because Kropotkin’s view still required a self, and only when one recognized the emptiness of the self could one dedicate oneself entirely to helping others, as bodhisattvas do (Ritzinger, 2013).

In the Indian subcontinent, which was under British colonial rule during this period, authors discussed the ramifications of evolutionary theory both for Hinduism (whether factual claims in Hindu scriptures such as the Vedas were compatible with Darwinism) and for ethical theory. For example, Sri Aurobindo (1872–1950) set out to make evolutionary theory compatible with the Hindu theory of successive incarnations of Viṣṇu through avataric evolution (see De Smedt and De Cruz, 2020, 5–6, for discussion).
He also criticized Darwinian theory for focusing too much on the self-preservation of organisms at the expense of cooperation: “Because the struggle for survival, the impulse towards permanence is contradicted by the law of death, the individual life is compelled, and used, to secure permanence rather for its species than for itself; but this it cannot do without the co-operation of others; and the principle of co-operation and mutual help” (Aurobindo, 1914–1918 [2005]: 212).

Evolutionary ethics was a successful and multifaceted strand within the project of naturalizing ethics. Curiously, it provoked a backlash that led to an anti-naturalism in ethical theory that would dominate much of the discussion throughout the twentieth century. One influential voice in this anti-naturalism was Henry Sidgwick (1876), who argued that it was unwarranted for evolutionary ethics to go beyond mere description. Much of his ire was directed at Spencer’s notion that ‘more evolved’ would mean ‘better’ (including ethically better). As Sidgwick correctly pointed out, like other early evolutionary ethicists Spencer embraced a notion of progress, where evolution is “not merely a process from old to new, but also a progress from less to more of certain qualities or characteristics” (Sidgwick 1876, 56).

Thus, it seems plausible that Spencer’s evolutionary ethics can furnish a highly plausible explanation of the development of morality in a race of animals gregarious, sympathetic, and semi-rational – such as we may conceive man to have been in the præ-moral stage of his development. But I fail to see how we are thus helped to a solution of the conflict between the Utilitarian and Intuitional schools of Ethics: in so far, that is, as either school professes to supply not merely a psychological explanation of human emotions, but an ethical theory of right conduct. (Sidgwick, 1876, 66)

In other words, Sidgwick thought it was problematic that evolutionary ethicists tried to use their theories to adjudicate between normative ethical theories. Later, he went as far as to disavow the study of evolutionary ethics entirely, or at least to relegate it to some field of inquiry outside of ethics: “it appears to me that the investigation of the historical antecedents of this cognition [morality], and of its relation to other elements of the mind, no more properly belongs to Ethics than the corresponding questions as to the cognition of Space belong to Geometry” (Sidgwick, 1907, v–vi).

Sidgwick’s student, G. E. Moore, was influenced by this critique and formulated his concept of a naturalistic fallacy specifically with evolutionary ethicists such as Spencer in mind. In Moore’s view, we cannot identify the moral good with any natural property. The problem for any evolutionary ethicist who wants to go beyond the purely descriptive is what Moore termed the open-question argument (Moore, 1903, § 13): for any proposed natural property, we can always meaningfully ask whether something that has it is good. If one could identify the good with, say, an evolved propensity to be altruistic, then asking “Is this altruistic act good?” would amount to asking “Is this altruistic act altruistic?”, since – on this view – the good can be equated with altruism. But clearly, these questions are not equivalent. This led Moore to conclude that the good is a non-natural property that cannot be empirically or scientifically tested or verified.

David Hume’s (1739–40 [2007], T3.1.1.27) principle that one cannot derive an ought from an is is sometimes seen as a precursor to Moore’s formulation of the naturalistic fallacy. However, these are two quite distinct claims. Hume claimed that we cannot derive a normative claim from a factual claim, at least not without using some bridge principles. In contrast, Moore claimed that we cannot draw moral conclusions from non-moral principles, even when using bridge principles.
The reason such bridge principles cannot help us, according to Moore, is that no such principles are available (Pigden, 2019, 75).

1.4  Evolutionary Ethics Today

Although one can conceptualize evolutionary ethics today as a continuation of the earlier wave, there are two key differences: better empirical testing and better theory. We now have access to much better empirical evidence than earlier evolutionary ethicists did. For example, authors such as Darwin and Rée could only speculate about human origins. Contemporary authors can draw on a wealth of archaeological, molecular, and other data about the origins of our species. Episodic observations of non-human animals, often anecdotal in character, have been replaced by detailed field observations of primates in the wild and carefully controlled laboratory studies. Earlier evolutionary ethicists hardly had access to anthropological


data, and what they had was often unreliable hearsay and distorted reports from travelers and colonists. Today, we can draw on a much broader range of evidence, not only in anthropology, but also in other disciplines that are relevant to the study of morality, such as developmental psychology and neuroscience. A number of present-day ethicists also gather their own evidence. For example, experimental philosophical studies survey people about their ethical intuitions, Knobe (2003) and Schwitzgebel and Cushman (2012) being two seminal papers in this expanding field.

In addition, evolutionary theory itself is in a much better position today. Earlier evolutionary theory struggled with several issues, such as the extent to which group selection is a driving force in evolution, the question of whether evolution is inherently progressive (many earlier evolutionary ethicists assumed it was), and the frustrating lack of theory on how traits are transmitted from one generation to the next. While these topics continue to be debated, much of this confusion was resolved with the modern synthesis and later theorizing that clarified the notion of different kinds of altruism, including reproductive altruism (toward kin), reciprocal altruism (also toward non-kin), and indirect reciprocity. The extended evolutionary synthesis adds to the predominantly gene-centric view of standard evolutionary theory the importance of ontogeny and of non-genetic inheritance mechanisms in evolution (Laland et al., 2015). Pioneers of the new wave of evolutionary ethics include evolutionary theorists, biologists, and philosophers such as E. O. Wilson (1975) and Elliott Sober and David Sloan Wilson (1998). This work continues with fruitful explorations of, for example, the role of cultural group selection in the evolution of morality (e.g., Tomasello, 2016).
As in the previous wave of evolutionary ethics, the contemporary investigation into the evolution of morality is a multi-faceted and often interdisciplinary debate. Unfortunately, many philosophers do not engage with the empirical research and do not appear to keep abreast of the latest findings. This reluctance to get their hands dirty leaves a lot of philosophical discussion stuck in high-level generalizations about morality that do not come to grips with the questions of how it evolved in our species, or what the implications of this might be for ethics. The present volume aims to constructively address this situation.

1.5  The Present Volume

As the title of our volume, Empirically Engaged Evolutionary Ethics, indicates, our contributors get into the details of evolutionary ethics, engaging with recent insights from evolutionary theory and other empirical work, while also examining the philosophical implications of these findings for ethics. The papers in this volume present a range of ideas in evolutionary ethics, going beyond the high-level debates that characterize a lot of philosophical discussion. The contributions to this volume can be categorized roughly as follows: Part I focuses on the nuts and bolts of how the sciences can shed light on claims in evolutionary ethics, engaging with developmental psychology, cognitive psychology, and primatology. Part II examines


evolutionary explanations of morality and their implications for meta-ethical debates. Part III considers the role of cultural evolution in discussions about evolutionary ethics.

The papers in Part I focus on empirical and interdisciplinary approaches in evolutionary ethics. Gordon Ingram and Camilo Moreno-Romero address the implications of developmental psychology for evolutionary ethics. Recently, cognitive scientists have paid a lot of attention to dual-process theories that distinguish between fast, automatic, and evolved impulses (type-1 processes) and slower, more deliberate forms of reasoning (type-2 processes). Such theories often posit a conflict between type-1 and type-2 processes: our speedier, intuitive moral judgments are said to be in conflict with our more deliberate thoughts. However, drawing on their own work as developmental psychologists and on a wide range of studies, Ingram and Moreno-Romero show that this is an oversimplification: to properly understand adult moral cognition, one needs to examine the ontogenetic pathways that develop in children as they mature. Their chapter provides an overview of recent dual-process theories of morality, particularly in developmental psychology, and looks at some objections to applying this framework to moral psychology. Central in this discussion is the outcome-to-intent shift, a transition from children’s reliance on more automatic processes to controlled, explicit reasoning processes, a shift already described by Jean Piaget. Prior to age eight or nine, children tend to rely on the outcome of an action, rather than its perceived intention, in their moral evaluations, whereas older children consider whether a harmful action was done intentionally or accidentally.
However, recent developmental evidence indicates that children can take intent into account from an early age, and that the relative importance of outcome versus intent depends on situational context (e.g., whether the agent will be punished), as well as on cultural context (with more emphasis on intent in urban USA and rural Europe than in many other parts of the world). Rather than seeing type-1 processes as relics of an evolved past, Ingram and Moreno-Romero show that type-1 processes can also be learned, and that individual, situational, and cultural variation plays a significant role in which of these processes wins out.

Neil Levy applies insights from cognitive psychology and evolutionary theory to the problem of hypocrisy. People are apt to change their beliefs in line with the prevailing political climate, leaving them open to the charge of hypocrisy. However, Levy argues that humans are very sensitive to external cues when they form and update beliefs. For example, we are subject to prestige bias, a heuristic that inclines us to believe what prestigious members of a group we identify with believe. As a result, our internal representations are relatively sparse. We may not even notice when we update our internal representations as a result of external cues from our social environment – hence, what can easily be interpreted as hypocrisy is in reality the result of a reconstructive process in which we do not notice that our internal, sparse representations are brought in line with social cues.

James Harrod examines the curious case of chimpanzee stone accumulations in West Africa. Studying chimpanzees is relevant for our understanding of the evolution of morality, given that they (together with bonobos) are our closest extant


relatives. Chimpanzees live in complex social groups with sophisticated social norms that involve such behaviors as social alliance building, mutual aid, and the removal of abusive dominant individuals. They also show a range of morally relevant emotions such as guilt and shame. However, debate continues on whether chimpanzee behavior can be described as moral. Harrod considers the following behavior in the context of evolutionary ethics: while showing a number of social displays, chimpanzees hurl stones at certain trees, resulting in stone accumulations. Rejecting the hypothesis that these stone accumulations are proto-religious behavior, he instead proposes that they are the result of rituals with moral significance: they involve inhibiting and redirecting a victim’s retaliatory aggression into a creative ritual performance. Instead of attacking a lower-ranked individual to retaliate against inequity, abuse, or harm suffered at the hands of a powerful conspecific, retaliatory aggression is redirected toward an inanimate object: a tree where the stones resulting from such performances accumulate over time.

The contributions to Part II examine how moral cognition might have evolved, what kinds of selective pressures might have led to it, and which broader philosophical implications we can draw from this. Marcus Arvan considers the neuroscientific evidence on moral cognition, which has expanded significantly in the past few decades. He interprets this evidence as showing that morality originates in cognitive adaptations that help us engage in prudential risk-aversion. Prudence is making instrumentally optimal choices that help our lives go well. Adaptations underlying prudence include mental time travel (which helps us foresee the consequences of potential actions), risk aversion, and taking the perspective of others. In seeing prudence as the root of morality, Arvan defends a broadly Hobbesian view.
According to Hobbes (1651), moral cognition is not instilled in us biologically, but is the result of sociocultural norms that instill patterns of social reward and punishment. Arvan agrees, clarifying that prudence has been biologically selected for, while morality is a cultural exaptation: a learned and culturally transmitted behavior that draws on the older biological adaptations underlying prudence.

Estelle Palao considers the relevance of normativity in non-human animals for the study of human morality. Normativity is a key element in the evolution of morality. In her view, to explain how morality evolved in our species, we need to investigate how the broader propensity for following norms evolved. She conceives of moral norms as a subset of broader social norms, where normativity means the ability to decide which behavior to adopt within a social context. Non-human animals have normativity in this broader sense; for example, chimpanzees are driven by norms about reciprocity in social exchanges such as grooming. Moreover, a wide range of animals (including primates, cetaceans, and birds) use tools, and normativity lies at the basis of learning how to make and use tools. Animals are capable of evaluating their individual experiences in the light of behavioral information they acquire socially, and they use such evaluations to conform their behavior to patterns of doing things within their group. Palao uses this broad normative framework to argue that morality is an exaptation that arises from normativity.

Alejandro Rosas takes aim at debunking arguments against morality. Very often, such debunking arguments do not only seek to undermine moral objectivity, but


morality more broadly. Authors such as Richard Joyce (2006) have proposed that humans are tricked into believing, through an evolved projection mechanism, that moral properties such as good or bad, or moral actions, characters, and rules, exist independently of our minds. Thus, by providing an evolutionary explanation of our sense of moral authority without postulating objective moral properties or rules, debunkers think they have thereby also undermined moral authority. Rosas explores an alternative to this debunking strategy: he argues that the authority we feel moral injunctions to have can be explained without positing a projection mechanism. In his view, moral obligations can have authority over desires directed solely at satisfying our individual well-being when they conflict in particular ways with the interests of others or of the group we belong to. Rosas shows that Darwin, in his attempt to naturalize morality, developed a Kantian account along these lines of the subjective experience of moral authority.

Part III looks at the importance of cultural evolution for evolutionary ethics. Andrés Carlos Luco examines Darwin’s notion of extended benevolence. Darwin (1871) anticipated that the human capacity for sympathy would eventually extend to all nations, all human beings, and even all sentient beings. He hypothesized that the moral sense evolved through group selection, which for him was a form of natural selection, as follows. Social instincts such as sympathy help animals to cooperate. Some animals acquire the ability to further deliberate on past actions when social instincts conflict with self-preservation, leading to more sophisticated social emotions such as regret and shame. In the human lineage, language was added to these emotions, which together with social emulation helped humans to learn sophisticated social norms, and eventually, to reason.
Building on Darwin, Luco argues that extended benevolence is the outcome of cultural evolution, which we can witness in the rise of democracies, laws to protect animal welfare, and women’s rights. He draws on sociological findings to show a strong correlation between these extensions of benevolence, arguing that they owe their existence in large part to emancipative values, which he describes as normative attitudes. Luco then advances a cultural evolutionary explanation for the spread of these values: rituals and other cultural practices facilitate the cultural evolution of extended benevolence, helping people to make more contact with otherwise distant others and to take their perspective.

Matthew Braddock focuses on the implications of the cultural evolution of moral norms for debunking arguments against moral realism, and the implications of this for theism. He argues that unguided cultural evolution could easily have led humans to moral norms and judgments that are mostly false by our current lights. Braddock allows that evolution through natural selection has likely instilled some moral norms that are fairly robust in the natural world, such as “killing one’s own offspring is bad,” but notes that practices such as infanticide indicate their cultural malleability. Therefore, if we consider nearby possible worlds where there are slight variations in cultural evolutionary processes, it seems plausible that human beings in such worlds would end up with quite different moral norms, even if we keep their evolved cognitive capacities constant. A moral objectivist would have to allow that we are very lucky that we ended up with the moral norms we have, rather than with different ones that we would not accept by our present lights. In contrast, Braddock


points out that if we take (Christian) theism rather than naturalism as our starting point, we should not be surprised by our basic moral reliability. He cites three reasons for this: divine omnibenevolence, imago Dei (humans are created in God’s image), and tradition-specific claims that humans have a basic moral sense, which God has instilled in us.

Alfredo Robles-Zamora shows which directions evolutionary ethics can take if applied to a Latin American context, specifically the concept of Mesoamerican cosmovision. Cosmovision is what enables and conditions our experience and interpretations of the world through practices, which involve forms of tacit knowledge that can be transmitted between generations. Cosmovision can be integrated into the extended evolutionary synthesis, notably through niche construction, which emphasizes the importance of transmission processes that are not purely genetic. Drawing on this framework, Robles-Zamora hypothesizes that the cosmovision of historical Mesoamerican cultures contains a nucleus of practices and relations shared by these cultures that has retained some stability over several thousands of years. The cultural evolution of morality can be seen in this context. In Mesoamerican cultures, we find moral systems that guide not only behavior across societies, but also interactions with the environment, and these have persisted in spite of colonization and missionization.

Our volume, both by the geographic diversity of its authors and their engagement with a range of different disciplines, shows that evolutionary ethics benefits from a fruitful exchange with diverse cultural contexts and methodological approaches.

References

Aurobindo. (1914–1918 [2005]). The life divine. Pondicherry, India: Sri Aurobindo Ashram Press.
Chambers, R. (1844 [1994]). Vestiges of the natural history of creation and other evolutionary writings. Chicago: University of Chicago Press.
Cronk, L., & Leech, B. L. (2013). Meeting at Grand Central: Understanding the social and evolutionary roots of cooperation. Princeton, NJ: Princeton University Press.
Darwin, C. (1859). On the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life. London: John Murray.
Darwin, C. (1871). The descent of man, and selection in relation to sex. London: John Murray.
De Smedt, J., & De Cruz, H. (2020). The challenge of evolution to religion. Cambridge, UK: Cambridge University Press.
Flanagan, O., Sarkissian, H., & Wong, D. (2008). Naturalizing ethics. In W. Sinnott-Armstrong (Ed.), Moral psychology. Vol. 1. The evolution of morality: Adaptations and innateness (pp. 1–25). Cambridge, MA: MIT Press.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., et al. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. In P. Devine & A. Plant (Eds.), Advances in experimental social psychology (pp. 55–130). Amsterdam: Elsevier.
Hobbes, T. (1651). Leviathan or the matter, forme and power of a common-wealth ecclesiasticall and civil. London: Andrew Crooke.
Hume, D. (1739–1740 [2007]). A treatise of human nature (D. F. Norton & M. J. Norton, Eds.). Oxford, UK: Clarendon Press.


Jin, X. (2019). Translation and transmutation: The Origin of Species in China. British Journal for the History of Science, 52(1), 117–141.
Joyce, R. (2006). The evolution of morality. Cambridge, MA: MIT Press.
Kant, I. (1785 [1998]). Groundwork of the metaphysics of morals (M. Gregor, Trans. & Ed.). Cambridge, UK: Cambridge University Press.
Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis, 63(3), 190–194.
Kropotkin, P. (1902 [1989]). Mutual aid: A factor of evolution. Montreal, QC/New York: Black Rose Books.
Kropotkin, P. (1924). Ethics: Origin and development (L. S. Friedland & J. R. Piroshnikoff, Trans.). Binghamton, NY/New York: Dial Press.
Laland, K. N., Uller, T., Feldman, M. W., Sterelny, K., Müller, G. B., Moczek, A., et al. (2015). The extended evolutionary synthesis: Its structure, assumptions and predictions. Proceedings of the Royal Society B: Biological Sciences, 282(1813), 20151019.
Marshall, J. (2001). The Lakota way: Stories and lessons for living. New York: Penguin.
Mengzi. (4th century BCE [2008]). Mengzi: With selections from traditional commentaries (B. Van Norden, Trans.). Indianapolis, IN: Hackett.
Moore, G. E. (1903). Principia Ethica. Cambridge, UK: Cambridge University Press.
Mozi. (5th–3rd c. BCE [2009]). The Mozi: A complete translation (I. Johnston, Ed.). Hong Kong, China: The Chinese University of Hong Kong Press.
Pigden, C. (2019). No-ought-from-is, the naturalistic fallacy and the fact/value distinction: The history of a mistake. In N. Sinclair (Ed.), The naturalistic fallacy (pp. 73–95). Oxford, NY: Oxford University Press.
Rée, P. (1877). Der Ursprung der moralischen Empfindungen. Chemnitz, Germany: Ernst Schmeitzner.
Ritzinger, J. R. (2013). Dependent co-evolution: Kropotkin’s theory of mutual aid and its appropriation by Chinese Buddhists. Chung-Hwa Buddhist Journal, 26, 89–112.
Ruse, M. (2010). The biological sciences can act as a ground for ethics. In F. J. Ayala & R. Arp (Eds.), Contemporary debates in philosophy of biology (pp. 297–315). Chichester, UK: Wiley-Blackwell.
Ruse, M., & Wilson, E. O. (1986). Moral philosophy as applied science. Philosophy, 61(236), 173–192.
Schopenhauer, A. (1840 [1903]). The basis of morality (A. Bullock, Trans.). London: Swan Sonnenschein.
Schwitzgebel, E., & Cushman, F. (2012). Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non-philosophers. Mind & Language, 27(2), 135–153.
Sidgwick, H. (1876). The theory of evolution in its application to practice. Mind, 1(1), 52–67.
Sidgwick, H. (1907). The methods of ethics (7th ed.). London: Macmillan and Co.
Sober, E., & Wilson, D. S. (1998). Unto others: The evolution and psychology of unselfish behavior. Cambridge, MA: Harvard University Press.
Spencer, H. (1855). The principles of psychology. London: Longman, Brown, Green, & Longmans.
Todes, D. P. (1989). Darwin without Malthus: The struggle for existence in Russian evolutionary thought. New York: Oxford University Press.
Tomasello, M. (2016). A natural history of human morality. Cambridge, MA: Harvard University Press.
Wang, Y. (1527 [2014]). Questions on the Great Learning. In J. Tiwald & B. Van Norden (Eds.), Readings in later Chinese philosophy: Han dynasty to the 20th century (P. J. Ivanhoe, Trans.) (pp. 238–250). Indianapolis, IN: Hackett.
Waters, A. (Ed.). (2004). American Indian thought. Malden, MA: Blackwell.
Wilson, E. O. (1975). Sociobiology: The new synthesis. Cambridge, MA: Harvard University Press.

Part I

The Nuts and Bolts of Evolutionary Ethics

Chapter 2

Dual-Process Theories, Cognitive Decoupling and the Outcome-to-Intent Shift: A Developmental Perspective on Evolutionary Ethics

Gordon P. D. Ingram and Camilo Moreno-Romero

Abstract  A central tenet of evolutionary ethics is that as a result of evolutionary processes, humans tend to respond in certain ways to particular moral problems. Various authors have posited “dual-process” conflicts between “fast”, automatic, evolved impulses, and “slower”, controlled, reasoned judgements. In this chapter we argue that the evolutionary sources of automatic moral judgements are diverse, and include some intuitive processes (especially, reading other people’s intentions) that are quite sophisticated in terms of social cognition. In our view, controlled, reflective moral reasoning represents the activity of higher-level processes that arbitrate between conflicting inputs from diverse automatic heuristics, in response to normative concerns. The integration and subjugation of automatic responses to more reflective ones is a developmental process that takes place at varying rates in different people and in diverse cultural contexts. We consider how approaches that represent cognition in terms of dual processes can be rendered more sophisticated by a consideration of evolutionary developmental psychology. We then apply this more developmentally aware approach to an extended example of the phenomenon in children’s moral development known as the outcome-to-intent shift. We outline a model of how automatic and controlled processes may be integrated in children’s social learning in culturally variable ways.

Keywords  Child development · Cognitive development · Cultural differences · Dual-process theories · Evolutionary developmental psychology · Executive functioning · Moral development · Moral reasoning · Outcome-to-intent shift · Piaget · Theory of mind

G. P. D. Ingram (*) · C. Moreno-Romero Department of Psychology, Universidad de los Andes, Bogotá, Colombia e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. De Smedt, H. De Cruz (eds.), Empirically Engaged Evolutionary Ethics, Synthese Library 437, https://doi.org/10.1007/978-3-030-68802-8_2


2.1  Introduction

Most evolutionarily inspired approaches to psychology, including moral psychology, view adult cognitive functioning as designed by evolution to solve adult adaptive problems. An example is the moral foundations theory of Haidt and colleagues (Graham et al., 2013), which sees moral cognition as responding to different kinds of triggers, each with relevance for differing aspects of fitness, including care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation (see Suhler & Churchland, 2011, for an extended critique). All these themes are couched in terms of the decisions that adult individuals have to make about how to prioritize the positive part of each dyad and avoid the negative. The same focus on adult cognition, and on selective pressures that impact adults (for example, attracting a mate), is present in evolutionary psychological work on morality by Tooby and Cosmides (2010) and in the “universal moral grammar” proposed by Mikhail (2007) along Chomskyan lines.

But such a conception of moral adaptations is a simplification: adult human psychology does not spring to life fully formed like Athena from the head of Zeus. As Tomasello (2019, p. 22) recently put it:

… the target of natural selection is not adult ‘traits,’ as in classical accounts, but rather ontogenetic pathways. That is, there is natural selection not just for adult end points but also for the construction process that brings them into existence.

In this chapter, we explore the potential contribution to evolutionary ethics of viewing adult moral cognition as rooted in ontogenetic pathways that develop in children as they mature. In particular, we suggest that this kind of evolutionary-developmental perspective (as elaborated by Bjorklund & Pellegrini, 2002; and by Tomasello, 2019) can enrich our understanding of dual-process theories of cognition. Dual-process theories originated from a fusion of several traditions in cognitive science, including the classic study on controlled and automatic attention by Shiffrin and Schneider (1977), the neuroscientific work of Panksepp (2005), and the heuristics-and-biases tradition in behavioral economics developed by Kahneman and Tversky (see Kahneman, 2011, for a popular account). According to dual-process theorists, the human mind involves two fundamentally different types of cognition. Depending on their origin, these distinctions have received different conceptual labels, such as implicit/explicit, intuitive/deliberative, online/offline, and primary-process/ secondary-­process. Kahneman (2011) tried to unify these distinctions in describing cognition in terms of two “systems”: System 1 is “hot”, fast, automatic and unconscious, while System 2 is “cold”, slow, controlled and conscious. More recently, the older terms “Type 1” and “Type 2” have been preferred, particularly in the work of Evans and Stanovich (2013; see also Evans, 2020; Stanovich, 2011), who argued that Type 1 processes represent a diverse “grab-bag” of automatic processes that are autonomous from one another, and may arise from many different cognitive systems within the mind, rather than forming one coherent system. Before turning to a consideration of moral development, we address two criticisms that have been levelled quite recently at dual-process theories of cognition in

2  Dual-Process Theories, Cognitive Decoupling and the Outcome-to-Intent Shift…


general (earlier influential criticisms, e.g. by Keren & Schul, 2009, were reviewed by Evans & Stanovich, 2013). The first criticism is a recurring one that was elaborated most recently by Melnikoff and Bargh (2018). They held that the tendency to divide cognitive processes into groups of two is itself an artefact of human binary reasoning styles, without necessarily much basis in reality. They questioned whether there is really much evidence for the claim that cognitive traits such as speed of response, automaticity, impenetrability and evolutionary age cluster together into one coherent system. As stated above, dual-process theorists themselves have gone some way to addressing this with their emphasis on Type 1 processes as a “grab-bag” of automatic responses, an “autonomous set of systems” rather than a single system (Evans & Stanovich, 2013; see also the reply to Melnikoff & Bargh by Pennycook, De Neys, Evans, Stanovich, & Thompson, 2018). These automatic responses encompass both “instinctive” responses to things like loud noises or emotional displays, and learned associations and activities that we once had to attend to consciously but no longer need to, such as understanding a written word or riding a bicycle. That is, evolutionarily old processes are necessarily automated, but automated processes are not necessarily old.
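A stock illustration of the contrast between a fast Type 1 response and a slower Type 2 override, drawn from the heuristics-and-biases tradition cited above (Kahneman, 2011), is the bat-and-ball problem. The sketch below is our own illustrative code, not taken from any of the cited sources:

```python
# Bat-and-ball problem: a bat and a ball cost $1.10 in total, and the bat
# costs $1.00 more than the ball. The fast (Type 1) answer simply subtracts
# the two figures; the deliberate (Type 2) answer solves the equation
#   ball + (ball + 1.00) = 1.10  =>  ball = 0.05.

total, difference = 1.10, 1.00

heuristic_ball = total - difference        # intuitive but wrong: $0.10
algebraic_ball = (total - difference) / 2  # correct: $0.05

print(f"Type 1 answer: ${heuristic_ball:.2f}, Type 2 answer: ${algebraic_ball:.2f}")
```

Note that the heuristic answer violates the stated constraint (a $1.05 bat is only $0.95 more than a $0.10 ball); detecting and correcting this requires the controlled, rule-following processing attributed to Type 2.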
Another step towards moving away from overly simplistic dichotomies is the suggestion by both Evans and Stanovich that controlled Type 2 processes may be divided into two sub-types: one pertaining to the “algorithmic mind”, which follows rules of symbolic processing to arrive at a goal, and the other pertaining to the “reflective mind”, which refers to:

the tendency to collect information before making up one’s mind, the tendency to seek various points of view before coming to a conclusion, the disposition to think extensively about a problem before responding, the tendency to calibrate the degree of strength of one’s opinion to the degree of evidence available, the tendency to think about future consequences before taking action, and the tendency to explicitly weigh pluses and minuses of situations before making a decision. (Evans & Stanovich, 2013, p. 230; see also Stanovich, West, & Toplak, 2011)

A second line of criticism, articulated by Railton (2014), is that intuitive, Type 1 reactions are neither necessarily fast nor innate, because they can embody a lifetime of experience summed up in a single emotional judgement. This process of experience or getting to know one’s environment, so that one can have adaptive reactions to it, he calls “attunement”. He gave an example of a lawyer who relies on tried and trusted formal rules of setting out her case, until she intuitively realizes that this is not getting through to the jury, and suddenly changes her approach to appeal more to their emotions, with positive results (Fig. 2.1). As with Melnikoff and Bargh’s (2018) critique, this points to a rejection of the clustering together of various ways of categorizing cognitive processes, since implicit or intuitive knowledge is not always “fast” or automatic. However, as Pennycook et al. (2018, p. 668) pointed out: DPT [dual-process theory] advocates such as Evans and Stanovich (2013) have explicitly argued against assuming an alignment of the numerous characteristics that have been assigned to so-called ‘Type 1’ and ‘Type 2’ processes over the years (see also Evans, 2012; Stanovich et al., 2011). Instead, they distinguish between defining features – those that are


G. P. D. Ingram and C. Moreno-Romero

used to define the two-types distinction – and typical correlates – those that various researchers have associated with the two-types distinction.

Fig. 2.1  Stanovich’s tripartite model of the mind. (After Evans & Stanovich, 2013, Fig. 1)

As pointed out above, the autonomous set of systems captured in the idea of Type 1 processes can include some that are fully automatic and innate, and others that reflect a lifetime of learning via what Railton refers to as “attunement”. Furthermore, an important point not covered by Railton is that the learning that “attunes” intuition often involves deliberative, Type 2 processes: one suffers a setback, reflects on what went wrong, and tries a different approach, which then becomes ingrained if it works better. In his lawyer example, the conscious realization that she was not getting through to her audience led the lawyer to change her behavior in this instance, and probably on certain occasions in the future as well. Railton’s example is thus an illustration of how the two types of processing – often presented as in competition with each other, especially in experimental designs – can actually function as a coherent system, ensuring that an individual learns new strategies when it is adaptive to do so.

The notion of attunement nevertheless holds promise for looking at how intuitions change during development: can we identify the sorts of social and cognitive processes (including Type 2 processes) responsible for molding or attuning Type 1 intuitive reactions in adaptive ways? One of the goals of this chapter is to show how functional cognitive systems (and the development of these systems through childhood) actually involve elaborate forms of interaction between Type 1 and Type 2 processes, influenced by cultural norms and feedback from the social environment.

Since Type 1 processes are commonly seen as more evolutionarily ancient than Type 2 processes (Stanovich, 2011), it has been natural to assume that they develop earlier in children (Barrouillet, 2011; Evans, 2011). In order to understand them from an evolutionary point of view, and how they interact with later-developing Type 2 processes, we therefore turn to the theoretical perspective of evolutionary


developmental psychology. The key insight of evolutionary developmental psychology is that adult human psychology did not evolve as a set of static adaptations to adult life. Instead, what evolved was a set of interconnected developmental processes, having divergent courses, rates and termini in different individuals, with newer – in some sense higher-level and more integrative – cognitive processes layered on top of more situation-specific heuristics.

In introducing an evolutionary-developmental program to psychology, Bjorklund and Pellegrini (2000) argued that a developmental perspective is vital for a better understanding of the evolved psychology of a slow-developing species such as humans. Their approach was inspired by the growing influence of “evo-devo” theory on evolutionary biology, where the idea that developmental biases place important constraints on the evolution of complexity and diversity in all organisms has become widely accepted (Brakefield, 2006; Carroll, 2005; Lickliter & Honeycutt, 2013).

An important insight of evolutionary developmental psychology is that certain forms of cognitive functioning can offer adaptive value that is unique to individuals in certain age ranges: these are known as ontogenetic adaptations. In contrast, deferred adaptations are not of much adaptive value to a child at their current point in development, but rather serve as preparations for behavior that will have adaptive value later in the lifespan. According to the principles of evolutionary developmental psychology, then, it could be the case that evolved Type 1 processes are either ontogenetic adaptations to the environments in which children tend to find themselves, or deferred adaptations that serve as building blocks for later-developing algorithmic processes. We return to this idea in the conclusion.
More recently, Bjorklund (2015) has augmented the concepts of ontogenetic and deferred adaptations with a new concept, that of “evolved probabilistic cognitive mechanisms.” This concept is based on the insight that evolution does not tend to specify patterns of behavior very precisely in the genotype of generalist species, like humans, that live in widely varying environments and thus need to show a lot of flexibility in their responses to different stimuli. Instead, adaptations direct an individual’s attention to fitness-relevant stimuli in the environment. Two examples cited by Bjorklund are learning a fear of snakes and spiders (LoBue & Rakison, 2013), and picking up relevant emotional information from human facial expressions (Farroni, Massaccesi, Menon, & Johnson, 2007). With their attention directed to the stimuli most relevant to their survival and prosperity (in fitness terms), human children then have an opportunity to use probabilistic social learning to acquire effective responses from other, more knowledgeable individuals who are attending to the same stimuli at that moment in time.

According to this view, then, what evolves in children is a set of interconnected developmental or learning processes, which lead to a gradual layering of acquired cognitive processes on top of innate heuristics (cf. Rochat, 2015). Because learning is probabilistic and the initial amount of attention paid to stimuli varies genetically, these developmental processes have divergent courses, rates and termini in different individuals. This conceptualization recalls Railton’s (2014) argument that “intuitive” moral judgements are not crude heuristics, but complex learned responses of the affective system. Although his idea of “attunement” (discussed above) is not


meant to capture the development of an evolved cognitive mechanism, it may not be so different from Bjorklund’s conception of probabilistic learning. We have instinctive attentional biases for paying attention to faces and other implicit emotional information (Farroni et al., 2007), and then through experience we probabilistically learn how to read the signs that people are sympathetic to our argument, that we are provoking hostility, that we are boring them, and so on. This intuition becomes a resource that people can then use more reflectively, as in Railton’s lawyer example. To give a more systematic example of how evolved probabilistic mechanisms may be similar to processes of attunement, in the rest of the chapter we consider the case of the outcome-to-intent shift in moral development.

2.2  The Outcome-to-Intent Shift

Although they have not always phrased cognitive development in terms of two kinds of processes, many developmental psychologists from Piaget onwards have tried to explain how children transition from more automatic (informal, implicit) to more controlled (formal, explicit) types of reasoning. An important theoretical advance was made by Karmiloff-Smith (1992), who explored several examples from non-social cognition of interactions between the two types of processes. She used the idea of “representational redescription” to describe the sort of U-shaped curve that occurs in the development of certain cognitive abilities, where initial implicit performance can actually be better than subsequent explicit performance, until the latter has been calibrated and integrated with implicit skills to yield explicit behavioral mastery (Baylor, 2001; Siegler, 2004). For example, children’s use of the irregular past tense in common English verbs is marked by an initial predominance of correct performance, apparently due to exact imitation; later, with more linguistic experience, correct performance often decreases due to “overgeneralization” of regular forms (e.g. *maked for made); and finally, the calibration of different sets of rules enables children to use both regular and irregular past tenses correctly (Pauls, Macha, & Petermann, 2013; Siegler, 2004). More recently, in the domain of social cognition, a similar pattern has been proposed as an explanation of the puzzling incongruity between apparent implicit expectations of false beliefs from late infancy and the tendency of three-year-olds to fail explicit false-belief tasks (Grosse Wiesmann, Friederici, Singer, & Steinbeis, 2017). But the most relevant example for evolutionary ethics – and one for which a U-shaped curve has also recently been proposed (Margoni & Surian, 2016) – is a feature of children’s moral development known as the outcome-to-intent shift.1
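The U-shaped developmental pattern just described can be illustrated with a toy model. The code below is purely a sketch under invented assumptions: performance blends an imitative route (which gets irregulars right) with a rule-based route that initially overgeneralizes and is only later calibrated to exceptions. None of the parameters come from the cited studies.

```python
# Toy illustration of a U-shaped developmental curve for irregular past
# tenses: an imitative route is gradually displaced by a rule route that at
# first overgeneralizes ("*maked" for "made") and is later calibrated to
# handle exceptions. All parameters are hypothetical.

def p_correct_irregular(age_years: float) -> float:
    imitation_acc = 0.95                                     # imitation gets irregulars right
    rule_reliance = min(1.0, max(0.0, (age_years - 2) / 4))  # rule use ramps up, ages 2-6
    calibration = min(1.0, max(0.0, (age_years - 4) / 4))    # exceptions learned, ages 4-8
    rule_acc = calibration                                   # uncalibrated rule says "*maked"
    return (1 - rule_reliance) * imitation_acc + rule_reliance * rule_acc

for age in (2, 4, 6, 8):
    print(age, round(p_correct_irregular(age), 2))
```

Running the sketch shows high accuracy at age 2, a dip in the middle years as the uncalibrated rule takes over, and recovery once the rule is calibrated: the U shape that representational redescription is meant to explain.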

1  The term “outcome-to-intent shift” seems to have been coined by Cushman, Sheketoff, Wharton, and Carey (2013). The term used by Piaget (1932) for the prioritizing of outcomes by younger children was “moral realism,” but this has fallen into disuse, perhaps due to the risk of confusion with the philosophical position of the same name, which has quite a different meaning.


The basic idea of the outcome-to-intent shift is that young children tend to give more weight to the outcome than the intention of an action when making moral judgements, a pattern that is reversed in older children and adults. Historically, the classic demonstration of the phenomenon was by Piaget (1932). He showed that when judging how bad a rule violation is, younger children (before age 8 or 9) tend to focus on what might be called the physical extent of the rule violation’s consequences, whereas older children (9–10 years onwards) tend to focus on the intentions behind it. This was in keeping with Piaget’s wider perspective on the development of moral reasoning, which he characterized in terms of a shift from heteronomous reasoning (rules are seen as having to be obeyed in all circumstances, with any deviations being bad) to autonomous reasoning (rules are seen as social constructions that are flexible depending on the context, and deviations can be excused in various ways).

It is a common misrepresentation of Piaget’s work to view these stages as irreversible “phases” through which children pass without any possibility of return. In fact, as with other systems of stages in Piaget’s thought, they are more like “styles of reasoning” which are more or less accessible at different ages. Even adults may use heteronomous reasoning quite frequently, for example if they are tired or stressed, or if the social context is one in which they are treated like children. The developmental difference is that adults and older children also have the autonomous style of reasoning available to them as an option, whereas younger children do not. Thus, there may be parallels with the idea of dual processes in adult reasoning.

Experimentally, one tactic that Piaget used to demonstrate the shift between heteronomous and autonomous moral reasoning was to compare good intentions with neutral or negative ones; for example:

There was once a little girl who was called Marie.
She wanted to give her mother a nice surprise and cut out a piece of sewing for her. But she didn’t know how to use the scissors properly and cut a big hole in her dress. A little girl called Margaret went and took her mother’s scissors one day when her mother was out. She played with them for a bit. Then, as she didn’t know how to use them properly, she made a little hole in her dress. (Piaget, 1932, p. 122)

Asked “who is naughtier”, younger children tended to say the first child, Marie, while older children indicated the second child, Margaret. At this point, it is worth noting that there was an important confound in the example of the two vignettes given above, as in many other examples used by Piaget. In order to judge that Margaret’s action is worse, a child had to ignore (inhibit) the greater damage, or harm, done by Marie. In subsequent research it has been more common to control for this factor by comparing negative with neutral or positive intentions while keeping the level of damage in the outcomes constant – that is, comparing failed attempts at rule transgressions versus actual, accidental transgressions – but the underlying principle is the same. As with much of Piaget’s stage-based theory, more careful experimental work from the 1970s onwards (Berg-Cross, 1975; Hebble, 1971; King, 1971; Shultz, Wright, & Schleifer, 1986) showed that children are capable of using a more advanced stage of reasoning at younger ages than Piaget claimed, while still


generally supporting his overall picture of an age-based transition in preferences for the two reasoning styles. It turned out that if the level of damage in the outcome was kept constant, some children as young as 4 years of age would say that the girl with the negative intention was naughtier, or that the girl with the positive intention was less naughty. For example, Zelazo, Helwig, and Lau (1996) found that 3-year-olds based their judgments of acceptability and punishment solely on an action’s outcomes, but from 4 years of age children were able to use rules in which both intentions and outcomes were included, particularly if the intentions were negative (see also Helwig, Zelazo, & Wilson, 2001).

Hence the term “outcome-to-intent shift” may be a misnomer. While there does appear to be a shift in emphasis from favoring outcomes to favoring intentions as triggers of moral judgements when the two are pitted against each other, the term is often glossed as the claim that young children assign no importance whatsoever to intentions when making moral judgements, which is not correct. The developmental change that takes place instead seems quite specific to accidents. When confronted with an accidental action that results in more harm or damage but involves no harmful intent, younger children do tend to judge it as worse than a failed attempt at harm or damage. As they get older, children come to assign more weight to intentions, even in the face of more damaging outcomes.

2.3  Using Dual-Process Models to Explain the Outcome-to-Intent Shift

This more recent evidence would seem to invalidate Piaget’s preferred explanation for the outcome-to-intent shift – that intention-related concepts are simply not available to young children in their moral reasoning, due to a lack of perspective-taking. It seems more that information about both intentions and outcomes is always available, but that when the two types of information are in conflict (as with accidents) children initially prioritize the outcome and later prioritize the intention. One way of modeling this is to consider the two types of information as handled by different processes that develop separately. This was the approach of Cushman et al. (2013; see also Cushman, 2013), who argued that moral judgement in humans is governed by two processes: a causality-sensitive process that infers whether an agent is physically responsible for an outcome, and an intentionality-sensitive process that infers culpability by considering the intentions behind the agent’s actions. They measured both moral judgements of character (“Is X a bad, naughty boy/girl?”) and recommendations of punishment (“Should X be punished?”) through moral vignettes of a child character either acting with neutral intentions but getting a bad outcome, or acting with negative intentions but failing to achieve these. They hypothesized that the effect of the outcome-to-intent shift would be seen first in abstract judgements of badness, before these intent-based judgements came to constrain punishment decisions. They demonstrated that for accidental wrongs, the outcome-to-intent shift took place in their sample at about the age of 5 for abstract judgements (an


earlier age than Piaget had supposed) and about age 7 for punishment recommendations. They also replicated the more recent finding that children tend to judge failed negative intentions harshly from early in childhood (the youngest children in their sample were 4 years old). This latter, unpredicted replication was, however, a problem for Cushman and colleagues’ theory, since it showed intent-based moral judgement at an earlier age than they predicted it would appear. That is, there was an asymmetry between children’s early negative judgements of failed negative attempts (both abstract judgements and recommendations of punishment) and their later-developing exoneration of accidental wrongs (developing later for punishment than for abstract judgements). They tried to resolve this by suggesting that perhaps children had an automatic negative response to “bad acts,” which included the attempt to perform such acts (e.g., an attempt to kick another child):

We suggest that children have an early developing automatic negative reaction to “bad acts”, along the same lines as their capacity for their negative reaction to “bad outcomes” that gets harnessed during the preschool years to concepts of naughtiness, punishability, and wrongness (concepts that are not initially differentiated). (Cushman et al., 2013, p. 18)

However, this could be seen as begging the question, since they provided no theoretical reason why a child would automatically interpret an unsuccessful bad intention as if it were a bad act, but not interpret an unsuccessful good intention as if it were a good act. Furthermore, they did not explain why the “constraint” of the intentional process over the causal process developed for abstract judgements sooner than for punishment when children were evaluating accidental wrongs, but was present early on for both abstract judgements and punishment when they interpreted failed attempts at wrongs. If information about both causality and intention is used from as early as 3 years when judging failed negative intentions, it is difficult to explain, using the framework of separate processes, why it is not used until 5 years when judging accidental violations in an abstract way, and not until 7 years when judging whether they deserve punishment.

Our previous consideration of dual-process theories may help us construct a more adequate theoretical explanation for the outcome-to-intent shift, particularly with respect to the change in attitudes for the accidental condition (which, as argued above, seems to be the key change during development). Cushman and colleagues referred to their theoretical model as a “two-process” and “dual-system” model, in what seems to be quite a loose usage of the term (following Greene, whom they cited in the article), simply identifying two different processes that respond to different aspects of the same situation, rather than tapping into the more developed dual-process tradition of theorists like Kahneman, Stanovich, and Evans.
Indeed, they explicitly claimed to be agnostic about whether the two types of moral judgement (causal and intentional) that they proposed were implicit or explicit: “the two process model suggests that they operate in parallel among adults and is agnostic about their status as explicit conceptual systems versus automatic processes of moral judgment” (Cushman et al., 2013, p. 9). Use of the term “dual-system” seems just to imply that they had identified two processes that worked independently and


responded to different facets of experience. In contrast, here we propose abandoning agnosticism and examining the implications of supposing that these two processes of causal and intentional inference are in fact both automatic.

Research on causal reasoning has revealed an early sensitivity to causal inferences associated with the behavior of both objects and agents (Perner & Roessler, 2012). Many studies have used a habituation/dishabituation paradigm, in which infants are habituated to a specific type of causal event, and then presented with a different type of event during test (dishabituation) trials. A longer looking time in the dishabituation trials, compared to control trials with another causal form, is taken as evidence of causal inference (Bélanger & Desrochers, 2001). This method has shown that even 4-month-old infants are sensitive to typical spatial and temporal cues about causal motion, whose absence causes dishabituation (Cohen & Amsel, 1998; Muentener & Bonawitz, 2017). Causal inferences in infancy are not restricted to objects: by 6 months of age, infants dishabituated when a person reached for a new object if the same person had reached for a different object in previous trials (Cicchino & Rakison, 2008). This shows that infants can encode goals of intentional human action with reference to objects, and that automatic causal reasoning may emerge from early statistical learning about interactions between agents and objects (Kushnir, Xu, & Wellman, 2010; Wu, Gopnik, Richardson, & Kirkham, 2011). As these latter studies indicate, we also know that processing of intentions (goals) can happen very early in life, and probably automatically (Woodward, 2009).
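The inferential logic of the looking-time measure can be sketched in a few lines of code. This is a deliberately crude illustration with invented numbers; real infant studies rely on within-subject designs and proper inferential statistics:

```python
# Sketch of the habituation/dishabituation logic: longer looking at a test
# event that violates the habituated causal form, relative to a control
# event, is read as evidence that the infant encoded the causal structure.
# Looking times (in seconds) are invented for illustration.
from statistics import mean

test_looks = [9.1, 8.4, 10.2, 7.9]    # dishabituation trials (novel causal form)
control_looks = [5.0, 4.6, 5.8, 5.2]  # control trials (familiar causal form)

def shows_causal_sensitivity(test, control, margin_s=1.0):
    """Crude criterion: mean looking time exceeds control by some margin."""
    return mean(test) - mean(control) > margin_s

print(shows_causal_sensitivity(test_looks, control_looks))
```

With these invented values the criterion is met, which in the paradigm's logic would count as evidence that the infants noticed the change in causal form.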
Even rapid belief ascription via theory of mind may sometimes be automatic, in both infants (Scott & Baillargeon, 2017) and adults (Schneider, Slaughter, & Dux, 2017), although the infant results are controversial (Kulke, Reiß, Krist, & Rakoczy, 2018, described several failures to replicate them). More directly relevant to the outcome-to-intent shift are the moral developmental studies of Hamlin and colleagues (e.g., Hamlin, 2013), which showed that preverbal infants preferred toys/characters with good intentions. For example, 8-month-old infants preferred to play with toys that had accidentally hindered another toy’s progress up a slope, or that had tried but failed to help the other toy, rather than with those that had tried to hinder the toy but failed. In other words, the outcome for all the characters in these situations was the same – they failed to reach their goal – but infants chose which character to play with based on their supposed intentions (for similar results with preschoolers, see Van de Vondervoort & Hamlin, 2018). Since 8-month-old infants are generally considered too young to have explicit reasoning about intentional concepts, the assumption is that they were automatically encoding the characters’ intentions and using these to make moral judgements.

This kind of automatic, “sympathetic” reaction may also explain the findings of Chernyak and Sobel (2016). They designed a game in which 4-year-old children watched a puppet destroy a tower made by the child and another puppet, either intentionally or accidentally according to experimental condition, while the experimenter was not watching. Then, the experimenter punished the first puppet by not giving it stickers as a prize and instead giving all the stickers to the child. The authors found that children spontaneously shared more stickers with the puppet that


accidentally destroyed the tower than with the puppet that intentionally destroyed it, correcting the inequality and unfair punishment given by the experimenter. This was the case even though the same children, when asked to reflect explicitly on the goodness or badness of the people they had helped, tended to judge them according to a typical outcome/intention shift (that is, the youngest children used only the outcome in their explicit judgements, while older children used both outcome and intention). Still more direct evidence about the flexible role of intentions in moral judgements comes from a study showing that when asked to judge a character’s action, younger children used information about outcomes in judging accidents while older children used information about intentions; but when the question was rephrased to ask instead about the moral worth of the character themselves, even younger children could use information about intentions (Nobes, Panagiotaki, & Bartholomew, 2016). What this change in the phrasing of questions shows is that children can automatically focus on intentions – even positive intentions – in making moral judgements. The outcome-to-intent shift may thus be sensitive to experimental framing, due to the way in which the particular wording of questions directs children’s attention either to the agents’ moral worth or to the valence of specific actions. This result was partially replicated in our own recent study (Moreno-Romero & Ingram, 2019). 
Children from 3 to 9 years old were asked to provide acceptability and punishment judgments for four different situations that combined intentions and outcomes factorially (i.e., the four combinations of positivity and negativity for both intentions and outcomes). Additionally, they had to decide how many points they would like to share with the main character in each situation, given its behavior; this can be considered a third-party punishment task, in which a behavioral reward is decided on the basis of the moral judgement. The study found that children showed no evidence of an outcome-to-intent shift when asked to make abstract judgements of story characters: from an early age children considered both outcome and intentions, and characters responsible for accidents (with positive intentions but negative outcomes) were judged as better and less deserving of punishment than characters whose negative intentions failed. However, there was some evidence of an outcome-to-intent shift in their behavioral reward decisions: older children exhibited reward decisions in line with their abstract judgements, but younger children’s allocations were not congruent with their judgements of goodness and punishment (Moreno-Romero & Ingram, 2019).

Given this evidence from diverse studies, it is clear that inference about both causality and intentionality can be early-developing and presumably often automatic. Therefore, both are probably best modelled as Type 1 processes.
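The factorial design just described can be laid out schematically. The cell labels below are our own shorthand for the standard vignette types, not the wording used in the study’s materials:

```python
# The 2x2 intentions-by-outcomes design: each cell pairs an intention
# valence with an outcome valence. Labels are illustrative shorthand.
conditions = {
    ("positive", "positive"): "successful help",
    ("positive", "negative"): "accident (good intent, bad outcome)",
    ("negative", "positive"): "failed attempt (bad intent, no harm)",
    ("negative", "negative"): "successful harm",
}

for (intent, outcome), label in conditions.items():
    print(f"intent={intent:<8} outcome={outcome:<8} -> {label}")
```

The two off-diagonal cells (accident and failed attempt) are the theoretically interesting ones, since only there do the outcome-based and intent-based evaluations pull in opposite directions.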
As argued above, more recent elaborations of the dual-process approach are based on a tripartite view of the mind, with the automatic part, known as Type 1 processes, divided into a “grab-bag” of innate, maturational responses that are always automatic (mostly designed to direct attention, though also including some basic emotional programs, such as the fear response to a looming physical threat), and a portfolio of culturally and experientially acquired skills that are consciously learned but then become automatic through practice, such as writing or riding a bicycle (Evans &


Stanovich, 2013; Stanovich et al., 2011; cf. McCauley’s, 2011, distinction between “maturational” and “practiced” behavior). Type 2 processes, as we have seen, load heavily on executive functioning (in particular, on working memory), override Type 1 processes, and are divided into an algorithmic mind that executes processes of symbolic reasoning, and a reflective mind that decides when to deploy such processes.

Applying this model to the outcome-to-intent shift, it could be that evaluation of negative outcomes involves a Type 1 process that analyzes causality (a simple negative emotional reaction leading to abstract moral judgement and concrete advocacy of punishment). Evaluation of failed negative intentions might involve a separate Type 1 process responsible for mindreading, but one that leads to a similarly negative emotional reaction (because of the imagined negative outcome), and again, to advocacy of punishment. The key age difference comes with the evaluation of failed positive intentions (accidental harms). For younger children, depending on the details of the experimental paradigm, either one of these competing Type 1 processes (outcome evaluation or mindreading) may be activated. This is normally not the case for older children and adults, who tend to react more forgivingly to accidents, based seemingly on Type 2 processes of mental simulation (in “developed”, large-scale societies at least; we return to possible cultural influences on these processes later in the chapter).

What then is responsible for this developmental change? In the next section we consider the Type 2, “reflective mind” process of cognitive decoupling, which may be responsible for mediating or arbitrating in some way between the two competing Type 1 processes, eventually leading to a shift in emphasis on intent over outcome.
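The account just sketched can be made concrete as a toy decision procedure. Everything here is a hypothetical sketch, not a fitted model: the two Type 1 evaluations are reduced to Boolean signals, and the strength of Type 2 decoupling (which on this account grows with age) determines whether the intent signal overrides the outcome signal when the two conflict.

```python
# Toy sketch (hypothetical parameters) of the dual-process account of the
# outcome-to-intent shift: two automatic (Type 1) evaluations, plus a
# Type 2 "cognitive decoupling" step that, when strong enough, suspends
# the outcome reaction and judges by intent instead.

def moral_judgement(intent: str, outcome: str, decoupling: float) -> str:
    """Return 'bad' or 'not bad'; decoupling in [0, 1] grows with age."""
    type1_outcome = (outcome == "harm")    # causal/outcome heuristic fires
    type1_intent = (intent == "negative")  # mindreading heuristic fires
    if type1_outcome == type1_intent:      # no conflict: both signals agree
        return "bad" if type1_outcome else "not bad"
    # Conflict (accident or failed attempt): with enough decoupling, the
    # outcome signal is suspended and intent decides.
    if decoupling >= 0.5:
        return "bad" if type1_intent else "not bad"
    # Otherwise either negative Type 1 reaction suffices for condemnation,
    # so a failed bad attempt is still judged harshly.
    return "bad" if (type1_outcome or type1_intent) else "not bad"

# Accidental harm (positive intent, harmful outcome):
print(moral_judgement("positive", "harm", decoupling=0.2))  # young child: bad
print(moral_judgement("positive", "harm", decoupling=0.8))  # older child: not bad
# Failed attempt (negative intent, no harm) is condemned at all ages:
print(moral_judgement("negative", "none", decoupling=0.2))  # bad
```

Note that the sketch reproduces the asymmetry discussed in Sect. 2.3: failed negative attempts are condemned at every decoupling level, while accidents are exonerated only once decoupling is strong enough, which is the developmental change the chapter seeks to explain.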

2  Dual-Process Theories, Cognitive Decoupling and the Outcome-to-Intent Shift…

2.4  Cognitive Decoupling and the Articulation of Divergent Intuitions

While there may be separate Type 1 heuristics that handle outcomes and intentions, what really needs explaining is how, during development, these heuristics become articulated together, so that in the case of accidents, information about good intentions overrides information about bad outcomes. According to recent dual-process theories, the overriding of intuitive, Type 1 reactions by Type 2 responses can be explained by the operation of a reflective Type 2 process called "cognitive decoupling" (Pennycook, Fugelsang, & Koehler, 2015; Stanovich, 2011; Stanovich & West, 2008). In simple terms, this is the ability to block out emotionally stimulating (i.e., Type 1 process triggering) information and focus on the application of formal rules: "Decoupling occurs when the automatic input-output connections of the brain are suspended while cognitive simulations of hypotheticals are carried out" (Toplak, West, & Stanovich, 2014, p. 1045). An everyday example would be applying simple algebra to calculate contributions to a restaurant bill when the adult members of an extended family have different numbers of dependents (Kahneman, 2011, reviewed many examples of how the application of formal rules in this type of situation can be impaired by various kinds of biases). The extent to which this process is deployed varies between individuals, which helps to explain individual differences in susceptibility to biases. Moreover, a cognitive developmental study found that the application of cognitive decoupling increased with children's age, as well as being correlated with their intellectual and executive-functioning abilities. (This was measured by performance on a variety of cognitive bias tasks – denominator neglect, belief bias, base rate sensitivity, resistance to framing, and "otherside thinking" – where participants had to override a Type 1 response with Type 2 thinking in order to find the correct answer; Toplak et al., 2014.) Therefore, a kind of cognitive decoupling could be responsible for children's tendency to focus more on intentions in moral reasoning as they get older, especially in situations involving an accidental transgression. The idea is that children might learn to block out the information from the Type 1 negative evaluation of outcomes, applying a process of cognitive decoupling to focus on excusing the character's accidental action because of their positive or neutral intent.

Cognitive decoupling can help explain some of the key results from the literature on the outcome-to-intent shift reviewed above. In particular, we want to highlight the findings that (a) the outcome-to-intent shift may not occur at all – or may occur earlier – when children are asked to evaluate agents rather than actions (Moreno-Romero & Ingram, 2019; Nobes et al., 2016); and that (b) it seems to occur later when analyzed using more behavioral methods, such as asking children how much punishment they think an agent deserves (Cushman et al., 2013), or how many reward points they want to assign them (Moreno-Romero & Ingram, 2019). Both of these findings can be explained using the idea of cognitive decoupling.
With (a), when judging actions rather than agents, children need to engage in cognitive decoupling in order to "forgive" an accident, overriding the negative effects that they witness in order to take the positive or neutral intentions into account. In contrast, cognitive decoupling as a Type 2 process may not always "need" to be triggered when judging an agent, since when children are asked questions about an agent they may focus automatically on the agent's intentions, and thus their long-term potential for other bad behavior, leading to more forgiving judgements about accidents in particular. This may depend on the particular characteristics of the task: for instance, cognitive decoupling could still be needed for designs like that of Piaget (1932), discussed above, where the direct comparison in the extent of the damage might have made the outcome more salient, explaining why he found an outcome-to-intent shift even though he asked children to judge an agent.

The same sort of idea may apply to finding (b): when making abstract judgements, attention is drawn to abstract concepts that are long-term indicators of character, such as intentions; the valence of the results is not so salient, and thus does not have to be suppressed through cognitive decoupling. On the other hand, when deciding on a behavioral response, principles of reciprocity may lead to an impulse to respond "in kind" to the bad outcome of the observed action, and these may have to be overridden by Type 2 processes in order to treat an accident more favorably than an action with a negative intention behind it. What we see, then, is a move from two
competing Type 1 processes being triggered by the characteristics of the situation (the way the experiment is set up) to the operation of a Type 2 process that selectively overrides the outcome-responsive process (in the case of accidents) in favor of the intent-responsive process, guided by algorithmic considerations like norm-compliance and consistency.

This analysis is not far from other recent theoretical accounts of the outcome-to-intent shift (reviewed by Margoni & Surian, 2016), which emphasized that only after developing complex executive functioning (EF) do children treat accidents as less blameworthy in verbal tasks. Such an explanation is needed because of the evidence that from an early age, accidents may be treated more favorably than failed negative attempts in preverbal, "partner-choice" tasks, where a child had to choose between two toys to play with, each of which was presented as causing different types of accidental or intentional, good or bad outcomes (Van de Vondervoort & Hamlin, 2018; Woo, Steckler, Le, & Hamlin, 2017). In common with many accounts of the developmental differences between preverbal and verbal false-belief-task performance (reviewed by Apperly, 2012), one tactic has been to say that two- and three-year-old children "pass" preverbal partner-choice tasks but "fail" verbal judgements because of the higher executive functioning demands imposed by the latter type of task. The problem with invoking EF demand in general as an explanation for such developmental changes, as opposed to cognitive decoupling in particular, is that it is hard for general EF accounts to distinguish between different framings of the experimental tasks involved. For example, it is difficult to use developments in EF to explain the finding that children are more tolerant of accidents earlier when asked to judge the agent rather than the action (as seen in the discussion above of the results of Nobes et al., 2016).
What is it about asking children to judge an action that would impose more of a demand on EF? Might we not expect judgements of the agent behind the action to be more demanding? However, this problem does not mean that explanations in terms of EF cannot be reconciled with dual-process accounts. A possible way forward is suggested by the work of Carlson, Davis, and Leach (2005), who divided EF into "hot" and "cold" functions (Metcalfe & Mischel, 1999), corresponding in our terminology to Type 1 and Type 2 processes. Using a task originally designed for chimpanzees (Boysen & Berntson, 1995), Carlson and colleagues showed that children were better at solving "Less Is More" problems (which involve asking for a smaller set of rewards in order to get another, larger set) when they had to request abstract symbols (such as Arabic numerals) representing the rewards than when the request was concretely for the rewards themselves. Symbolic distancing may thus be conceptually related to cognitive decoupling, in that it is necessary to invoke an explicit Type 2 process in order to override the impulse to ask for the bigger reward. It seems, moreover, that cognitive decoupling may be easier when thinking through symbols, engaging "cold" EF, than when thinking about concrete objects (food rewards, in many cases) and having to engage "hot" EF to override an automatic Type 1 response.

However, what is not often addressed in studies of executive functioning is why we have this kind of Type 2 process that can override Type 1 cognitive biases in the
first place, and why it may be triggered more easily in certain "symbolically distant" tasks than in others (Stanovich, 2009). After all, visual symbols were presumably not present in adaptively relevant environments (Dehaene, Cohen, Morais, & Kolinsky, 2015). Perhaps, as well as being triggered by symbols, cognitive decoupling may be easier when one is thinking about the intentions and motivations behind people's actions than when one is thinking only about the actions themselves – as in the case of the outcome-to-intent shift. Looked at in evolutionary terms, the application of such thinking to control one's own actions – inhibiting the impulse to punish an action that had an unwelcome result but no negative intention behind it – could arise from the sort of probabilistic learning process described by Bjorklund (2015). Recall that Bjorklund argued that instinctive responses (Type 1, in our terminology) do not emerge fully formed in babies, but instead depend on a flexible process of interaction with the environment (in the case of humans, the social and cultural environment). Young children could find that other people react differently if they try to punish accidental actions that led to harm than if they punish actions with negative intentions.

To see how this would work, it is helpful once again to compare reactions to accidents with reactions to failed negative intentions. In our study of the four different possible conditions of positive/negative outcomes/intentions (Moreno-Romero & Ingram, 2019), perpetrators of accidents were rated less favorably than people who successfully fulfilled positive intentions. In contrast, people who failed to fulfil their negative intentions were evaluated as just as bad as people who successfully fulfilled negative intentions.
This could be because of an evolved negativity bias which leads children to be attentive to possible examples of negativity, regardless of whether these are witnessed in the form of outcomes or intentions. However, the necessity of finding good cooperative partners might also have selected for an ability to spot positive intentions, explaining the results of Hamlin and colleagues, discussed earlier (Hamlin, 2013; Van de Vondervoort & Hamlin, 2018), that children are tolerant of accidents in preverbal "partner choice" tasks. Parallel statistical learning processes about the responses to outcomes and intentions would lead to parallel "calibrations" of these Type 1 responses depending on the social environment (reinforcing negative judgements of negative intentions, and positive judgements of positive intentions). Cognitive decoupling might then come into play when children are later asked to make verbal responses. The initial tendency might simply be to give a negative verbal reaction to any negative intention or outcome, due to negativity bias. But processes of calibration might encourage children to modify their Type 1 responses, through Type 2 cognitive decoupling, to make them more consistent with the combination of results for Type 1 processes described for Hamlin and colleagues' preverbal tasks. This process might be easier when judging agents, given that judgements of agents trigger reflective thinking more readily than judgements of actions.

2.5  Cultural Variation in the Weight Placed on Intentions

Viewing cognitive decoupling as tied to an evolved probabilistic learning process also helps to explain why this process can have different outcomes in different cultures. Similarly to the way in which there are cultural differences in the seriousness ascribed to conventional violations (so that in some societies or groups, breaking a conventional rule can be regarded as just as bad as inflicting personal harm on someone; Haidt, Koller, & Dias, 1993), there are cultural differences in the rules that different groups follow to make moral judgments according to outcomes or intentions. One landmark study of ten different societies (eight small-scale, two large-scale) presented adult participants with hypothetical scenarios containing different transgressions (e.g., physical harm, damaging the wellbeing of a group, or a food taboo violation), while systematically varying the intentionality of the perpetrator. The results demonstrated a great deal of cross-cultural variation in the extent to which people assigned importance to intentions, with both rural European and urban North American participants seeing them as critically important to the evaluation of accidents, but other cultures giving them less weight (Barrett et al., 2016). One of the two societies that were least inclined to see intentions as important was the Yasawa culture from Fiji, in the Pacific Islander cultural area. This region was already known in the anthropological literature for its "opacity of mind" norms, which in many contexts prohibit verbal speculation about the internal reasons why other people made particular decisions (Robbins & Rumsey, 2008).
Thus it may be that in WEIRD (Henrich, Heine, & Norenzayan, 2010), "mind-minded" (Hughes, Devine, & Wang, 2018) cultures, children are reminded to think about – and potentially forgive – accidents from an early age, triggering a routine learned response when they are faced with analyzing an accident in an experiment. In cultures with "opacity of mind" norms they do not receive such feedback, since other people's intentions are not often publicly discussed; thus, even adults in these cultures have a tendency to react to accidents with judgements that happen to be similar to children's intuitive, Type 1 judgements.

An alternative possibility, suggested by Saxe (2016), is that while Yasawa adults' public evaluations of the experimental scenarios were congruent with their explicit norms about appropriate public discourse, they diverged from their private cognitions, which implicitly processed the characters' intentions (cf. Astuti, 2015). However, another recent cross-cultural study (using a similar methodology to Barrett et al., 2016) showed that even Yasawan adults could be primed to focus on intentions – by asking them to list thoughts that God would or would not like them to think – which led them to be more forgiving of accidents (McNamara, Willard, Norenzayan, & Henrich, 2019). It seems unlikely that mere priming about "thoughts" was enough to induce participants to abandon their norms against taking others' intentions into consideration and reveal their private analysis of the experimental vignettes: more likely, it prompted them to consider an aspect of the situations in the vignettes that they had been overlooking before. The Yasawa do not excuse accidents automatically: their habitual response is to continue condemning them, because this has rarely been overridden by explicit
instruction. However, they can still be induced to react using a Type 2 process by an experimental prime. This fits with the theoretical conception of cognitive decoupling as a resource that is available to everyone in every situation, whenever we want to look at a problem from a different angle. The circumstances that trigger decoupling may include standard routines ingrained within a culture, situational primes, or even original reflections by individuals.

An outline of our model for cultures without opacity of mind norms is summarized in Fig. 2.2, which attempts to conceptualize the Type 1/2 typology in an integrated way, rather than simply referring to two different types of processes. Here they are modeled in terms of two different routes of learning: a Type 1 route that uses Bjorklund's (2015) idea of innate attentional biases associated with evolved probabilistic learning mechanisms, and a Type 2 route that uses Stanovich's (2011) distinction between the reflective mind and the algorithmic mind (compare Fig. 2.1). Children begin social learning about moral actions with a set of innate attentional biases, which cause them to focus on social actions in the first place, but also to pay more attention to negatively valenced actions than to positively valenced actions. Through probabilistic learning they naturally pick up on people's intentions, learning to associate them with outcomes of the equivalent valence. Because the resulting judgements are dependent on attentional focus, they are sensitive to experimental framing: when asked at an early age if an action is wrong, children may focus on the outcome, whereas when asked if an agent is bad, some (but not all) children may focus on the intention earlier on (this may also be the case in partner-choice paradigms).
The outcome-to-intent shift thus appears specifically when children are asked about actions, or when behavioral measures are used, as they grow older and start to use the Type 2 learning route, at least in WEIRD cultural contexts. This involves engaging in cognitive decoupling (perhaps encouraged by the cognitive dissonance between their intention-based processing and their outcome-based processing) to reflect on an agent's motives even though they have been asked whether the action was right or wrong. They then apply normative reasoning to come up with a response based on culturally learned moral reasoning algorithms (for example, excusing an accident if a person was not negligent, but not excusing it if they were perceived as negligent; Nobes, Panagiotaki, & Engelhardt, 2017). Finally, this analysis of and response to accidents can become automatized as a Type 1 learned response through practice and reinforcement over time, perhaps in a process analogous to the probabilistic learning responsible for the automatization of Type 1 responses. However, in some non-WEIRD cultures such as the Yasawa of Fiji, this Type 2 route may be much less common and will not usually become automatic, either because children are not prompted to think so much about intentions, or because they do not have the normative reasoning about accidents readily available to come up with an algorithmic response. The former interpretation would seem to be supported by the fact that excusing accidents could be encouraged by priming about "thoughts" in McNamara et al.'s (2019) experiment. Thus, cultural differences in the weight given to outcome versus intent likely do not represent differences in capacities so much as differences in experience with particular practices and reasoning styles.

Fig. 2.2  Model of two routes of learning about moral actions

2.6  Discussion

We believe that this model is an advance on previous theoretical conceptions of the outcome-to-intent shift. For example, Cushman et al.'s (2013) account is underspecified in two main ways. First, they introduced the idea of a "bad act detector" to explain why children are sensitive to negative intentions from an early age, without explaining why this would be triggered by negative intentions as well as negative outcomes, when there was no analogous detector for "good acts" activated by positive intentions. That does not fit with the evidence from Hamlin's (2013) study that 8-month-old children are capable of taking good intentions into account in partner choice paradigms. This pattern of results is easily integrated into our model by assuming that being asked to choose a partner may prime children to think more about actors' intentions than about the rightness or wrongness of the act itself. Similarly, the notion that only "bad acts" are automatically linked to intentions does not square with the evidence of Nobes et al. (2016) that children evaluate intentions earlier when asked about agents than when asked about actions.2

2  In fairness, of course, Cushman et al. (2013) would not necessarily have been able to read about these two studies before writing up their own.

The latter study, using explicit judgements rather than partner choice as an outcome measure, points to the second area in which Cushman and colleagues' model is underspecified. They explained the change associated with the outcome-to-intent shift as resulting from a "conceptual reorganization within the system of moral reasoning itself; the emergence of an intent-based process for judging moral wrongs" (Cushman et al., 2013, p. 14), yet did not explain why this would apply earlier to judgements of negative compared to positive intentions, and to judgements of agents compared to actions. In our model, in contrast, this change is conceptualized more as a capability for cognitive decoupling that allows children to step back from focusing on the outcome when evaluating the morality of an action. This explains both why they are capable of preferring accidental to failed intentional violators at younger ages (which again does not fit with the notion of an "internal" moral reorganization at a later age), and why, in certain cultures such as the Yasawa that do not draw children's attention to mental states, they continue judging accidents harshly into adulthood.

A different theoretical account of the outcome-to-intent shift has focused on developments in executive functioning (in common with many explanations of the development of false-belief understanding). According to Margoni and Surian (2016, p. 3): "The additional processing demands of the elicited-response [explicit judgement] tasks compared to the spontaneous-response [partner choice] task used in the infant literature, lead kindergartners to produce outcome-based evaluations." However, this explanation begs the question, since they do not explain why outcome-based evaluations would be less demanding on three-year-olds' executive functioning than intent-based evaluations, seeing as eight-month-old infants are capable of both. And, as with Cushman et al.'s (2013) account, this explanation does not tally with the more recent findings either from Nobes et al. (2016) that accidents are excused much earlier when judging agents than when judging actions, or from Barrett et al. (2016) and McNamara et al. (2019) that adults often fail to take intentions into account in some non-WEIRD societies. Conceptualizing a "failure" to show an outcome-to-intent shift in those societies as the result of a "deficiency" in executive functioning would be unpalatable (see Kidd, Palmeri, & Aslin, 2013, for a related critique of the universality of the "marshmallow task" as a measure of executive functioning across different cultural environments).
Instead, reconceptualizing the broad category of executive functioning in terms of "cognitive decoupling" or "symbolic distancing" – clearly related to executive functioning, but much more focused in their reference – allows us to understand how different cultural and normative environments (not to mention different experimental framings) might encourage people to a greater or lesser extent to "take a step back" from particular problems and consider them in a detached, abstract way.

As a high-level model of social cognitive development, the dual-process account elaborated in this chapter is not particularly easy to test. Nevertheless, we believe that there are two main ways in which it could be examined empirically. Firstly, since excusing accidents is not universal, even among WEIRD children, microgenetic longitudinal studies of development could test whether there is developmental continuity between participants who tend to excuse accidents as infants (when framed in terms of partner choice) and those who excuse them as older children performing explicit judgement tasks, after the outcome-to-intent shift has taken place. "Continuity" accounts that stress executive functioning (e.g., Margoni & Surian, 2016) would predict that the same children should be more likely to excuse accidents in both paradigms. In contrast, our dual-process account would predict that these might be different individuals, since infants excuse accidents using the Type 1 route (meaning that those infants more susceptible to framing effects should be more likely to excuse them), and older children excuse them using the Type 2 route (meaning that those children who engage in more cognitive
decoupling should be more likely to excuse them). This could be tested along with appropriate covariates (framing sensitivity and executive-functioning measures of cognitive decoupling, respectively). The predictive value of the model could also be tested on adults by putting them under high cognitive load while judging accidents and failed intentional violations. Our model would predict that when under cognitive load, even WEIRD adults should often revert to more childlike, outcome-based Type 1 judgements, at least in novel situations (unless they have a lot of experience with automatically judging accidents), since cognitive decoupling happens less frequently under stress. This is not a prediction of models which explain the outcome-to-intent shift in terms either of a conceptual reorganization in the moral domain, or the development of new executive-functioning skills.

2.7  Conclusion

One of our main aims in this chapter has been to show, with a developmental example, how Type 1 and Type 2 processes are not always opposed to each other but can interact in interesting ways in development. Recent empirical work shows that the textbook idea of the outcome-to-intent shift as a developmental change from exclusively using outcomes in moral judgements to exclusively using intentions is incorrect. Children take both outcomes and intentions into account from an early age, and gradually learn to integrate them in their verbal judgements and controlled behavioral responses. What has rarely been explained in the literature is the contradiction between the classic Piagetian studies showing an absence of consideration of intentions in young children under certain experimental paradigms, and the newer results showing that they do take intentions into account under other paradigms. Our contribution here has been to map out a way of reconciling those divergent sets of results, with a dual-process model in which the Type 2 process of cognitive decoupling overrides one Type 1 response in favor of another. The way in which this happens may be analogous to Railton's (2014) idea of attunement. Just as his lawyer was unsatisfied with the jury's response to her habitual, formal style of argumentation, and engaged her reflective mind to override that in favor of a more intuitive approach, so children who are faced with normative pressure to forgive accidents may override an intuitive outcome-based response in favor of an equally intuitive, but situationally weaker, intent-based response. Comparing those two examples also underlines that Type 1 processes are not always innate: in fact, as Bjorklund's (2015) idea of evolved probabilistic learning mechanisms suggests, instincts are always somewhat flexible and sensitive to experience.
Applying this idea to evolutionary ethics is valuable because it can help us get away from the assumption that automatic, Type 1 moral reactions are “irrational” relics of an evolved past, and necessarily inferior to more reflective, Type 2 moral responses (see Pennycook et al., 2018). In reality, moral dilemmas such as weighing outcome against intent probably set off conflicting Type 1 intuitions. There is individual, situational and cultural variation in which of these intuitions wins out, and
some of that variation may be caused by differences in the tendency to use Type 2 processes to inhibit an automatic response and engage in formal, algorithmic reasoning. Nevertheless, these Type 2 processes also have an evolutionary rationale, in terms of helping us find a socially acceptable response in the context of the normative environment in which we find ourselves.

References

Apperly, I. A. (2012). What is "theory of mind"? Concepts, cognitive processes and individual differences. Quarterly Journal of Experimental Psychology, 65, 825–839.
Astuti, R. (2015). Implicit and explicit theory of mind. Anthropology of This Century, 13, 636–650.
Barrett, H. C., Bolyanatz, A., Crittenden, A. N., Fessler, D. M., Fitzpatrick, S., Gurven, M., et al. (2016). Small-scale societies exhibit fundamental variation in the role of intentions in moral judgment. Proceedings of the National Academy of Sciences, 113, 4688–4693.
Barrouillet, P. (2011). Dual-process theories of reasoning: The test of development. Developmental Review, 31, 151–179.
Baylor, A. L. (2001). A U-shaped model for the development of intuition by level of expertise. New Ideas in Psychology, 19, 237–244.
Bélanger, N. D., & Desrochers, S. (2001). Can 6-month-old infants process causality in different types of causal events? British Journal of Developmental Psychology, 19, 11–21.
Berg-Cross, L. G. (1975). Intentionality, degree of damage, and moral judgments. Child Development, 46, 970–974.
Bjorklund, D. F. (2015). Developing adaptations. Developmental Review, 38, 13–35.
Bjorklund, D. F., & Pellegrini, A. D. (2000). Child development and evolutionary psychology. Child Development, 71, 1687–1708.
Bjorklund, D. F., & Pellegrini, A. D. (2002). The origins of human nature: Evolutionary developmental psychology. Washington, DC: American Psychological Association.
Boysen, S. T., & Berntson, G. G. (1995). Responses to quantity: Perceptual versus cognitive mechanisms in chimpanzees (Pan troglodytes). Journal of Experimental Psychology: Animal Behavior Processes, 21, 82–86.
Brakefield, P. M. (2006). Evo-devo and constraints on selection. Trends in Ecology and Evolution, 21, 362–368.
Carlson, S. M., Davis, A. C., & Leach, J. G. (2005). Less is more: Executive function and symbolic representation in preschool children. Psychological Science, 16, 609–616.
Carroll, S. B. (2005). Endless forms most beautiful: The new science of evo devo and the making of the animal kingdom. New York: Norton.
Chernyak, N., & Sobel, D. M. (2016). "But he didn't mean to do it": Preschoolers correct punishments imposed on accidental transgressors. Cognitive Development, 39, 13–20.
Cicchino, J. B., & Rakison, D. H. (2008). Producing and processing self-propelled motion in infancy. Developmental Psychology, 44, 1232–1241.
Cohen, L., & Amsel, G. (1998). Precursors to infants' perception of the causality of a simple event. Infant Behavior and Development, 21, 713–732.
Cushman, F. (2013). Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review, 17, 273–292.
Cushman, F., Sheketoff, R., Wharton, S., & Carey, S. (2013). The development of intent-based moral judgment. Cognition, 127, 6–21.
Dehaene, S., Cohen, L., Morais, J., & Kolinsky, R. (2015). Illiterate to literate: Behavioural and cerebral changes induced by reading acquisition. Nature Reviews Neuroscience, 16, 234–244.
Evans, J. S. B. (2011). Dual-process theories of reasoning: Contemporary issues and developmental applications. Developmental Review, 31, 86–102.
Evans, J. S. B. (2012). Dual process theories of deductive reasoning: Facts and fallacies. In K. J. Holyoak & R. G. Morrison (Eds.), The Oxford handbook of thinking and reasoning (pp. 115–133). New York: Oxford University Press.
Evans, J. S. B. (2020). Hypothetical thinking: Dual processes in reasoning and judgement (Classic ed.). New York: Psychology Press.
Evans, J. S. B., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8, 223–241.
Farroni, T., Massaccesi, S., Menon, E., & Johnson, M. H. (2007). Direct gaze modulates face recognition in young infants. Cognition, 102, 396–404.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., et al. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55–130.
Grosse Wiesmann, C., Friederici, A. D., Singer, T., & Steinbeis, N. (2017). Implicit and explicit false belief development in preschool children. Developmental Science, 20, e12445.
Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65, 613–628.
Hamlin, J. K. (2013). Failed attempts to help and harm: Intention versus outcome in preverbal infants' social evaluations. Cognition, 128, 451–474.
Hebble, P. W. (1971). The development of elementary school children's judgment of intent. Child Development, 42, 1203–1215.
Helwig, C. C., Zelazo, P. D., & Wilson, M. (2001). Children's judgments of psychological harm in normal and noncanonical situations. Child Development, 72, 66–81.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). Most people are not WEIRD. Nature, 466, 29.
Hughes, C., Devine, R. T., & Wang, Z. (2018). Does parental mind-mindedness account for cross-cultural differences in preschoolers' theory of mind? Child Development, 89(4), 1296–1310.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Karmiloff-Smith, A. (1992). Beyond modularity: A developmental perspective on cognitive science. Cambridge, MA: Bradford.
Keren, G., & Schul, Y. (2009). Two is not always better than one: A critical evaluation of two-system theories. Perspectives on Psychological Science, 4, 533–550.
Kidd, C., Palmeri, H., & Aslin, R. N. (2013). Rational snacking: Young children's decision-making on the marshmallow task is moderated by beliefs about environmental reliability. Cognition, 126, 109–114.
King, M. (1971). The development of some intention concepts in young children. Child Development, 1145–1152.
Kulke, L., Reiß, M., Krist, H., & Rakoczy, H. (2018). How robust are anticipatory looking measures of Theory of Mind? Replication attempts across the life span. Cognitive Development, 46, 97–111.
Kushnir, T., Xu, F., & Wellman, H. (2010). Young children use statistical sampling to infer the preferences of other people. Psychological Science, 21, 1134–1140.
Lickliter, R., & Honeycutt, H. (2013). A developmental evolutionary framework for psychology. Review of General Psychology, 17, 184–189.
LoBue, V., & Rakison, D. H. (2013). What we fear most: A developmental advantage for threat-relevant stimuli. Developmental Review, 33, 285–303.
Margoni, F., & Surian, L. (2016). Explaining the U-shaped development of intent-based moral judgments. Frontiers in Psychology, 7, 1–6.
McCauley, R. N. (2011). Why religion is natural and science is not. New York: Oxford University Press.
McNamara, R. A., Willard, A. K., Norenzayan, A., & Henrich, J. (2019). Weighing outcome vs intent across societies: How cultural models of mind shape moral reasoning. Cognition, 182, 95–108.
Melnikoff, D. E., & Bargh, J. A. (2018). The mythical number two. Trends in Cognitive Sciences, 22, 280–293.

2  Dual-Process Theories, Cognitive Decoupling and the Outcome-to-Intent Shift…


G. P. D. Ingram and C. Moreno-Romero


Chapter 3

Not So Hypocritical After All: Belief Revision Is Adaptive and Often Unnoticed

Neil Levy

Abstract  We are all apt to alter our beliefs and even our principles to suit the prevailing winds. Examples abound in public life (think of the politician who bases an election campaign on the need to address the budget emergency represented by a deficit, only to be indifferent to an even larger deficit once in office), but we are all subject to similar reversals. We often accuse one another of hypocrisy when these kinds of reversals occur. Sometimes the accusation is justified. In this paper, however, I will argue that in many such cases, we don’t manifest hypocrisy, even if our change of mind is not in response to new evidence. Marshalling evidence from psychology and evolutionary theory, I will suggest that we are designed to update our beliefs in response to social signals: as these signals change, we change our minds, often without even noticing.

Keywords  Belief · Belief update · Conformity bias · Cognitive science of religion · Cultural evolution · Epistemic vigilance · Hypocrisy · Rationality · Political psychology · Prestige bias · Social epistemology

N. Levy (*)
Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK
Department of Philosophy, Macquarie University, Sydney, NSW, Australia
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
J. De Smedt, H. De Cruz (eds.), Empirically Engaged Evolutionary Ethics, Synthese Library 437, https://doi.org/10.1007/978-3-030-68802-8_3

3.1  Introduction

Hypocrisy rouses a great deal of indignation. Quite why we hate hypocrisy so much is somewhat obscure: after all, hypocrites are usually at least half right. But the fascination, and the revulsion, are clear. This fascination is not limited to laypeople: psychologists, too, are fascinated by hypocrisy. Monin and Merritt (2012) argue that to a first approximation, social psychology just is “the science of moral hypocrisy” (167), a sentiment echoed by Graham, Meindl, Koleva, Iyer, and Johnson (2015: 158).1 They argue this designation is apt on the grounds that a great deal of work in social psychology is motivated by concerns arising out of the discovery that we have poor access to our own motives, on the one hand, and a desire to make a good impression, on the other. We are therefore apt to attribute flattering motives to ourselves, regardless of our real motives.

Hypocrisy is certainly a genuine phenomenon. In this paper, however, I am going to argue that there is much less hypocrisy around than we tend to think. It is obviously true that people often act in ways that are at variance with their professed moral standards (surely we don’t need psychology to show us that). Think of the ‘family values’ politician who is secretly having an extra-marital affair, or the pastor who denounces homosexuality from the pulpit but has a taste for male escorts. It is not this kind of blatant hypocrisy that is my concern here. Rather, I’m concerned with another kind of apparent mismatch between agents’ (professed) values and their behavior.

Blatantly hypocritical actions like these manifest what I will call strong hypocrisy. Strongly hypocritical actions are actions performed by an agent who recognizes, at t, that she is committed to a certain principle (because she has publicly espoused it; because she identifies with an institution that is strongly committed to it and has given no reason to think her commitment does not extend to that principle; and so on) and takes her concurrent action to be contrary to that principle. Strong hypocrisy, probably quite rightly, attracts our strongest opprobrium. Since people are motivated to dissimulate such hypocrisy, it is hard to judge how common strong hypocrisy actually is, but it certainly occurs. But not all behavior that we might be tempted to call hypocritical is duplicitous: that is, in some cases of such behavior, the agent does not recognize that she is acting contrary to a principle to which she is committed.
1  It is moral hypocrisy that is the focus here. Formally, hypocrisy can occur in any domain that is governed by norms: the domain of aesthetics, knowledge, prudence, and so on. But hypocrisy seems to attract opprobrium only in the moral and prudential domains, and even in the latter perhaps only insofar as it is seen as a kind of moral hypocrisy to offer advice one does not abide by oneself.

Moderately hypocritical behavior may occur when the agent does not know that her action is contrary to her principles, but should have known. “Should have known”, here, is to be understood epistemically: given the evidence available to the agent, only departures from rational cognition could explain her failure to see the conflict between her behavior and her principles. Perhaps such agents do not detect the conflict because they are self-deceived, or because they engage in motivated cognition. For one reason or another, their cognitive processes go awry in a way that prevents them recognizing that their behavior is contrary to their own principles.

In addition to strongly and moderately hypocritical behavior, however, there is a third kind of behavior that might qualify as hypocritical. Like moderately hypocritical behavior, weakly hypocritical behavior occurs when an agent acts contrary to principles that she has espoused or to which she has been committed, without her recognizing that she is acting contrary to those principles. But — if my argument

in this paper is correct — her behavior is not moderately hypocritical, because it is not the case that she should know that her behavior is hypocritical. Rather, I will suggest, her behavior should be understood as arising out of belief and behavior update mechanisms operating as they are designed to. To that extent, her behavior is itself appropriate. Perhaps, indeed, we should refrain from calling behavior like this hypocritical at all, not even weakly hypocritical. Perhaps we should reserve the label ‘hypocrisy’ for behavior that rightly attracts moral opprobrium, in virtue of how it comes about and how it stands in relation to the agent’s principles. I take no stand on this question, except to note that there is evidence that ordinary people would describe even weakly hypocritical behavior as, precisely, hypocritical (see Alicke, Gordon, & Rose, 2013). The labels we use matter, but I want to set that issue aside now, in favor of developing an account of the processes that underlie these behaviors.

Instances of weakly hypocritical behavior are neither rare nor trivial. Recent US politics is a goldmine of possible examples. Think of evangelical support for Donald Trump, for instance, or the way in which a number of high-profile Republicans condemned him, even saying he was unfit for office, only to express support subsequently, often without acknowledging that they had changed their view. Lindsey Graham provides an excellent illustration. In February 2016, he described Trump as a “kook” who is “unfit for office”. In November 2017, he expressed concern at the way the media stooped to portraying the president “as some kind of kook not fit to be president” (Lopez, 2017). Graham does not seem to have acknowledged his change of mind, and it does not seem to have occurred in response to new evidence of Trump’s fitness.

For a second example, consider the apparently self-serving abandonment of strongly held principles by evangelicals. In 2011, white evangelicals were the most likely of all groups to express support for the claim that personal immorality renders a person unfit for elected office. By 2016, they were the least likely (Kurtzleben, 2016). This dramatic change in principle is likely driven by the fact that the candidate they supported seems to be personally immoral. A number of high-profile evangelical leaders who condemned Bill Clinton, arguing that his personal immorality disqualified him from high office, are strongly supportive of Trump. Some of these same evangelical leaders were signatories to a 1998 letter condemning Bill Clinton on character grounds, and denouncing those who thought that character should be downplayed in favor of attention to policy. We therefore cannot absolve them of hypocrisy on the basis that their support for Trump is really for his policies and despite his character (Miller, 2019). Of course, the phenomenon isn’t confined to the right wing of politics.

In section one, I will briefly sketch the best-known experimental evidence from social psychology purporting to reveal widespread hypocrisy. I argue that the standard paradigm does not probe hypocrisy at all and (therefore) does not show that we are routinely hypocritical. In section two, I situate the discussion of hypocrisy within an evolutionary framework. Well-supported evolutionary models predict that a range of organisms will be willing to pay costs to act morally. Humans cooperate to an even greater degree than these models seem to predict, however. Following a number of other thinkers, I will suggest that our hypercooperativity is made possible


by developments in cultural evolution. These same developments render us vulnerable to the shifts in behavior that appear hypocritical, I will suggest. In the third section, I will identify mechanisms for the cultural scaffolding of belief. In the fourth section, I provide evidence that our non-scaffolded beliefs are much less rich in content than we tend to think; because they are so impoverished, scaffolded beliefs may shift from time to time without our noticing this fact, as I show in the fifth section. I conclude with some brief reflections on the import of the hypothesis for the rationality of beliefs and for the significance of (apparent) hypocrisy.

3.2  Hypocrisy in Social Psychology

In a series of papers, Batson and colleagues used a dilemma task to assess hypocrisy (e.g. Batson & Thompson, 2001; Batson, Thompson, & Chen, 2002; Batson, Thompson, Seuferling, Whitney, & Strongman, 1999).2 In this paradigm, participants are brought into the lab and told they are to perform one of two tasks. One task is clearly more desirable than the other (the relative desirability of the tasks varies across experiments). The participant has the choice of task; the unchosen task will be assigned to another participant, who (the first participant is told) will not know that the assignment was made by a fellow participant. Most participants assign the positive task to themselves, though in follow-up questioning they deny that their action was the moral choice. This simple design yields what Batson and collaborators take to be a measure of hypocrisy: the gap between participants’ assessment of the moral thing to do and their actual behavior.

To this simple task, Batson and colleagues added an interesting twist. They told participants that most people said that the fairest way to assign tasks in these circumstances was by flipping a coin, and provided the participants with a coin that they could use if they wished. About half the participants flipped the coin, with the great majority of the remainder assigning themselves the positive task (though a majority agreed that either assigning the other participant the positive task or flipping the coin is the most moral course of action). The really interesting finding is this: a large majority of those who flip the coin also assign the positive task to themselves, despite the statistical unlikelihood that the (fair) coin favored them so often. These results are typically interpreted as demonstrating widespread hypocrisy.
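The “statistical unlikelihood” here is just binomial arithmetic, and a minimal sketch makes it vivid. The counts below are hypothetical illustrations, not figures reported by Batson and colleagues: suppose 20 participants each flip a fair coin and honestly follow its verdict, and ask how likely it is that at least 18 of the flips favor the flipper.

```python
from math import comb

def prob_at_least(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the probability that at
    least k of n fair coin flips come up in the flipper's favor."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical counts: 18 of 20 coin-flippers end up with the
# positive task. Under honest flipping this outcome is vanishingly
# rare, which is why the observed pattern suggests the flips were
# not actually respected.
print(f"{prob_at_least(20, 18):.6f}")
```

With these assumed counts the tail probability is roughly two in ten thousand, so an honest-flipping explanation of the observed pattern is untenable even at modest sample sizes.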
In fact, they have been described as providing evidence for “moral duplicity” (Graham et al., 2015), which seems roughly equivalent to strong hypocrisy.3 Discussions of these experiments seem universally to conclude that these data show that people want to look moral, rather than wanting actually to be moral. Performing the coin flip provides them the opportunity to look moral, but if the flip doesn’t favor them, they blithely ignore it. They act contrary to both their own expressed principles (as manifested by what they say would be most moral) and their revealed principles (as manifested by flipping the coin). But these data actually demonstrate only that people do not act in line with what they say would be morally best, and that’s not hypocritical at all (not unless they also claim that they are required to act in accord with the moral best). Very likely, they judge that assigning the less desirable task to themselves is supererogatory, and the supererogatory is not required of us.4 All of us knowingly fall short of the moral best: we all know that we could and should (in some sense of ‘should’) devote far more time and far more money to worthy causes, say (or to our family or our neighborhood, or whatever) than we actually do. Most of us will freely admit as much. We are not hypocritical for not doing so. We may fall short of the moral ideal, but we don’t think that we are required to live up to the moral ideal.5

Far from showing that most of us are hypocritical, there is a case for regarding this work as rather inspiring. When presented with the option of flipping the coin themselves or having the experimenter flip it (or, equivalently, flipping a binding coin), some participants chose the experimenter flip option (Batson, Tsang, & Thompson, 2000). That is, they chose to remove the option of ‘cheating’ on the flip. Other manipulations are even more powerful: for example, having participants face a mirror increased the percentage of those who assigned the positive task to the other participant after a coin flip to 50% (Batson et al., 1999).6 If I’m right that assigning the positive task to the other participant, or even taking the risk of so assigning, is supererogatory, these data indicate that a subset of participants can quite easily be induced to act in ways that exceed their own sense of what is required of them. Far from indicating widespread hypocrisy, this work seems to indicate that we’re no worse than morally mediocre and sometimes much better than that.

These experiments provide evidence that people fail to live up to their conceptions of how it is best to behave. But that falls short of hypocrisy, since most of us do not think there is any requirement to act as is best. However, we can’t explain many of the instances of hypocrisy mentioned at the beginning of this chapter by reference to the distinction between the supererogatory and the permissible. The pastor who rails against homosexuality only to be caught with a male escort can’t plausibly claim that he thought abstention from gay sex was morally best but not required. The Republican who shouts “Never Trump!” can’t plausibly claim to have meant “ideally not Trump” when subsequently falling into line behind the president. It is not supererogatory of Christians to abstain from what they see as sin. There seems to be a lot of hypocrisy about after all. Should we conclude, with the social psychologists, that we are routinely motivated to appear moral while avoiding the costs of being moral?

2  A caveat: these studies, like almost all the experimental (but not the correlational) work in social psychology at the time, used small samples and almost certainly report effect sizes that are exaggerated by the failure to report unsuccessful trials. That said, Dong, van Prooijen, and van Lange (2019) report three very much larger studies (one preregistered) that replicate these results.

3  Graham et al. define moral duplicity as “claiming moral motives to others, falsely”. It is not clear whether they require that the agent recognizes the falseness. Moral duplicity is also more encompassing than strong hypocrisy, because it includes agents who actually live up to their principles, if and when they are motivated by impression management rather than moral concern to do so.

4  Sie (2015) also argues that the participants in these experiments do not act wrongly. Rather, she argues, experimental demands lead them to wrongly think they are acting contrary to what morality requires of them. I think it is more plausible that while the experimental manipulations succeed in altering their conception of where exactly their own action falls on the continuum from forbidden to supererogatory, in all variants they see their action as morally permissible.

5  Lönnqvist, Irlenbusch, and Walkowitz (2014) explicitly address a question that bears on this interpretation: perhaps participants see themselves as already having ‘won’ the lottery by being given the opportunity to assign tasks to themselves and others, and therefore see themselves as having no obligation not to take the positive task for themselves. To test this possibility, they had independent raters assess their slightly different paradigm, a dictator game in which the participant could choose a 5/5 distribution, an 8/2 distribution, or to flip a coin. On a 9-point scale that collapsed across measures for “unfair–fair,” “immoral–moral” and “bad–virtuous,” with the midpoint of the scale marked “neutral”, the mean ratings were 8.09 (5/5), 3.56 (8/2) and 6.27 (coin flip). The experimenters claim that these data demonstrate that participants indeed see themselves as acting contrary to their own moral judgments. I disagree: I don’t think this is evidence that 8/2 is seen as selfish, in the sense required to establish hypocrisy, by the participants. The extreme end of these scales does not correspond with the required option; it corresponds with the best option. The midpoint is therefore halfway between the immoral (unfair; vicious) and the morally best.

6  Again, one should treat this result with caution in the light of the replication crisis and the small number of participants. A number of studies have reported that images of eyes increase prosocial behavior (e.g. Rigdon, Ishii, Watabe, & Kitayama, 2009), but recent meta-analyses suggest that the effect size is not significantly different from zero (Northover, Pedersen, Cohen, & Andrews, 2017). Perhaps, however, the mirror manipulation succeeds where images of eyes do not.

3.3  Evolutionary Considerations

An enormous amount of attention has been devoted to the so-called puzzle of altruism, with ‘altruism’ defined in terms of fitness. Prima facie, it is puzzling that organisms seem sometimes to be disposed to benefit others, given that they thereby increase those others’ fitness at a cost to themselves. Shouldn’t such altruism be selected against and therefore disappear? Yet altruism seems to be common, and by no means confined to human beings alone. A familiar response to this puzzle is to introduce the notion of inclusive fitness (Hamilton, 1964), by reference to which much of the behavior ceases to be puzzling. An organism can increase its inclusive fitness by benefiting others that share genes with it: hence altruism toward kin can be adaptive. This explains a host of behaviors across organisms, from ‘helpers at the nest’ — birds that assist in raising chicks that are not their own offspring (Browning, Patrick, Rollins, Griffith, & Russell, 2012) — through to human nepotism. Reciprocal altruism (Trivers, 1971), of course, explains a great deal more. Group selection, lately become respectable once more (Nowak, Tarnita, & Wilson, 2010), can also explain apparently altruistic behavior.

In humans, however, altruistic behavior toward kin and non-kin is (relatively often) more costly than can comfortably be explained by these models (Laland, 2017). Plausibly, we cooperate more than any other species, with the exception of

those with a very high degree of relatedness, such as the eusocial insects and the highly interbred naked mole rat (Henrich & Henrich, 2007). Such cooperation seems to be essential to our adaptive success: far more than any other species in our clade, we depend on cooperation for survival. Much of this reliance on cooperation emerged relatively recently, so an explanation in terms of changes in gene frequencies is unlikely. We have been heavily dependent on cooperation for the life of our species (and probably long before it emerged), but this dependence has increased greatly over the past 100,000 years. In particular, the temporal depth of cooperation has increased dramatically: reciprocation comes to be increasingly delayed, and increasingly indirect over that period. It becomes harder to keep track of who owes what to whom, and harder to assess how much each owes, because costs and benefits are often paid in currencies that are hard to measure against one another. Cooperation becomes less direct (Sterelny, 2014, 2016). Over the past 50,000 years or so, our cooperative activity has come increasingly to involve forgoing certain benefits now for delayed and uncertain benefits later. Given that we cannot keep track of the costs and benefits of this kind of behavior with any precision, the proximate motivation for this activity is likely to be altruistic. Though we remain attentive to signs that individuals are freeriding, we do not expect others to be able to show that they are always doing their fair share. More recently — over the past 12,000 years or so — the extent of human cooperation has increased yet again. Other primates are intolerant even of fellow group members. We, in contrast, have recently taken to living in large groups of conspecifics, many of whom are strangers to one another. Settlements of this size are recalcitrant to our earlier developing dispositions to cooperation. Our neighbors, let alone those who pass through our neighborhood, are not kin. 
They may not remain long enough to engage in reciprocal exchange with us. Worst of all, the sheer size of the settlement defeats any possibility of keeping track of their reputations. Under conditions like these, cooperation based on the indirect reciprocity characteristic of earlier ways of life, backed by monitoring of reputation, might be expected to unravel. That it does not indicates that we find other ways of stabilizing it.

Cultural evolution plays an increasingly large role in human history, and in explaining the outsized role cooperation plays in contemporary human life. An important part of the story may involve the development of prosocial religions with moralizing gods (Norenzayan, 2013; Norenzayan et al., 2016). Whereas the gods and supernatural agents of earlier religions tended to be more concerned with performance of the right rituals (Swanson, 1966), High Gods are concerned with the moral behavior of the faithful: in particular, their behavior toward in-group members. High Gods took over the monitoring that ordinary individuals, with their limited memories and even more limited perceptual faculties, cannot do. God sees all, as the Bible reminds us (“For human ways are under the eyes of the Lord, and he examines all their paths”; Proverbs 5:21). He sees not just what we do, but also our secret thoughts (“the Lord does not see as mortals see; they look on the outward appearance, but the Lord looks on the heart”; 1 Samuel 16:7). There is no escaping His gaze or the inexorability of his justice.
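To recap the kin-selection model invoked at the start of this section in symbols: Hamilton's rule, in its standard textbook formulation (the notation below is not taken from this chapter), says that a disposition to altruism can spread when the relatedness-discounted benefit to the recipient exceeds the cost to the actor.

```latex
% Hamilton's rule (standard formulation):
%   r : coefficient of relatedness between actor and recipient
%   b : fitness benefit conferred on the recipient
%   c : fitness cost borne by the actor
\[
  r\,b > c
\]
```

Since r is near zero for strangers, the rule makes vivid why cooperation with non-kin, of the sort just described, is the puzzling case that reputation monitoring and, later, moralizing gods are invoked to explain.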


In High Gods societies, our behavior and even our motives are under surveillance. We no longer seem to have the option of appearing moral without paying the costs of being moral. In the eyes of the gods, the only way to appear moral is to be moral. Of course, people continue to engage in immoral behavior — surveillance by the gods is no panacea — but the balance of evidence suggests that moralizing High Gods increase the degree to which people behave consistently with their professed moral norms.7 One might want to quibble whether this is genuinely altruism — I set the issue aside — but there is good theoretical reason to expect that people will tend to bring their behavior into line with the moral norms of their group, and good historical evidence that they actually do so.

Nevertheless, people continue to act contrary to their norms and to their professed principles. For all that the gods (or their secular successors) watch us or nudge us toward behaving consistently with our norms, we often fail to live up to them. Indeed, some of this (apparent) hypocrisy occurs in the domain of religion. Internet porn consumption is significantly lower in highly religious US states on Sundays, but not on other days of the week (Edelman, 2009). Muslim stallholders in Marrakesh who are offered the opportunity to share some or all of a small payment with a charity do so only in the aftermath of the call to prayer (Duhaime, 2015). Equally, we could cite the heightened disposition of Christians to bid in a charitable auction on Sundays but not when offered the same opportunity on other days of the week (Malhotra, 2008), or of Hindus to give more in a religious setting than when approached in a non-religious setting (Xygalatas, 2013). It seems that there is a sufficient gap for hypocrisy to occupy. There might be less of it than we feared, but it remains an obtrusive feature of the landscape.

In the remainder of this paper, I will argue that these kinds of shifts in behavior are explained — in important, but by no means exclusive, part — by precisely the same kinds of mechanisms that explain our disposition to hyper-cooperation. The mechanisms of cultural evolution that explain our cooperation also leave us vulnerable to these shifts. I will suggest, however, that attention to these mechanisms and to their functioning reveals that these shifts in behavior are instances neither of strong hypocrisy, nor of moderate hypocrisy. They are neither knowingly duplicitous nor are they driven by faulty reasoning. Instead, they result from our deploying rationally appropriate mechanisms in appropriate ways.
In the remainder of this paper, I will argue that these kinds of shifts in behavior are explained — in important, but by no means exclusive, part — by precisely the same kinds of mechanisms that explain our disposition to hyper-cooperation. The mechanisms of cultural evolution that explain our cooperation also leave us vulnerable to these shifts. I will suggest, however, that attention to these mechanisms and to their functioning reveals that these shifts in behavior are instances neither of strong hypocrisy, nor of moderate hypocrisy. They are neither knowingly duplicitous nor are they driven by faulty reasoning. Instead, they result from us deploying rationally appropriate mechanisms in appropriate ways.

7  Admittedly, there is a good deal of controversy over the High Gods theory, especially with regard to the claim that moralizing gods are needed for the emergence of complex societies (e.g. Whitehouse et al., 2019). But even its harshest critics accept that even if the theory does not explain the emergence of large-scale settlements, moralizing gods probably help to explain their stability over time (see Gray & Watts, 2017).

3  Not So Hypocritical After All: Belief Revision Is Adaptive and Often Unnoticed


3.4  Cultural Mechanisms for Belief Acquisition

Genetic and cultural evolution have made us organisms that are very sensitive to external cues for belief formation and update. The mechanisms of belief formation and update help explain how our behavior may switch from context to context without our realizing it. As the cues on which we rely for belief formation shift, our behavior (including our disposition to assert claims) changes too. Our disposition to revise our beliefs is explained by a suite of mechanisms, some of which are best understood as belief acquisition (rather than revision) mechanisms. For instance, we are pervasively dependent on social learning for belief formation. In large and complex societies, it is difficult to keep track of the prevailing norms. This is especially the case with regard to conventions. Conventions may often be good responses to problems, even though they are not uniquely good. In many situations, it may not matter much how we act, so long as enough of us agree to act like that. Conventions may also play the role of enabling ingroup members to identify one another, thereby allowing the mechanisms of group selection to work. For these reasons (and more), we are disposed to decide how best to act by reference to how others act, and to settle our beliefs by reference to what others believe. This results in a set of social learning biases responsive to the behavior and apparent beliefs of prestigious individuals, of the majority of individuals, of those with track records of reliability and of those who manifest cues of benevolence towards us. Thus, the prestige bias (Chudek, Heller, Birch, & Henrich, 2012; Henrich & Gil-White, 2001) leads us to imitate locally prestigious individuals. This is an adaptive strategy in a causally opaque environment, because prestige tends to be correlated with success.
Imitating prestigious individuals increases the chances of hitting upon the strategies that explain their success, without the imitator needing to be able to identify just why they are successful (perhaps it is their hunting technique, or the time at which they hunt, or the tools they use; perhaps it is something altogether different, like the prayers they say: if we don’t know, we might do best by imitating the lot). Similarly, imitating the majority behavior (the conformist bias) enables the individual to hit upon locally satisfactory behavior without paying the costs of lengthy exploration in an environment in which signals are often swamped by noise (Henrich & Boyd, 1998). Importantly, these are belief formation strategies as well as behavioral strategies (Levy & Alfano, 2020). ‘Believe what prestigious individuals profess to believe’, ‘believe what the majority professes to believe’ and ‘acquire the beliefs that seem to explain the behavior of such individuals’ are all good strategies when it is difficult to discern what to believe (of course, we don’t deploy these strategies when signals are easy to detect: we are selective and intelligent imitators of behavior; see Laland, 2017). These strategies are also indirect belief formation strategies: as behavioral strategies they entail beliefs (“this is the way to do things”). These social learning strategies interact with other belief formation strategies that have been extensively studied, such as our selective dispositions to acquire beliefs through testimony. Again, we are pervasively but selectively receptive to testimony. We filter out
testimony from out-group members, from those who have track records of unreliability and from those who have shown a lack of benevolence to us in the past (Harris, 2012; Levy, 2019; Mascaro & Sperber, 2009; Sperber, Clément, et al., 2010). To understand how these mechanisms of belief acquisition are also mechanisms for unwitting belief revision, we need to add one more piece to the puzzle: these same mechanisms leave us with relatively sparse internal representations. Because our internal representations are so sparse — because we are reliant on external cues (like the behavior of others) to scaffold or even partially constitute our beliefs — when these cues change we may change our dispositions to behave and to assert without noticing that we have revised our beliefs.

3.5  The Sparseness of Belief

If agents acquire beliefs in causally opaque worlds in important part by social referencing, then in many conditions they do not need to store detailed internal representations of these beliefs. They can employ the 007 principle: keep things on a need to know basis:

    evolved creatures will neither store nor process information in costly ways when they can use the structure of the environment and their operations upon it as a convenient stand-in for the information-processing operations concerned. That is, know only as much as you need to know to get the job done. (Clark, 1997: 46)

There are several reasons why cognition would rely on such a principle. One reason why it is adaptive to offload beliefs in conformity with the 007 principle is that minimizing internal representations cuts down on the need for expensive cognitive processing. Why go to all the trouble of storing things in the head when they can easily be accessed externally? Internal storage can be expensive and access costs high: much better to retrieve what we need to know when we need to know it. There is in fact good evidence that our internal models of the world are often much less rich than we think. We are subject to a fridge light illusion: we think our model of the world is rich and detailed because whenever we attend to any aspect of our environment, we retrieve a rich representation of that aspect (see Chater, 2018 for a lively presentation of the evidence). Thus, for instance, if text is presented to us in a way that is precisely locked to our eye movements, we have the experience of reading an unchanging page when in fact the only real words on the page at any moment are those we are currently reading (Rayner, 1998). Change blindness experiments nicely illustrate how we rely on the stability of the world to allow it to represent itself. We do not notice quite large changes in images if they are interspersed with a flicker (Simons & Levin, 1997; without the flicker, the changed element would attract our attention and we would represent it). We may even fail to notice the substitution of the stranger we are talking to by another individual who is dressed similarly but is otherwise only broadly similar to the original (Simons & Levin, 1998).


It is not just savings on cost that favour allowing the world to represent itself. There is also the fact that, as Rodney Brooks (1990: 5) famously put it, “the world is its own best model.” An internal model of the external environment can never be as accurate a representation as the world is of itself. It even tracks changes in itself for us, in real time and in detail. Other things being equal (we do not need an offline copy of the representation for further processing; we expect fluctuation between states, and so on), we will very often do better to outsource the vehicles of our representations to the world itself. While most of the work on how we outsource representations has focused on either perceptual representations or on the demanding and highly distributed representations used by knowledge professionals like scientists, there is evidence that ordinary beliefs, even beliefs about ourselves, are outsourced. There is evidence, first, that we lack stable and detailed internal representations of our own beliefs. Consider, here, cognitive dissonance experiments (Cooper, 2007). In an influential paradigm, individuals are induced to write counterattitudinal essays. To say that the essays are counterattitudinal is to say that the proposition defended is one that the individuals can be expected to reject (often, matched controls are asked about these propositions to validate this fact). For example, college students might be asked to write short essays defending the proposition that tuition fees at their college should rise. A between-subjects design is employed, with one group paid to write the essays, while another is induced (through very mild pressure; mild enough so that they see themselves as free to refuse) to write them. 
The commonly reported finding is this: individuals in the induction group, but not individuals in the paid group, tend to come to believe the proposition they defended.8 Why do participants come to be better disposed to the proposition they defended when, and only when, they see themselves as having written the essay freely? A plausible, albeit partial, explanation is that they take their own behavior as evidence for what they believe (Carruthers, 2013). Participants in the paid group can readily explain their behavior to themselves as arising from financial inducement, and not from an intrinsic desire to defend the proposition; those who saw themselves as writing the essay freely do not have that explanation available, and so we see differences in the attitude each group tends to end up espousing. These experiments provide evidence that our internal representations are relatively impoverished or unstable. They are relatively easily swamped by weak evidence that we actually believe something different. The same kind of fragility of belief is demonstrated by choice blindness experiments. Once again, these experiments provide evidence that our internal representations are easily trumped by evidence that we believe something else. In these experiments, people make a choice between pairs of options, and their response is recorded. At the end of a block of choices, the experimenters go through the pairs of options once more, and ask the participant to justify her choice. On some questions, however, the responses are switched, so that participants are asked to justify a choice that is different from the one they really made. Participants are given the opportunity to correct their responses if they like, and some do (saying they must have made a mistake). But on most trials the person accepts the switched answer as their own, and goes on to fluently defend it: the evidence the experimenters present (that the participant has chosen that option) is sufficient for the person to attribute the corresponding belief to themselves. This kind of belief revision has been demonstrated in multiple spheres, from judgments of facial attractiveness (Johansson, Hall, Sikström, & Olsson, 2005) to moral judgments (Hall, Johansson, & Strandberg, 2012). It occurs in the political sphere too. In one experiment, members of the public were approached in a park, and asked to participate in a study of their views on political policy (Hall et al., 2013). They were shown real policy options put forward by the political parties contesting a forthcoming election, and asked the extent to which they agreed with the policies (by marking a point on a 100 mm line, with one end signifying full agreement and the other full disagreement). They were then asked to justify their choices, including choices on which responses had been altered (the median alteration changed the point marked by 35.7 mm). 92% of respondents offered at least one justification of a choice that had been altered.

8  It would be a mistake to place too much weight on these experiments, because most were conducted prior to widespread awareness of problems of replicability, and often had too small a number of participants for us to be confident that the effect is real, even setting aside the possibility of inflated effect sizes due to the file drawer effect. There has not yet been a preregistered multi-lab replication attempt of this work, though one is currently in the planning stages. That said, there are preregistered replications of the basic finding (e.g. Forstmann & Sagioglou, 2020), which makes me somewhat optimistic that the effect will replicate. The researchers behind the multi-lab replication attempt report that they believe that the effect is real, albeit inflated in the published literature (Vaidis & Sleegers, 2018).
There was no correlation between self-reported levels of political engagement and likelihood of noticing the alteration. Similar results were obtained in a study of attitudes to moral issues in the news, in which the wording of responses was changed to reverse choices (so that “prohibited” became “permitted”, for example; Hall et al., 2012). The ease with which we are led to self-attribute beliefs by these kinds of manipulations suggests that we lack detailed internal representations of our beliefs.9 Just as our internal representations of the visual scene lack detail, and are easily swamped by changes in the external world so long as gross features are retained (white man dressed as a construction worker), so our beliefs can be swamped by quite weak evidence that we believed something else all along. Notice, though, that this is not just good evidence that we lack detailed internal representations. It is also good evidence of how we make up for this lack: by reference to cues for belief: in this case, the cues are our own behavior. Under a variety of circumstances, we will tend to construct or reconstruct our beliefs on the spot, rather than recalling them, with attention to cues to belief of various kinds central to this reconstruction.10 In many of the cases in which we might suspect hypocrisy, I suggest, the apparent hypocrite is reconstructing their belief, and generating a response that is different to the one we would have expected earlier. There is, I will argue, good reason to think that the person who endorses a belief at a later time that is at odds with their earlier belief, and does so through this reconstructive process (without noticing the change), should not be described as a hypocrite.

9  Interestingly, there is evidence that we represent our beliefs at a finer level of granularity than the beliefs of others (Thornton, Weaverdyck, Mildner, & Tamir, 2019). The evidence I am marshalling here suggests, however, that we do not thereby represent stable and detailed mental states at all. There are three reasons why the data from Thornton et al. are compatible with my picture. First, their claim is comparative: people represent their own states more distinctly than those of others (there is a gradient in distinctness, such that the more socially distant someone is, the less distinct our representations tend to be). More distinct is compatible with very indistinct, of course. Second, as Thornton et al. note, “rich representations are not necessarily accurate ones”; indeed the richness of the representations may help give rise to an illusion that we know our own mental states (6). Third, any temptation to regard this as evidence about the distinctness of one’s mental states should be heavily tempered by the method, which did not ask participants to introspect but instead asked them to consider images paired with state descriptors.

3.6  Belief Revision in Response to Social Cues

The evidence reviewed above suggests that our beliefs are fragile: we can be led, by relatively trivial manipulations, to self-attribute beliefs at odds with those we would have reported prior to the manipulation. These data strongly suggest that our internal representations lack the richness and stability we pretheoretically expect of them, and that in turn suggests that we rely on the stability of the environment to ensure stability of belief. Of course, the manipulations involved in these experiments are not intended to mimic real-world processes. It seems unlikely that we outsource our internal representations to our own behavior (though perhaps it is adaptive to bring our inward beliefs into alignment with our behavior, so that we are not out of step with social norms when these norms change). Instead, I suggest that the shifts we see in these experiments arise from mechanisms designed to outsource belief to the social and natural environment. In this section, I will provide evidence of the use of social referencing for belief self-attribution, and argue that this mechanism explains many instances of weak hypocrisy. In the studies I have in mind, belief revision is induced not by manipulating cues to what the person themselves believed in the past but by manipulating cues to what others with whom they identify believe. For instance, Maoz, Ward, Katz, and Ross (2002) presented Israeli Jews and Palestinians with a peace proposal. They found that information about who drew up the proposal (Jews or Palestinians) significantly influenced participants’ degree of support for it. Cohen (2003) reports even stronger evidence: he found that information about whether welfare policies were supported by House Democrats or House Republicans was a more powerful predictor of attitudes to them than policy content. Belief revision in choice blindness experiments turns on the (apparent) fact that (apparently) I believe that p. Evidence that I have previously endorsed the proposition trumps the content of my representations when it comes to determining my attitude toward it. In these experiments, evidence that people like me believe that p has the same kind of effect. Evidence that my fellow Democrats/Jews/Dodgers fans support the policy is a cue to acceptance; conversely (and, I suspect, independently) evidence that the Other Side opposes it has the same effect. None of this is surprising, if people lack detailed stable internal representations. Instead, we must often construct our beliefs by looking outward. Many cases of apparent political hypocrisy can be explained via this mechanism. Consider support for Donald Trump. When he was a candidate for the nomination, party members and supporters chose between him and rivals by reference to a variety of different things, including what he said and did, as well as cues like the (apparent) beliefs of others they regard as relevantly like them (locals; fellow party members; Ivy League-educated professionals; white working-class men — different group memberships will have different strengths as cues for different individuals, depending on their degree of identification with them). Once he was the candidate, however, there was a strong cue for accepting the proposition that he’s the best candidate for those who identified with the party.

10  Note that constructing beliefs rather than recalling them is in fact very common, especially with regard to dispositional beliefs. As Gareth Evans (1982) influentially noted, we answer questions like “do you believe there will be another World War?” not by looking inward, to a repository of our beliefs, but by considering the world. This kind of case is somewhat different from those under discussion, since in this kind of case we (apparently) attempt to consider the facts that make the claim true or false, rather than looking to indirect cues for whether we should accept it (in fact, I am confident that we also look to cues to belief in even the best of cases). Nevertheless, given the similarities it is unsurprising that there is no phenomenological difference between the different ways of constructing our beliefs on the spot.
Rather than a mix of signals from the party, some supporting Trump and others supporting rival candidates, the signal strengthened and unified. For those who had supported rival candidates, the sampling of cues tended to cause unknowing belief revision. Hence the fact (for example) that many people who had said “never Trump” came to be enthusiastic supporters. Of course, these individuals often retained memories of endorsing rivals. Our memories are extremely fragile and context-sensitive, however, and recall is less reliable than might be naively supposed. We are apt to confabulate and rationalize, and may reinterpret what we recall as more equivocal than it was at the time. Even the person who recalls saying “never Trump!” may now confabulate mental reservations of all kinds. The mechanisms of cognitive dissonance reduction work subpersonally; agents need not have any awareness of the confabulation. They may move from p to ~p while experiencing themselves as holding steadfast. The process can occur very quickly: consider how Republican support for American intervention in Syria rose dramatically in the immediate aftermath of President Trump’s decision to engage in military action (Clement, 2017). That the president has done it is a powerful cue to thinking it’s appropriate for those who identify with him or his party. Of course, this mechanism is not partisan: we should expect similar processes to work on all sides.11

Of course, this is not intended as a full explanation either of support for Trump or of why people might switch from preferring one of his rivals to backing him. Indeed, social referencing is parasitic on other mechanisms: the social signal can be emitted only once people have already begun to express or evince support for a belief. Many explanations have been offered for Trump’s appeal, and I am not qualified to adjudicate between them. No doubt there are multiple, partial, explanations for his success. My claim is only that social referencing explains some of the variance, both at the group and the individual level. At the group level: there are some individuals for whom social referencing is a primary explanation of their support for Trump. At the individual level: there are some (probably many) people for whom social referencing is a partial explanation for their support for Trump: it reduces the threshold for acceptance on other grounds, or makes them less likely to scrutinize their reasons. Outside the domain of politics (narrowly construed), we ought to see the same mechanism of belief formation and revision at work. As already noted above, there is evidence for this claim, from choice blindness experiments. It may be that the relatively rapid change in attitudes on once controversial topics is explained, in part, by this mechanism. Consider how rapidly support for same-sex marriage has risen in the United States, from 35% in 2001 to 62% in 2017 (Pew Research Center, 2017). While generational change plays a role in explaining the rapid rise in support, support has risen among all generations (doubling, from 21% to 42%, among those born 1928–1945 across the time period). Once again, there are no doubt multiple explanations for this shift (and no reason to doubt that argument and ordinary first-order evidence played a role), but I suspect changing attitudes snowballed via this kind of mechanism: as more people changed their attitudes, more people like me had that belief and signals for belief revision strengthened.

Apparent religious hypocrisy, of the kind represented by the Sunday effect (the tendency of Christians to be significantly more likely to display behaviors that conform to the doctrines they accept on a Sunday than on other days of the week), can’t be explained in precisely the same fashion, because these shifts don’t involve changes in cues that people like me hold the relevant belief. I suggest that a related mechanism is at work, however. Some beliefs are particularly hard to internally represent, because they have somewhat counterintuitive contents. This is especially the case with regard to what have come to be called ‘theologically correct’ representations (Barrett, 1999), as opposed to the representations of folk religion. While folk religion may be intuitive (it is in this sense that religion is natural, as McCauley (2011) puts it), doctrinal religions often elaborate concepts that are hard to think. Consider the doctrine of the trinity, or the proposition that God is unbounded by time and space. Because these concepts are highly unintuitive, even those who accept them in principle may substitute simpler and less counterintuitive concepts when they engage in religious cognition. Counterintuitive concepts are especially apt for offloading. Since they are hard to internally represent, we should expect them to be less salient to agents, and less likely to play a role in cognition unless and until they are prompted. We therefore should expect that agents often rely on external cues to trigger or partially constitute them. The inconsistency of behavior we noted may arise from such reliance: when the cue is salient, but only then, the representation is available to drive behavior (see Levy, 2018 for discussion). The mechanism is likely to be particularly powerful with regard to the representations of High Gods religions, which have emerged relatively recently through cultural evolution. Such religions are heavily reliant on external reminders of religion to increase cooperation (hence the ubiquity of representations of gods as watchful, from the eye of Horus to the Lord as shepherd (Norenzayan, 2013)). As a consequence, we may see fluctuations in the extent to which behavior is consistent with the moral norms of such religions as a consequence of availability of cues to belief.

11  It might be weaker on one side than another if group identification is weaker on that side, or if identification is not with party or partisan groupings. The rise of identity politics might entail a weakening of broad-based cues for belief revision, in favour of a fragmentation of such cues.
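The snowball dynamic described in this section, in which belief revision accelerates as more of one's in-group professes a belief, can be illustrated with a deliberately simple threshold simulation (a sketch in the spirit of standard cascade models, not anything from the chapter; the number of agents, the seed minority, and the threshold distribution are all illustrative assumptions):

```python
def simulate_cascade(n_agents=200, n_seeds=10, n_rounds=30):
    """Toy model of belief revision by social referencing.

    Each agent comes to profess the belief once the share of in-group
    members already professing it reaches the agent's personal threshold.
    All parameters are illustrative assumptions, not the chapter's data.
    """
    # Evenly spaced adoption thresholds between 0 and 0.8, so every agent
    # eventually adopts if enough of "people like me" already have.
    thresholds = [0.8 * i / n_agents for i in range(n_agents)]
    # A small seed minority professes the belief from the start.
    believes = [i < n_seeds for i in range(n_agents)]
    shares = []
    for _ in range(n_rounds):
        share = sum(believes) / n_agents
        shares.append(share)
        # Social referencing: adopt when the professed share in the
        # group meets or exceeds one's own threshold (never un-adopt).
        believes = [b or share >= t for b, t in zip(believes, thresholds)]
    return shares

trajectory = simulate_cascade()
print(f"initial share: {trajectory[0]:.2f}, final share: {trajectory[-1]:.2f}")
# → initial share: 0.05, final share: 1.00
```

A 5% seed minority is enough, on these assumed thresholds, to carry the whole population: each round's professed share clears the thresholds of a slightly larger group, whose profession in turn strengthens the cue for the rest. The point is only to make the strengthening-signal story concrete, not to model any particular electorate.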

3.7  Conclusion: Hypocrisy Unmasked

If hypocrisy requires knowing or self-deceptive belief revision, then belief revision via cue reliance mechanisms is not hypocritical. The person who revises her belief as cues for belief acceptance shift will often be unaware that she revises. She does not store her representations in sufficient detail for her to be able to detect revision. Rather, she relies on the environment to help constitute her belief, and as the environment shifts, her belief shifts with it. Such reliance is, I have argued, adaptive. If the environment is sufficiently stable and reliable in certain respects, we do better to store our beliefs externally, to the extent we can. Reliance on external cues and sampling on a need to know basis renders our beliefs vulnerable to shifts as cues change, but such shifts may themselves be adaptive (allowing our beliefs to track relevant changes in the world). Having our beliefs depend on external cues, and especially cues that our ingroup accepts a claim, is adaptive because groups are often better at homing in on important truths than individuals. This is true at a time (group deliberation is often very much better than individual deliberation; see Mercier, Trouche, Yama, Heintz, & Girotto, 2015; Mercier, Deguchi, Van der Henst, & Yama, 2015; Mercier & Sperber, 2017), and over time (cultural evolution allows groups to generate knowledge that cannot be accumulated within a single lifetime; see Richerson & Boyd, 2005; Boyd, Richerson, & Henrich, 2011). While outsourcing in this kind of way entails that our beliefs are vulnerable to shifts in cues, which do not always track the truth, this disadvantage is outweighed by the large benefits it brings. It is not merely adaptive evolutionarily to outsource representational load; it is epistemically adaptive: it increases our reliability at tracking truth.

Indeed, there is a case for going further and arguing that outsourcing in this kind of way is not merely adaptive but also rational. That is, outsourcing may not merely be the best we can do, given our cognitive limitations and the fact that it is often better to act quickly than to deliberate longer, but a way in which we proportion our beliefs to the evidence. The beliefs of others, after all, constitute evidence for us. If I want to know whether p, I should take into account not only the evidence bearing on whether p but also other people’s beliefs about p. If I believe that p but p is a minority belief among my epistemic peers, it is typically more likely that I have made a mistake than that my peers are wrong. The beliefs of others are (higher-order) evidence for me, and I rationally ought to take that evidence into account (Christensen, 2007; Lackey, 2010).12 The cues to belief in my environment are often higher-order evidence for me, insofar as their existence is causally dependent on the beliefs of others. The hypothesis presented here absolves many — though by no means all — apparently hypocritical individuals of possession of the vice. They do not have a vice, moral or epistemic: rather, they have an adaptive and arguably rational disposition to outsource representational duties. They are governing their cognition in accordance with the proper function of the relevant mechanisms. Indeed, I would suggest that in the increasingly complex societies in which we live today, and in which epistemic labor is highly and increasingly distributed, such outsourcing remains highly adaptive. Its costs are the inevitable upshot of the large benefits it brings us. Finally, it is worth noting that the hypothesis has clear implications for how (apparent) hypocrisy is best reduced: by attention to the environment and the cues it evinces.
If we want people to act consistently with normative claims they previously accepted, we need to ensure that the cues for action guidance by these claims are in place at the right times and in the right places. However, the fact that outsourcing of representational load to the environment is adaptive, and that it is adaptive to track changes in cues as they occur, suggests that there is little value in worrying about such consistency specifically. We do better to concern ourselves with ensuring that the cues track truth than with worrying about whether people are consistent over time. It is also worth noting that the hypothesis may be a hopeful one, in some respects, in our current environment. Many people fear that recent political events (the rise of the far right in Europe, the US presidential election and the Brexit referendum, for example) indicate much deeper and broader prejudice in the populations of the relevant countries than we had thought. If I’m right, that prejudice might not be as deep as feared. Rather than indicating that (say) many more people than we might have thought support the sexism and racism of Donald Trump, it may simply be that people swing their allegiance to whatever candidate is apparently supported by the party they identify with. They may just as easily swing back to a more principled candidate, and find themselves sincerely affirming their deep-seated opposition to racism. Beliefs are often very much less deep-seated than we hope or fear, and revision may come sooner than we expect.

12  It is important to note that many epistemologists who accept that disagreements with epistemic peers constitute higher-order evidence might reject my claim that outsourcing is rational, because they advance extremely restrictive notions of who counts as an epistemic peer. As Lackey (2010) notes, the idealized notion of peerhood these accounts work with threatens to cut the debate off from the real-world cases that give it its point. In any case, we can set this issue aside: when I disagree with many others, I can be very confident that among the dissenters are many people who are at least my epistemic peers (no doubt some are my epistemic superiors – at least with regard to the proposition at issue – and I ought to give their dissent especially heavy weight).

Acknowledgements  I am grateful to an audience at “Evolutionary ethics: The nuts and bolts approach,” held at Oxford Brookes in July 2018 for helpful comments. I am especially grateful to Helen De Cruz, Johan De Smedt and Mark Alfano for extremely helpful comments on the written version, in light of which the paper has been revised extensively.

References

Alicke, M. D., Gordon, E., & Rose, D. (2013). Hypocrisy: What counts? Philosophical Psychology, 26(5), 673–701.
Barrett, J. L. (1999). Theological correctness: Cognitive constraint and the study of religion. Method & Theory in the Study of Religion, 11(4), 325–339.
Batson, C. D., & Thompson, E. R. (2001). Why don't moral people act morally? Motivational considerations. Current Directions in Psychological Science, 10(2), 54–57.
Batson, C. D., Thompson, E. R., & Chen, H. J. (2002). Moral hypocrisy: Addressing some alternatives. Journal of Personality and Social Psychology, 83(2), 330–339.
Batson, C. D., Thompson, E. R., Seuferling, G., Whitney, H., & Strongman, J. A. (1999). Moral hypocrisy: Appearing moral to oneself without being so. Journal of Personality and Social Psychology, 77(3), 525–537.
Batson, C. D., Tsang, J., & Thompson, E. R. (2000). Weakness of will: Counting the cost of being moral. Unpublished manuscript.
Boyd, R., Richerson, P. J., & Henrich, J. (2011). The cultural niche: Why social learning is essential for human adaptation. Proceedings of the National Academy of Sciences, 108(Supplement 2), 10918–10925.
Brooks, R. A. (1990). Elephants don't play chess. Robotics and Autonomous Systems, 6(1–2), 3–15.
Browning, L. E., Patrick, S. C., Rollins, L. A., Griffith, S. C., & Russell, A. F. (2012). Kin selection, not group augmentation, predicts helping in an obligate cooperatively breeding bird. Proceedings of the Royal Society B: Biological Sciences, 279(1743), 3861–3869.
Carruthers, P. (2013). The opacity of mind. Oxford: Oxford University Press.
Chater, N. (2018). The mind is flat: The remarkable shallowness of the improvising brain. New Haven, CT: Yale University Press.
Christensen, D. (2007). Epistemology of disagreement: The good news. Philosophical Review, 116(2), 187–218.
Chudek, M., Heller, S., Birch, S., & Henrich, J. (2012). Prestige-biased cultural learning: Bystander's differential attention to potential models influences children's learning. Evolution and Human Behavior, 33(1), 46–56.
Clark, A. (1997). Being there: Putting brain, body, and world together again. Cambridge, MA: MIT Press.
Clement, S. (2017, April 10). Poll: Narrow support for Trump's strike in Syria. Washington Post. https://www.washingtonpost.com/world/national-security/poll-narrow-support-for-trumps-strike-in-syria/2017/04/10/15dab5f6-1e02-11e7-a0a7-8b2a45e3dc84_story.html?utm_term=.d786f6570982

3  Not So Hypocritical After All: Belief Revision Is Adaptive and Often Unnoticed


Cohen, G. L. (2003). Party over policy: The dominating impact of group influence on political beliefs. Journal of Personality and Social Psychology, 85(5), 808–822.
Cooper, J. (2007). Cognitive dissonance: Fifty years of a classic theory. London: Sage.
Dong, M., van Prooijen, J.-W., & van Lange, P. A. M. (2019). Self-enhancement in moral hypocrisy: Moral superiority and moral identity are about better appearances. PLoS One, 14(7), e0219382. https://doi.org/10.1371/journal.pone.0219382
Duhaime, E. P. (2015). Is the call to prayer a call to cooperate? A field experiment on the impact of religious salience on prosocial behavior. Judgment and Decision Making, 10(6), 593–596.
Edelman, B. (2009). Red light states: Who buys online adult entertainment? Journal of Economic Perspectives, 23(1), 209–220.
Evans, G. (1982). The varieties of reference. New York: Oxford University Press.
Forstmann, M., & Sagioglou, C. (2020). Religious concept activation attenuates cognitive dissonance reduction in free-choice and induced compliance paradigms. Journal of Social Psychology, 160(1), 75–91.
Graham, J., Meindl, P., Koleva, S., Iyer, R., & Johnson, K. M. (2015). When values and behavior conflict: Moral pluralism and intrapersonal moral hypocrisy. Social and Personality Psychology Compass, 9(3), 158–170.
Gray, R. D., & Watts, J. (2017). Cultural macroevolution matters. Proceedings of the National Academy of Sciences, 114(30), 7846–7852.
Hall, L., Johansson, P., & Strandberg, T. (2012). Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey. PLoS One, 7(9), e45457.
Hall, L., Strandberg, T., Pärnamets, P., Lind, A., Tärning, B., & Johansson, P. (2013). How the polls can be both spot on and dead wrong: Using choice blindness to shift political attitudes and voter intentions. PLoS One, 8(4), e60554.
Hamilton, W. D. (1964). The genetical evolution of social behaviour: I. Journal of Theoretical Biology, 7(1), 1–16.
Harris, P. (2012). Trusting what you're told: How children learn from others. Cambridge, MA: Harvard University Press.
Henrich, J., & Boyd, R. (1998). The evolution of conformist transmission and between-group differences. Evolution and Human Behavior, 19(4), 215–242.
Henrich, J., & Gil-White, F. (2001). The evolution of prestige: Freely conferred deference as a mechanism for enhancing the benefits of cultural transmission. Evolution and Human Behavior, 22(3), 165–196.
Henrich, N., & Henrich, J. (2007). Why humans cooperate: A cultural and evolutionary explanation. New York: Oxford University Press.
Johansson, P., Hall, L., Sikström, S., & Olsson, A. (2005). Failure to detect mismatches between intention and outcome in a simple decision task. Science, 310(5745), 116–119.
Kurtzleben, D. (2016, October 23). Poll: White evangelicals have warmed to politicians who commit 'immoral' acts. National Public Radio. https://www.npr.org/2016/10/23/498890836/poll-white-evangelicals-have-warmed-to-politicians-who-commit-immoral-acts
Lackey, J. (2010). A justificationist view of disagreement's epistemic significance. In A. Haddock, A. Millar, & D. Pritchard (Eds.), Social epistemology (pp. 298–325). New York: Oxford University Press.
Laland, K. N. (2017). Darwin's unfinished symphony: How culture made the human mind. Princeton, NJ: Princeton University Press.
Levy, N. (2018). In praise of outsourcing. Contemporary Pragmatism, 15(3), 244–265.
Levy, N. (2019). Due deference to denialism: Explaining ordinary people's rejection of established scientific findings. Synthese, 196(1), 313–327.
Levy, N., & Alfano, M. (2020). Knowledge from vice. Mind, 129(515), 887–915.
Lönnqvist, J. E., Irlenbusch, B., & Walkowitz, G. (2014). Moral hypocrisy: Impression management or self-deception? Journal of Experimental Social Psychology, 55, 53–62.


Lopez, G. (2017, November 30). Lindsey Graham, 2017: I'm tired of media portraying Trump as a kook. Graham, 2016: Trump is a kook. Vox. https://www.vox.com/policy-and-politics/2017/11/30/16720814/lindsey-graham-trump-kook
Malhotra, D. K. (2008). (When) are religious people nicer? Religious salience and the 'Sunday effect' on pro-social behavior (NOM Working Paper No. 09-066). Boston: Harvard Business School. https://doi.org/10.2139/ssrn.1297275
Maoz, I., Ward, A., Katz, M., & Ross, L. (2002). Reactive devaluation of an "Israeli" vs. "Palestinian" peace proposal. Journal of Conflict Resolution, 46(4), 515–546.
Mascaro, O., & Sperber, D. (2009). The moral, epistemic, and mindreading components of children's vigilance towards deception. Cognition, 112(3), 367–380.
McCauley, R. N. (2011). Why religion is natural and science is not. New York: Oxford University Press.
Mercier, H., Deguchi, M., Van der Henst, J.-B., & Yama, H. (2015). The benefits of argumentation are cross-culturally robust: The case of Japan. Thinking & Reasoning, 22(1), 1–15.
Mercier, H., & Sperber, D. (2017). The enigma of reason. Cambridge, MA: Harvard University Press.
Mercier, H., Trouche, E., Yama, H., Heintz, C., & Girotto, V. (2015). Experts and laymen grossly underestimate the benefits of argumentation for reasoning. Thinking & Reasoning, 21(3), 341–355.
Miller, D. D. (2019). The mystery of evangelical Trump support? Constellations, 26(1), 43–58.
Monin, B., & Merritt, A. (2012). Moral hypocrisy, moral inconsistency, and the struggle for moral integrity. In M. Mikulincer & P. Shaver (Eds.), The social psychology of morality: Exploring the causes of good and evil (pp. 167–184). New York: American Psychological Association.
Norenzayan, A. (2013). Big gods: How religion transformed cooperation and conflict. Princeton, NJ: Princeton University Press.
Norenzayan, A., Shariff, A. F., Gervais, W. M., Willard, A. K., McNamara, R., Slingerland, E., et al. (2016). The cultural evolution of prosocial religions. Behavioral and Brain Sciences, 39(1), 1–19.
Northover, S. B., Pedersen, W. C., Cohen, A. B., & Andrews, P. W. (2017). Artificial surveillance cues do not increase generosity: Two meta-analyses. Evolution and Human Behavior, 38(1), 144–153.
Nowak, M. A., Tarnita, C. E., & Wilson, E. O. (2010). The evolution of eusociality. Nature, 466(7310), 1057–1062.
Pew Research Center. (2017, June 26). Changing attitudes on gay marriage. http://www.pewforum.org/fact-sheet/changing-attitudes-on-gay-marriage/
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3), 372–422.
Richerson, P. J., & Boyd, R. (2005). Not by genes alone. Chicago: University of Chicago Press.
Rigdon, M., Ishii, K., Watabe, M., & Kitayama, S. (2009). Minimal social cues in the dictator game. Journal of Economic Psychology, 30(3), 358–367.
Sie, M. (2015). Moral hypocrisy and acting for reasons: How moralizing can invite self-deception. Ethical Theory and Moral Practice, 18(2), 223–235.
Simons, D. J., & Levin, D. T. (1997). Change blindness. Trends in Cognitive Sciences, 1(7), 261–267.
Simons, D. J., & Levin, D. T. (1998). Failure to detect changes to people during a real-world interaction. Psychonomic Bulletin & Review, 5(4), 644–649.
Sperber, D., Clément, F., et al. (2010). Epistemic vigilance. Mind & Language, 25(4), 359–393.
Sterelny, K. (2014). A paleolithic reciprocation crisis: Symbols, signals, and norms. Biological Theory, 9(1), 65–77.
Sterelny, K. (2016). Cooperation, culture, and conflict. British Journal for the Philosophy of Science, 67(1), 31–58.
Swanson, G. E. (1966). The birth of the gods. Ann Arbor, MI: University of Michigan Press.
Thornton, M. A., Weaverdyck, M. E., Mildner, J. N., & Tamir, D. I. (2019). People represent their own mental states more distinctly than those of others. Nature Communications, 10, 2117.


Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46(1), 35–57.
Vaidis, D., & Sleegers, W. (2018). Large scale registered replication project – Cognitive dissonance: Induced compliance paradigm with counterattitudinal essay. Open Science Framework. https://osf.io/9xsmj/
Whitehouse, H., François, P., Savage, P. E., Currie, T. E., Feeney, K. C., Cioni, E., et al. (2019). Complex societies precede moralizing gods throughout world history. Nature, 568, 226–229.
Xygalatas, D. (2013). Effects of religious setting on cooperative behaviour: A case study from Mauritius. Religion, Brain & Behavior, 3(2), 91–102.

Chapter 4

The Chimpanzee Stone Accumulation Ritual and the Evolution of Moral Behavior James B. Harrod

Abstract What might constitute early evidence for the evolution of ethics? The report of chimpanzee stone accumulation behavior at four sites in West Africa (Kühl HS, Kalan AK, Arandjelovic M, Aubert F, D'Auvergne L, Goedmakers A, … Boesch C, Sci Rep 6:22219, 2016) is a strong candidate for such evidence. The authors hypothesized that the behavior qualified as a form of ritualized behavioral display. They suggested several explanatory hypotheses, but found them inadequate and the behavior puzzling and enigmatic. I develop a hypothesis to explain and interpret the behavioral pattern based on positing its behavioral contexts and re-analyzing its relation to the everyday aggression display and other communicative behaviors. It is explained as an ethological ritualization that down-regulates aggression toward the alpha or indirectly toward a scapegoat, and enacts a ritual performance of non-violent resistance to high-ranking male abuse via a set of creative moral behaviors. This has implications for homologous behaviors descending from the common ancestor of humans and chimpanzees, ca. 7–12 million years ago, and for hypothesizing stages in the evolution of morality and ethics in human and other species.

Keywords Evolution of ethics · Evolution of morality · Evolution of culture · Chimpanzee · Cairn · Sacred tree · Scapegoat · Sublimation · Non-violence · Creativity

4.1 Introduction

The first thing I want to say is that the chimpanzee species is under imminent threat of extinction. The situation is urgent. Humans have an ethical responsibility to protect chimpanzees, to conserve and expand their natural habitat, and, where that is not possible, to establish chimpanzee reserves with as wild a habitat as feasible. This is in the self-interest of the human species, as it preserves species diversity, genetic diversity and cultural diversity (Kühl et al., 2019). We owe this to our primate kin, who are sentient beings with an interest in not suffering from human harms, and this moral obligation is proclaimed in human religious traditions and law globally.

J. B. Harrod (*) Center for Research on the Origins of Art and Religion, Portland, ME, USA
e-mail: [email protected]
© Springer Nature Switzerland AG 2021
J. De Smedt, H. De Cruz (eds.), Empirically Engaged Evolutionary Ethics, Synthese Library 437, https://doi.org/10.1007/978-3-030-68802-8_4

4.1.1 The Question

What might constitute early evidence for the evolution of ethical and moral behavior in the two million years of human evolution? To answer this question and advance research on the evolution of ethics, it would be useful to know whether there is any evidence for chimpanzees or an early species of Homo that might be categorized as ethics or ethical-moral behavior, with possible creative symbolic features, realizing that these heuristic terms may be less differentiated for such species. I suggest Kühl et al. (2016) provides such evidence. They reported the discovery of four sites in West Africa where chimpanzees hurl stones at trees, bang stones against them, and toss stones into tree hollows. They hypothesized that this stone accumulation behavior is a form of ritualized behavior, but they considered what it signifies puzzling and enigmatic. They suggested three possible explanatory hypotheses and found none convincing. They provided no behavioral context that might explain the behaviors. In this study I propose interrelated behavioral contexts for the ritual, including high-rank male aggression display, subordinate abuse, the victim's scapegoating behavior, and aggressive stone weapon concealment behavior. Given such a context, I hypothesize that the stone accumulation behavior involves inhibiting and redirecting a victim's retaliatory aggression into a creative ritual performance. The stone accumulation behavior at special trees provides the first evidence for human-like deep moral behavior in a non-human kin species. This has implications for homologous behaviors in early hominins and for hypothesizing stages in the emergence of ethics in human evolution.

4.1.2 Definition of Morality and Its Components

Addressing the question of morality in human evolution as well as across different species requires a concise yet comprehensive definition of morality. A limited literature review suggests moral behavior can be characterized by at least four components (Rest, 1984; Swaner, 2005). Drawing on task terms used in neuroimaging studies on morality and studies on the evolution of culture, I have added four more components: social cooperation and social norms; distributive justice and unfair


share; moral self-regulation; and hierarchical norm maintenance, for a total of eight components.

1. Moral sensitivity: sensitivity to moral violations; witnessing and protesting harm to others (e.g., war, assault, child and domestic abuse, child poverty, individuals with disabilities and other stigmatized conditions).
2. Dominance/submission behaviors in a power hierarchy, such as alliance-building, retaliation, norm maintenance, conflict resolution, and submission responses, including appeasement, scapegoating, frustration venting, social pain (feeling hurt or humiliated), rejection sensitivity, ressentiment, social exclusion or withdrawal, and separation distress.
3. Social cooperation (reciprocal altruism): social norm/convention/etiquette violation (embarrassment); the decision to cooperate versus defect; the decision to trust.
4. Distributive justice, which in neuroscience involves inequity perception: judging that an individual has received an unfair share or an unfair cooperation payoff.
5. Moral judgment: judging right and wrong; the ability to reason correctly about what 'ought' to be done in a specific situation, which may also incorporate moral intuitions or moral affects.
6. Moral motivation: personal commitment to moral action, accepting responsibility for the outcome; and, reciprocally, feeling the moral claim of the other on our responsibility to respond in a moral way.
7. Moral self-regulation, which in neuroscience involves motor inhibition and prepotent inhibition; down-regulation of emotions and drives, including aggression; and selecting moral responses, especially novel responses, such as forgiveness, in the context of a punishment/reward system.
8. Moral character: persistence in spite of fatigue or temptations to take the easy way out.

4.1.3 Background – Chimpanzee Cognitive, Affective, Moral and Communicative Capacities

Since the present study develops an explanatory hypothesis for the chimpanzee stone throwing/caching behavior (Kühl et al., 2016), I briefly summarize six relevant aspects of chimpanzee intelligence, communicative behaviors and culture.

First, great ape (orangutan, gorilla, chimpanzee, bonobo) intelligence indicates that they achieve the developmental stage of first-order symbolic behavior across cognitive domains, entertain symbolic and perceptual representations concurrently, and are capable of limited hierarchization with combinatorial mechanisms (Russon, 2004: 92–93). Chimpanzees have a capacity for a cognitive theory of mind somewhere between that of monkeys and humans, variously called first-order intentional states, behavioral abstractions, or partial theory of mind (Bräuer, Call, & Tomasello, 2005; Call & Tomasello, 2008; Herrmann, Call, Hernández-Lloreda, Hare, & Tomasello, 2007; Povinelli, Bering, & Giambrone, 2000; Tomasello, Call, & Hare,


2003) or full theory of mind (Boesch & Boesch-Achermann, 2000: 242–252). They can distinguish notions pertaining to intentionality, such as 'unable' versus 'unwilling' (Call, Hare, Carpenter, & Tomasello, 2004). They have limited analogical reasoning (Flemming, Beran, Thompson, Kleider, & Washburn, 2008; Russon, 2004). They teach by imitation (Whiten, McGuigan, Marshall-Pescini, & Hopper, 2009). Recent neuroimaging research shows that chimpanzees – like humans – have a resting default network and thus can access emotionally laden, episodic memory and some level of mental self-projection into the past, the future, and another individual's perspective (Rilling et al., 2007).

Second, chimpanzee vocalizations express at least a dozen different emotions, including the everyday emotions of social fear, body-contact enjoyment, food enjoyment, social excitement, puzzlement, and fear of strangeness (Goodall, 1986: 127), and they have a rich emotional life (de Waal, 2019). Harrod (2011, 2014a) drew on Goodall's categorization of call vocalizations and their pluri-significations to interpret chimpanzee behavior patterns when confronting situations of birth, death, consortship and special places in nature, including the 'rain dance', as meeting a definition of religious/spiritual behavior.

Third, chimpanzees use a sophisticated array of communicative behaviors, including a repertoire of at least 32 distinctive vocalizations, as well as facial expressions, gestures, and postures, to express both normative and novel categorical mental representations, which a receiver must interpret according to behavioral context (Goodall, 1986: 114–45). The meaning of a particular display can be modified by varying its sequence of elements (syntax) (Slocombe & Zuberbühler, 2005, 2006). Chimpanzees innovate signals in display sequences and use symbol displacements onto iconic or token forms (Pika & Mitani, 2006; Wrangham, in Cromie, 1999). Callers can modify their own pant-hoots to signal their identity and status to others (Notman & Rendall, 2005). Chimpanzees can signal by combining pant-hoots and drumming on tree buttresses; they apparently modify drumming patterns in order to vary the signal's content, which is evidence for symbolic communication (Boesch & Boesch-Achermann, 2000: 235–37).

Fourth, chimpanzees engage in play and miming at a level that developmental psychology terms symbolic play, including pretend play, pretense, and re-enacting actions (events, scripts) outside their usual context and without their usual objectives, and symbolic object use (Russon, 2004: Table 6.2; Hirata, Yamakoshi, Fujita, Ohashi, & Matsuzawa, 2001; Savage-Rumbaugh & Lewin, 1994: 276–78). A solitary individual may also groom a leaf (Goodall, 1986: 391). In two captive groups, adolescent and adult chimpanzees frequently used the bipedal swagger component of aggression displays as an invitation to play (Goodall, 1986: 144).

Fifth, chimpanzees have culture. Whiten and Erdal (2012) note that studies have identified more than 40 different wild chimpanzee cultural traditions pertaining to food processing, tool use, grooming and social and sexual behaviors (Whiten, 2005; Whiten et al., 1999). Communities can sustain alternative cultural techniques, cultural traditions can spread from community to community (Whiten & Mesoudi, 2008), and there is evidence for cultural conformity and conservatism (Hopper, Schapiro, Lambeth, & Brosnan, 2011; Whiten, Horner, & de Waal, 2005).


Sixth, with respect to morality, chimpanzees have a tendency to develop social norms and enforce them, break up fights, and engage in conflict reconciliation to maintain community peacefulness. They build alliances to establish a social power hierarchy but also engage in mutual aid, reciprocity and sharing, and the removal of abusive dominants. They show signs of guilt and shame when violating social rules of the group and use appeasement gestures ('politeness'). They have a sense of fairness and inequity aversion, distress over the suffering of other group members, and sympathy for individuals with disabilities. They can self-regulate and inhibit instincts, including aggression, and delay gratification (de Waal, 1996). Language-competent chimpanzees and bonobos have moral precursors for the use of symbols in the value judgments 'right' or 'wrong' and 'good' or 'bad' (Lyn, Franks, & Savage-Rumbaugh, 2008). Chimpanzees engage in functional altruism (acts costly to the performer that benefit a recipient), typical of most animals; intentional targeted helping (which entails awareness of how the other will benefit), typical of large-brained animals such as primates, cetaceans and elephants; and selfish helping (the helper intentionally seeking return benefits), typical of some large-brained animals (de Waal, 2006) – all aspects of prosocial concern (Burkart, Brügger, & van Schaik, 2018).

In this study, the above chimpanzee capacities for morality and the other cognitive, affective, communicative, symbolic play and cultural behaviors provide a backdrop for analyzing, interpreting and developing an explanatory hypothesis for chimpanzee stone accumulations at special trees (Kühl et al., 2016).

4.2 Summary of Kühl et al. (2016)

4.2.1 Hurl, Bang, Toss (Cache, Heap) and Stone Accumulations

Kühl et al. (2016) reported the novel discovery of wild chimpanzee stone accumulation sites in West Africa. Sampling 51 chimpanzee research sites across Africa, they found 4 populations, in Côte d'Ivoire, Guinea, Guinea-Bissau and Liberia, where chimpanzees habitually hurl or bang rocks against trees or toss them into tree hollows or tree buttresses. This results in stone accumulations in and around the trees. Video observations recorded around 20 individuals, mostly adult males, plus 1 adult female and 1 juvenile, engaged in 64 instances of display, collection and stone-handling behaviors. The authors reported the same individuals repeatedly engaging in these behaviors at the same trees, suggesting that individuals frequently revisited sites. They engaged in three distinct behaviors, labeled HURL, BANG and TOSS (Fig. 4.1), the latter resulting in stones heaped in a hollow tree or heaped between buttress roots, and all three behaviors resulting in stone scatter at sites (Fig. 4.2). TOSS could also be termed CACHE or HEAP.

The study categorized this behavior as habitual, which by definition is a behavior that occurs repeatedly in several individuals, consistent with some degree of social


transmission. Habitual behaviors present in one or more populations but absent in other populations are counted as one type of cultural variation (Whiten et al., 1999). Stone throwing/caching at trees is now added to the list of chimpanzee cultural behaviors, along with forty or more other behaviors, including consortship (an exclusive-partner relationship away from the main group, predominantly consensual and egalitarian, the only form of mating with affiliative behaviors), the rain dance, and various tool-making and tool-use behaviors.

4.2.2 Flowchart

Kühl et al. (2016) analyzed and summarized their observations in a flow chart (Table 4.1, with slight modifications and renumbering of steps, after Kühl et al., 2016: Fig. 3). Three behaviors were common to all observations (numbered 1, 2, 3 below). First, the sequence began with picking up and handling a rock. This action appeared to trigger or cue variable elements of the initial threat stage of a chimpanzee aggression display, including piloerection (autonomic hair bristling; the human analogs are frisson, chills and shudders, and goose bumps), swaying back and forth, or bipedal stance. Then the introductory and build-up phase of the pant-hoot vocalization occurred in 50 of 50 audio recordings. This was followed by HURL in about half the cases, BANG in about a quarter, and TOSS – or more precisely CACHE or HEAP – in about a quarter. Instances of pant-hoot climax with screams and/or drumming with hands or feet on the tree concluded the performances, along with other less common behaviors.
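The rough proportions just described can be checked directly against the counts reported in Table 4.1 (a trivial arithmetic check, nothing more):

```python
# 'Throw' variant counts from Kühl et al. (2016), Table 4.1.
counts = {"HURL": 36, "BANG": 15, "TOSS": 12}
total = sum(counts.values())  # 63 observations involving a 'throw'

for behavior, n in counts.items():
    print(f"{behavior}: {n}/{total} = {n / total:.0%}")
# HURL: 36/63 = 57%, BANG: 15/63 = 24%, TOSS: 12/63 = 19%
```

Strictly, TOSS (19%) is closer to a fifth than a quarter, but the qualitative picture stands: HURL predominates.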

Fig. 4.1 HURL, BANG and TOSS (CACHE). (After Kühl et al., 2016: Fig. 2a, reordered)


Fig. 4.2 Stone heaps and site scatter at Boé, Guinea-Bissau. (After Kühl et al., 2016: Fig. 2b)

4.2.3 Problematic Explanatory Hypotheses

Kühl et al. (2016) judged this behavior intriguing, puzzling and enigmatic. They suggested three possible hypotheses to explain it, but found all three problematic. First, the behavior could be a modification of the male chimpanzee aggression display to enhance sound propagation in more open savannah-woodland landscapes. Stone throwing, specifically the 'bang' variant of the behavior, may initially have emerged as a variant of hand and feet drumming and, over time, become more and more independent of that original trigger. In this case, accumulations are a by-product of modified display behaviors occurring at fixed locations. Second, and this is not exclusive of the first hypothesis, the chimpanzees may be marking territorial boundaries and pathways with cairns, a practice found among many historical human societies. Kühl et al. (2016: 6) acknowledged that these first two explanations 'need to be evaluated with long-term studies to elucidate the broader context in which this behavior is shown'. Kehoe (2016), a co-author, observed that both the first and the second hypothesis are tricky to test, given that many of these sites are outside of protected areas and undergoing local habitat loss. Third, the chimpanzee stone accumulations are "superficially…very similar" to stone accumulation shrines at sacred trees described for indigenous West African peoples, and raise the question whether there are any parallels between chimpanzee stone throwing and human cairn building. Kehoe noted that the third hypothesis is highly speculative and the least likely explanation, though worth pondering. When this sacred cairn hypothesis went internationally viral, she publicly rejected it. I agree, and below I add to the problems with each hypothesis.

Hypothesis 3 suggests chimpanzee stone caching and stone throwing at trees may be some sort of proto-religious behavior similar to the stone cairns that humans place at sacred trees in cultural traditions around the world. Harrod (2011) proposed a definition of religious behavior suitable across different species, including the components awe, wonder, careful observation, reverence, non-ordinary silence, dyadic intimacy, etc. Harrod (2014a) reviewed candidates for chimpanzee religious behaviors and demonstrated that reported religious-like behaviors correspond to components of the


Table 4.1 Chimpanzee stone accumulation flow chart (modified, after Kühl et al., 2016: Fig. 3)

(Start)
1. Pick up and handle a rock (n = 63)
   Display behavior: swaying (n = 35); bipedal stance (n = 25); piloerection (n = 48); leaf-clipping (n = 1)
2. Pant-hoot (50 of 50): introduction and/or build-up phase
3. 'Throw' stone (n = 63):
   HURL (n = 36): directed throw at the tree
   BANG (n = 15): repeatedly hit stone against the tree
   TOSS (n = 12): throw stone into a hollow of tree or roots
   Pant-hoot climax elements (n = 24): with scream; drumming (n = 21) with hands or feet on tree
(End) Wait & listen (n = 14); display (n = 14); continue traveling (n = 18)
cross-species definition of religion and the associated anthropological concept of rites of passage, in this case birth, death, consortship and experiences of exaltation in 'nature'. Evaluated against this cross-species definition of religious behavior, chimpanzee stone throwing and caching do not appear to correspond to any component of that definition. Nor do they occur in a situation that in human-like terms may


be characterized as a rite of passage. Thus, I, too, reject the hypothesis that the ritual is some sort of religious behavior.

Kühl et al.'s Hypothesis 2 is that stone throwing/caching is some kind of territorial marking behavior. Currently there is no evidence to support this, and the study notes that spatial patterning studies are needed to determine whether the accumulation sites are centrally, peripherally or randomly located in a chimpanzee territory. I suggest that since chimpanzees normally maintain their territorial boundaries by patrolling them and acting aggressively toward any intruder, it seems unlikely that stone throwing at trees and caching stones in them would evolve as a substitute for patrolling. Moreover, when travelling near or patrolling a border, chimpanzees generally maintain silence. In contrast, stone hurling and banging behavior is noise-making and thus would seem an unlikely substitute for silent boundary patrolling. Analogizing to the human behavior of making cairns, whether at sacred trees or graves or along traveling routes or as territorial boundary markers, is also problematic, because such human cairns are designed to be visible and public. The case of chimpanzee caching behavior seems precisely the opposite: stones are deposited in buttress cavities and tree hollows where they are fully or partially concealed, hidden from distant view, and in a ritual space which a social group (or 'public') does not enter.

We are left with Hypothesis 1, that the behaviors are a modification of the chimpanzee aggression display incorporating stones to enhance sound propagation. This hypothesis appears problematic for several reasons. First, Kühl et al. had to rely on videos of the scene-at-the-tree and archeological residues at the tree, and could not access a behavioral context for the stone accumulation behaviors, which they note would require future research.
Any events that may have affected the participating chimpanzees prior to or after the stone accumulation behaviors remain unknown, and these are critical for any interpretative explanation. Second, though they emphasized chimpanzee stone throwing/caching behavior as a modulation of the chimpanzee aggression display, they did not adequately differentiate the stages of chimpanzee aggression displays, nor the critical difference between vocal and non-vocal displays. Nor did they consider that pant-hoots and screams also occur in courtship displays, group reunions, food excitement, cries for help from abuse, and other behaviors. Third, Hypothesis 1 emphasized long-distance sound propagation, noting that the fundamental frequencies produced by stone throwing may be higher and travel further in an open environment, and thus that the BANG variant may have initially emerged as a variant of hand-and-foot drumming. This seems problematic. First, only HURL occurs at each of the four sites (Table 4.1); BANG (‘repeatedly hit against the tree’, n = 15) and TOSS (deposition caching of stones in tree buttress roots or hollows, n = 12) occur two to three times less frequently than HURL (‘directed throw at the tree’, n = 36). Whether or not BANG emerged first, I suggest it is HURL that is the current behavioral variant first needing interpretation, with BANG and TOSS interpreted in its light. Second, listening to the seven posted videos and rating the sound: of the four labeled HURL, two have a loud sound, one is moderate and one seems silent; in the one video of BANG, the sound is loud; and in the two videos of


J. B. Harrod

TOSS, one has a moderate sound and one seems silent. This range from loud to silent does not seem to strongly support the long-distance sound propagation hypothesis. I now suggest how to go beyond these problematic hypotheses to formulate a hypothesis that more adequately explains and interprets the stone accumulation behaviors.

4.3  Method

4.3.1  Explanation

To arrive at a more adequate explanatory hypothesis, I review in more detail the chimpanzee aggression display and its behavioral context, showing its multi-valent signification. It is also necessary to take into account the known variant significations of pant-hoot and scream vocalizations and other relevant communicative behaviors; the differing significations of HURL, BANG and TOSS(CACHE); the greater frequency of HURL; and how the stone accumulation behaviors comprise a performance categorizable as ritual.

4.3.2  Stone Accumulation Behaviors as Ethological Ritualization and Ritual

While Kühl et al. refer to the chimpanzee stone accumulation behaviors as ‘ritualized behaviour patterns’, an explanatory hypothesis that goes beyond simply calling the behaviors an enigma observes that the ritualized behavior pattern is the product of an ethological ritualization, strictly speaking, and that the product itself is a ritual performance. In ethology the term ritualization refers to a stereotyped behavior that has evolved out of converting conflicted displays (instincts, drives), such as sexual and aggression displays, into a single display involving a more pacific intent, which transfers everyday communicative signals into non-everyday signals, as in courtship displays (Wilson, 1980: 110–113; Lorenz, 1963: 54–80). I analyze the chimpanzee stone accumulation behavior pattern as an example of such a ritualization of conflicted instincts. With respect to the application of the term ritual, Kühl et al. state that for human ritual practices there is “no overarching, agreed-upon definition of ritual”, citing Liénard and Boyer (2006). The latter assert that with respect to the concept of ritual “there is no clear criterion by which cultural anthropologists or other scholars of religion or classics determine that a particular type of behavior is or is not an instance of ritual” (814). This claim is reiterated in Boyer and Liénard (2006): “there is no precise definition of ‘ritual’ in any of the three fields that deal with its
typical manifestations”, namely, cultural anthropology, ethnology and clinical psychology (1); they do not even mention religious studies in this list. Boyer and Liénard overlook the discipline of religious studies, possibly because of bias or simply lack of knowledge. The latter is suggested by their claiming that Bell (1992) asserts no precise definition, while failing to cite Bell (1997), the standard textbook on ritual in the field of religious studies. They also offer the simplistic comment that the ethologists’ notion of ritualization uses criteria such as repetition and stereotypy, a comment that hardly does justice to the zoological definition of ritualization, for which they cite Lorenz (1963). Briefly, Bell (1997: 138–169) characterizes ritual or ritual-like actions as having six general categories of attributes: formalism (involving use of a more limited and rigidly organized set of expressions and gestures, a ‘restricted code’ of communication or behavior), traditionalism, disciplined invariance, rule-governance, sacral symbolism, and performance. With respect to a restricted code, Bell cites the anthropologist Maurice Bloch (1974), who argues that a ritual obliges participants to use a formal oratorical style, which is limited in intonation, syntax, vocabulary, loudness and fixity of order. Rituals appeal to tradition and generally continue to repeat historical precedent. Rituals are invariant in having a careful choreography, a disciplined set of actions marked by precise repetition and the actor’s physical control, whether performed in groups or individually. Rituals are governed by rules, which limit acceptable behaviors and movements. Rituals are performances: they have dramatic, spectacular or other nonordinary qualities, a theatrical-like frame or setting, and a symbolic dimension.
Finally, rituals involve some degree of sacrifice, letting go of an oppressive burden, release of tension or conflict, and offering up to that which is higher or greater than one’s mere ego or social role. Bell is clear that rituals may be religious but are not unique to religious institutions or traditions or rites of passage. As I noted earlier, evaluated against a cross-species definition of religious behavior, chimpanzee stone throwing and caching do not appear to correspond to any component of that definition, and I reject the hypothesis that the ritual is some sort of religious behavior. However, given the already schematized flow chart, and further analysis and results below, all six of Bell’s characteristic categories of ritual or ritual-like behaviors occur in the chimpanzee stone accumulation behaviors. Therefore, I regard these behaviors as categorically a ritual, and, as I will show, specifically, an ethical-moral or proto-ethical-moral ritual.

4.3.3  Semiotic Interpretation

If the chimpanzee stone accumulation ritual recombines a restricted set of call vocalizations with different significations, the ritual can be considered a case of semiosis. As such, several semiotic aspects of the ritual require analysis and interpretation:

(a) Recombination of sequential elements drawn from everyday behaviors;
(b) Communicative behavior in which a gesture or vocalization is pluri-signifying, with multiple significations, which may be interpreted as metaphors or proto-metaphors;
(c) Expression of meaning-complementarities (‘binary opposites’) held in tension, as part of an ethological ritualization, evoking sublimated conflict.

To avoid overly anthropocentric speculation, semiotic interpretation of the chimpanzee ritual must be consistent with and limited by the structural patterning and flow chart of the ritual elements, communicative behaviors, and symbolic complementarities.

4.4  Results

To develop robust explanatory and interpretive hypotheses for the ritual, in this results section I review how primatology differentiates the three stages of a chimpanzee aggression display and the two different types of aggression displays, vocal and non-vocal, and how primatology decodes the pluri-signifying nature of pant-hoot and scream calls. Finally, I propose a contextual matrix of moral behaviors in which the ritual is situated, which serves as the basis for a hypothesis interpreting what the elements of the ritual signify. This procedure does not itself provide evidence to support or falsify a hypothesis; rather, I am only developing hypotheses based on the evidence provided by Kühl et al. It is up to primatologists to obtain suitable contextual evidence in the field that might support or falsify the hypotheses I propose. I claim only that my hypotheses appear to explain elements of the ritual more adequately and comprehensively than the problematic hypotheses of Kühl et al. The overall hypothesis developed here is that the ritual manifests and expresses a down-regulation of retaliatory aggression for unfairness, inequity or harm; inhibition of redirecting such aggression onto an innocent scapegoat; and engagement in a resilient alternative creative performance.

4.4.1  Relevant Chimpanzee Aggression Display Variables

Goodall (1986) described three stages of the chimpanzee aggression display: (a) threat, (b) charging, and (c) attack. (I italicize elements that occur in the stone accumulation ritual.) The threat display itself escalates from chin-up-jerk; arm-raise threat; hitting toward with an overarm throwing movement; flapping slaps in the air; hunching; throwing rocks, branches or other loose objects; hitting toward with stick or branch; and bipedal swagger, swaying from foot to foot, arms akimbo; to running upright toward an opponent, often waving the arms in the air. Vocalizations may escalate from soft bark, bark and waa-bark to screaming. More intense threats are accompanied by hair bristling (piloerection), which makes the displayer appear larger and serves as a nondirected threat, alerting other chimpanzees to the aggressive mood of
the individual concerned. During a threat display two or more threat behaviors may be combined. It is important to note that in everyday settings, hair bristling is pluri-signifying. It may be a component of the threat stage of an aggression display, but it also occurs in other contexts: social excitement, courtship display, and signaling others to join in a baboon hunt (Goodall, 1986: 122, 286, 315–316, 334). The second aggression display stage, charging, may include running along the ground; slapping and stamping the ground; scratching leaves; swaying, tearing or dragging branches or other vegetation; flailing with branch or stick; rolling or throwing rocks to intimidate an opponent; and tree drumming, leaping up to hit and stamp at a tree and leaping from tree branch to tree branch. Tree drumming is hitting or kicking tree trunks, especially buttresses; it may include leaping up to hit or stamp at a tree, grabbing a buttress with the hands, stamping on it with the feet, or kicking backwards with the feet; individuals may kick or stamp using one or two feet and hit with one or two palms or fists. Drumming may be accompanied by pant-hoots or be without vocalization. Drumming patterns vary from one individual to another. There are particular drumming trees that trigger a drumming display, and members of a traveling group are likely to drum one after the other as they pass (Goodall, 1989; Nishida, Kano, Goodall, McGrew, & Nakamura, 1999). West African chimpanzees, especially alpha males, drum on buttresses to indicate their location to other group members and to inform them of the direction in which the drummer is progressing in order to lead the group that way; using spatial and numerical combinations of sequences they communicate intentions about travel movements and changes of direction (Boesch, 1991). Thus, a key element of the stone accumulation ritual, tree-drumming, is pluri-signifying. Agreeing with Bygott (1974), Goodall distinguishes two types of charging displays.
A non-vocal charging display may lead to stage three, attack: assaulting a target. This may include hitting, kicking, stamping on, dragging, slamming, scratching, biting, whipping or clubbing with branch or stick, directly hitting with branch, stick or rocks, or using them as missiles (targets of the latter include chimpanzees, baboons, humans, bush pigs and predatory felines). Attack does not occur in the stone accumulation ritual. A vocal display, accompanied by pant-hoots and noisier display elements, such as slapping and stamping the ground and tree drumming, is typically not directed at any particular individual. “If an attack does occur it is usually of the hit-in-passing variety” (316). Vocal displays are not only a component of a vocal aggression display; they are also common during reunions, food excitement and when an individual has been frustrated in obtaining a desired goal (Goodall, 1986: 314–317, 549–557). Thus, a vocal display, too, is pluri-signifying.

4.4.2  Relevant Chimpanzee Vocal Communication Variables

The pant-hoot vocalization is a prominent feature of the chimpanzee stone accumulation ritual. Kühl et al. considered it an element in an aggression display. I suggest that Goodall’s categorization of the chimpanzee call vocalizations needs to be
considered. The pant-hoot has multiple significations. There are four kinds of pant-hoots. Three are everyday signals: (1) the ‘inquiring pant-hoot’ (used by a traveling chimpanzee arriving on a ridge or in a valley to request a response from any other chimpanzee in the area so the inquirer learns who is there, friend or foe; it is followed by a pause, listening for a response); (2) the ‘arrival pant-hoot’ (roar-like or with screams, announces arrival at a good food source, calling others to it, and also proclaims identity, ‘I am here’, when joining a group); and (3) the ‘roar pant-hoot’ during high arousal, always present during the vocal charging display and common during reunions, food excitement and when an individual has been frustrated in obtaining a desired goal. The fourth, (4) the ‘spontaneous pant-hoot’, is uttered by peacefully feeding or even (less often) resting individuals, and has a ‘melodious, almost singing’ quality in which the callers do not appear to be motivated by a need for information, for instance among chimps observing a beautiful sunset from their sleep nests prior to going to sleep (Goodall, 1986: 134–135). The Kühl hypotheses foregrounded the aggressive roar pant-hoot and did not consider the possible alternatives. Half the recordings of pant-hoots ended in scream calls. Kühl et al. did not interpret the role of the scream call. As noted earlier, it can occur as one element of a threat display. Goodall noted that scream calls also signify social fear, anger, social excitement or food enjoyment; they may be modulated into the ‘SOS scream’ made by an individual severely threatened or attacked as a call for help from an absent ally, and qua frustration have a modification in temper tantrum screaming (1986: 127, 135). de Waal (2019: 150) notes that the chimpanzees’ loudest vocalization is the scream, which expresses fear and anger, and is typically aimed against a high-ranking individual.
I consider the scream call aimed at a high-ranking individual, not directly present, a key for any explanatory interpretation of the stone accumulation ritual. In short, I suggest that the stone accumulation ritual is a remarkable ethological ritualization of aggression, mating, attachment and other instinctive behaviors, which results in recombining elements from dominance threat and charging displays; a non-attack, vocal aggressive display aimed at an abusive, high-ranking individual; venting frustration at not obtaining a desired goal; greeting and courtship; social excitement at a reunion; excitement at finding food or going on a hunt; and tree-drumming to signal one’s identity, location, change of direction and capacity to lead. Remarkably, while the pre-ritualized component behaviors all occur in social dyadic or group interactions, the emergent ritual is a performance that is conducted by a single individual (with the exception of an instance in which a mother carries an infant) and not by a group. Yes, some videos show a few other chimps roaming in the surrounding wooded area, but in the ritual space, the dramatic behaviors are performed by one individual basically in solitude. A special tree is distant from the collective group and its power structure. No companion, opponent or other group members are present in the space, and this is highly non-ordinary in the social world of chimpanzees.


4.4.3  Moral Context for the Chimpanzee Stone Throwing/Caching Ritual

To further elaborate an explanatory and interpretive hypothesis for the chimpanzee stone accumulation ritual, I propose that it is situated in a fourfold combinatoric matrix comprising (a) the ritual itself and three other cross-reverberating behavioral contexts: (b) the high-ranking male aggression display; (c) inhibiting a stone throwing aggression display and making a concealment for throwing-stones; and (d) redirection of a subordinate’s retaliatory aggression and frustration rage into scapegoating an innocent. I allocate the first context constituent to the everyday dominant male aggression display, including its stages of threat, non-vocal charging and attack. For the second constituent, I posit ordinary chimpanzee scapegoating behaviors. de Waal describes chimpanzee retaliation (revenge) behaviors (2019: 132–135). Chimpanzees also engage in scapegoating the innocent. Goodall (1986) observed that on many occasions a subordinate individual frustrated by a superior, for example a superior who refuses to share food, became enraged but was powerless to retaliate and dared not express her or his feelings. Some victims redirected their retaliatory aggression from attacking the higher-ranking individual to attacking a lower-ranking innocent bystander. In turn, that lower-ranking individual may attack another scapegoat. Chimpanzee males more often attacked females than other males, “partly because they are a less risky proposition and partly because there are more of them; [thus] females are more likely to become scapegoats in instances of redirected aggression” (342).
Goodall (323–324) notes that it can be “difficult to identify such an attack because it can be considerably displaced in time.” Rather than attacking a scapegoat, “a frustrated individual (usually male)…may perform a charging display, during which he may stamp on the ground or tree trunks…wave branches about, and perhaps throw rocks.” These are all aggressive patterns, “even though directed at inanimate objects.” This “usually reduces the displayer’s level of arousal…or…acts as an outlet for social tensions. A temper tantrum serves a similar purpose”. For the third context constituent, with respect to stone throwing, I suggest down-regulation of aggression in the context of intentional deception by concealment. Osvath and Karvonen (2012; Osvath, 2009) reported a high-ranking captive male chimpanzee hiding stone projectiles in concealments he made from hay, and in naturally occurring concealments, near the zoo visitors’ observation area, so that these intruders would approach close enough that he could strike them with stones before they got away. This demonstrates an ability to invent means for a future deception, in a “perceptually or contextually detached future”. They observe that the chimpanzee consistently combined two deceptive strategies: hiding projectiles and inhibiting dominance display behavior. Goodall (1986: 37–38, 571–582) summarizes multiple reports of chimpanzee intentional deception, including hiding food from superiors; deceiving them with false signals about where food is located; a male concealing a consortship by hiding his sexual display from an alpha male; females
during clandestine copulations inhibiting the copulation scream or squeal; and suppressing vocal sounds when near the periphery of their home range. With respect to how stone throwing occurs in the chimpanzee threat and charging displays and attack behaviors, I suggest this throwing-stone concealment behavior is remarkable in showing that chimpanzees are capable of inhibiting the supposedly ‘hard-wired’ everyday patterning of the aggression display itself, and simultaneously of dissociating the association of stone throwing with the threat, charging and attack stages of the aggression display by concealing throwing-stones. I take this aggression inhibition and stone concealment behavior as the third constituent in our contextual matrix. One, two, three; but where is the fourth? The fourth is the stone throwing/accumulation ritual. The following two-by-two matrix (Table 4.2) summarizes the four modulations of the aggression display, which provide a moral behavioral context for the stone accumulation ritual. The four contextual behaviors are relationally structured by two pairs of behavioral opposites: (a) throwing a stone at a target individual versus throwing a stone redirected at a non-target; and (b) extraversion versus introversion (inhibition). These undergo recombinations:

1. ‘Perform display (dominate)’, i.e., perform stages of aggression display behavior to dominate and subordinate another individual into submission;
2. ‘Perform display (redirect)’, i.e., displace one’s retaliatory aggression onto a scapegoat;
3. ‘Inhibit display (dominate)’, i.e., make concealments for throwing-stones to more effectively attack and dominate a future opponent;
4. ‘Inhibit display (redirect)’, i.e., neither retaliate against inequity or abuse, nor reactively scapegoat, nor deceptively inhibit aggressive attack until a later time, but rather both inhibit and redirect aggression into ritual cultural performance (culture-making via sound-making and cache-making).

Table 4.2  Stone throwing moral behavioral context matrix

Dominant’s aggression at target
  Extraversion: Aggression display: threat, non-vocal charge, attack, e.g., throw stones, hit. Fx(a) ‘Perform display (dominate)’
  Introversion: Make concealments for throwing-stones; deceptively suppress non-vocal aggression display. Fy(a) ‘Inhibit display (dominate)’

Subordinate’s retaliatory aggression
  Extraversion: Scapegoat: victim’s retaliatory aggression redirected and displaced onto a scapegoat to reduce arousal, e.g., throw stones at scapegoat; vocal charging display; throw tantrum. Fx(b) ‘Perform display (redirect)’
  Introversion: ‘Sublimate’: (a) HURL stone at ‘inanimate and concealing’ tree, reduce arousal; (b) BANG stone to evoke and release one’s social pain; (c) TOSS(CACHE) stone in tree hollow matrix, perform aggressive power visually and visibly. Fy(b) ‘Inhibit display (redirect)’

The four cells of this matrix may be conceptualized as polythetic components of a recombinatorial transformation field having the formula: Fx(a) : Fy(b) as Fx(b) : Fy(a). In this light the stone throwing/caching ritual enacts remarkably pluri-signifying vocal display behaviors; it simultaneously dis-places a charging aggression display, scapegoating behaviors, and the concealment of throwing-stones for future use, and re-directs aggressive energy into the ritual performance of novel, creative behaviors: HURL, BANG and TOSS(CACHE). The performance of CACHE itself creatively enacts a superposition of a concealment of throwing-stones in a tree hollow or buttress roots and a making-more-visible display of these very same throwing-stones by accumulating them into non-ordinary heaps.
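The transformation-field formula just given can also be set out as an analogy proportion. The following is my notational rendering only, not a formalism from Kühl et al. or from the original study; the subscripts x and y mark the extraversion/introversion columns of Table 4.2, and the arguments a and b its dominant/subordinate rows:

```latex
% Analogy-proportion reading of the recombinatorial transformation field
% (editor's rendering; symbols follow the F-labels of Table 4.2):
% the extraverted dominant display F_x(a) is to the introverted subordinate
% ritual F_y(b) as the extraverted subordinate scapegoating F_x(b) is to the
% introverted dominant concealment F_y(a).
F_{x}(a) \,:\, F_{y}(b) \;::\; F_{x}(b) \,:\, F_{y}(a)
```

Read this way, the ritual cell Fy(b) inverts both axes of the everyday aggression display Fx(a) at once, which is what allows it simultaneously to inhibit and to redirect aggression.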

4.4.4  Contextualizing the Stages of the Ritual

The Kühl et al. (2016) flow chart can be re-examined in the light of differentiating vocal from non-vocal aggression displays, the pluri-significations of pant-hoots and scream calls, and the fourfold contextual behaviors. In the first stage the ritual starts with picking up and handling a rock, after which the actor may perform one or more threat display behaviors: hair bristling (piloerection) (n = 48), swaying (n = 35), bipedal stance (n = 25) and leaf-clipping (n = 1). Most frequently reported is piloerection. As Goodall observed, hair-bristling (piloerection, frisson) does not occur only as a threat behavior and initiating element of a charging display; it occurs during social excitement, during the courtship display, in signaling for others to join in a hunt, and on seeing or hearing something strange or frightening. Thus, the hair-bristling frisson adds its polysemous evocations to the stone accumulation ritual. Piloerection also occurs in chimpanzee religious/spiritual behaviors, which involve encounters with birth, death, consortship and awe and wonder before special events of nature, as in the ‘rain dance’ (Harrod, 2014a). The rain dance is a ritual that incorporates coordinated or parallel slow and/or rapid charges, swaying, bipedal stance, and other display elements, such as ground slap, buttress-beat, branch drag, and pant-hoots (Whiten et al., 1999: supplementary information). The other two triggered behaviors are swaying (a bipedal upright posture in which an individual sways side-to-side from foot to foot) and bipedal stance. A bipedal stance behavior is also termed bipedal hunch (standing, shoulders hunched up, arms slightly akimbo), which when combined with sway constitutes a bipedal swagger. The bipedal hunch and swagger occur in aggression and courtship displays and in greeting (Goodall, 1989; Nishida et al., 1999: 158, 174–175).
Here again the stone accumulation ritual appears to simultaneously combine elements of aggression, courtship and greeting displays.


A rare instance of leaf clipping at the start of the stone accumulation ritual is reported. In the chimpanzee leaf-clip behavior an individual, using one hand, pulls a leaf repeatedly between lips or teeth, producing a conspicuous sound that attracts attention from a prospective sex partner. It is one of the most common courtship display patterns in Mahale chimpanzees. It signals an act of frustration at Bossou and Taï, occurs during play at Bossou, and occurs as a prelude to male drumming behavior at Taï (Nishida et al., 1999: 148). Leaf clipping is not reported as an element in a chimpanzee aggression display. Though Kühl et al. (2016) observed only one instance, I suggest it supports an interpretation of the stone accumulation ritual as an ethological ritualization of conflicted aggressive and sexual instincts, deriving elements from everyday aggression and courtship displays. Stage 2 of the ritual is the introductory and build-up phase of the pant-hoot vocalization, which occurred in 50 of 50 audio recordings. Given the fourfold matrix, a key to interpreting the ritual is what vocalizations accompany it, in other words, what the chimpanzees are vocally signifying during the ritual. As earlier noted, a vocal charging display is accompanied by pant-hoots and noisier elements and, unlike a non-vocal display, is not directed at dominating an opponent and does not progress to attack. Vocal displays also occur during reunions, food excitement and when an individual has been frustrated in obtaining a desired goal, and may be like a temper tantrum. One or more of the four types of pant-hoots are possibly being evoked in the ritual, which, if so, would signify an inquiring ‘who is here’; arriving and announcing affirmatively ‘I am here’; roaring to express aggressive power and excitement at reunion and nurturance; and finally, being at peace in a special place like a sleep nest. Stage 3, ‘throw stone’, also needs re-examination.
There were 63 incidents: HURL (n = 36), followed by BANG (n = 15) and TOSS (= CACHE) (n = 12). The most frequent behavior is HURL (throwing stones at the tree). While one might interpret HURL as an element from the everyday aggressive threat or charging display, which pertains to enforcing status, as noted earlier, when chimpanzees normally throw stones at an inanimate object it is uniquely a component of retaliatory aggression aimed at an abusive higher-ranking individual and redirected at a scapegoat. If so, HURL at a special tree supports interpreting the stone accumulation ritual as a ritualized moral-ethical alternative to direct retaliation or to redirected retaliation in scapegoating. It seems homologous to a human creative performance. As a mime artist observed (personal communication, 2018), in the video (Supplementary Movie 5) showing an adult male whose right arm is only a stump, he first appears to be rehearsing his move, or more precisely ‘pre-experiencing’ it, and then arouses and expresses an energetic display of his power by leaping (drumming) against and hurling a stone at the buttress root of a tree. (On pre-experience of future tool use, forethought and self-control in great apes, see Osvath & Osvath, 2008.) Not mentioned by Kühl et al., in the video of an adult female (Supplementary Movie 1), she adds a remarkable, and apparently unique, motif to the ritual. In making her approach to the tree to perform HURL, her right rear foot steps on three stones, one after the other, not on the ground. Speculating, this might express both the inhibition of aggression with concealment and the theme of inhibition of the redirection of aggression onto a scapegoat.


The less frequent and less ubiquitous BANG (repeatedly hitting a stone against a tree) would be a ritualization which sequentially combines the stone throwing element of an aggressive vocal charging display (against abusive dominant or scapegoat) and tree drumming, in which only hands or feet are used to make sounds, and converts them into a ritual performance in which a percussion instrument, a stone, is used to drum the tree, making a repetitive resonant low-frequency sound. TOSS, which, as noted earlier, in the two online videos produces in one instance a moderate sound and in the other apparently little or no sound, is the most non-ordinary behavior of the three. It naturally struck the primatologists as the most remarkable and puzzling. Like HURL and BANG, TOSS is inexplicable if simply reduced to an element in an aggressive threat or charging display. In the light of the fourfold moral behavioral matrix, TOSS also performs a ritual alternative to retaliatory aggression and scapegoating. Though admittedly more speculative given only one report of weapon concealment, which was in a zoo, I further hypothesize that TOSS is a ritualization alternative to concealing stones for a future dominance attack, converting such concealment into depositing stones in the partial concealment of a tree hollow or buttress crevice. As such it may be interpreted as a culturally mimetic transference or proto-metaphor: TOSS places stones of frustration rage in an external container that both hides them and makes visible a display of power in response to unfair share or abuse. The Kühl flow chart noted that after HURL, BANG and TOSS, the ritual ends with a pant-hoot climax with scream elements (n = 24) or drumming on trees with hands or feet (n = 21). In at least 18 cases neither occurs. The authors provided no explanation of these behaviors. Perhaps drumming with hands or feet, and no longer with stones, returns the performer to the everyday world.
If so, the pant-hoot climax with screams is the culmination of the ritual. Half the recordings of pant-hoots ended in scream calls. The pant-hoot climax with screams can be viewed as a ritualization element syntactically combining the pant-hoot and the scream call with each of their polysemous and multiple affective evocations. I suggest the culminating pant-hoot-scream-call corresponds to what the entire ritual is about. As Goodall (1986: 127, 135) noted, scream calls can signify social fear, anger, social excitement or food enjoyment, and may be modulated into the ‘SOS scream’ made by an individual severely threatened or attacked as a call for help from an absent ally, and qua frustration have a modification in temper tantrum screaming. de Waal (2019: 150) notes that the chimpanzees’ loudest vocalization is the scream, which expresses fear and anger, and is typically aimed against a high-ranking individual. If so, the scream call element further supports interpreting the set of stone accumulation behaviors as an ethological ritualization inhibiting, redirecting and transmuting retaliatory aggression against a higher-ranking male’s unfairness or abuse into a ritual performance of an alternative ethical-moral behavior, down-regulating aggression and affirming moral character. The syntactically combined pant-hoot appears to express something like the affirmation ‘I am here; in this space of playful freedom, displaying my power and evoking affective care and nurturance, beyond social defeat’, the human analog of which would be a creative performance of self-identity and self-assertion in resistance to unfair share, oppression and abuse.

82

J. B. Harrod

Finally, the flowchart lists three final behaviors: 'Wait and listen' (n = 14), 'Display' (n = 14) and 'Continue traveling' (n = 18). These are everyday behaviors and seem to mark a transitional return to the everyday world. (For the concept of structuralist recombinatorial transformation fields, see Harrod, 2018.)

4.5 Discussion

The above results lead to the hypothesis that the chimpanzee stone accumulation ritual at special trees manifests and expresses a down-regulation of retaliatory aggression for unfairness, inequity or harm; an inhibition of redirecting such aggression by scapegoating an innocent; and engagement in a resilient alternative cultural performance. If this hypothesis were to be supported by verified contextual evidence from the field, the chimpanzee stone accumulation ritual at special trees would provide the first evidence for a non-human primate homologue to what in human evolution can be termed 'deep moral behavior'. Whiten and Erdal (2012) proposed a common hominin origin for the evolution of the human 'socio-cognitive' niche, with five principal components: forms of cooperation, egalitarianism, theory of mind, language and cultural transmission; in relation to each of these components, they review how chimpanzees, admittedly to a markedly different degree, have a socio-cognitive niche with homologous components. Harrod (2014a) added a categorized set of religious behaviors to the human and chimpanzee niche, and Harrod (2014b) added palaeoart, descending at least from our Oldowan ancestors. With respect to egalitarianism, Whiten and Erdal note that although chimpanzees have a strongly hierarchical power structure, they do sometimes share meat when begged, and abusive alpha males sometimes face counter-dominance social coalitions. If supported by future field research, our hypothesis for explaining and interpreting the stone accumulation ritual would add 'deep moral behavior' to the chimpanzee as well as the human socio-cognitive niche and, unless demonstrated to be an independent, convergent evolution, to the socio-cognitive niche of the common hominin ancestor of the human and chimpanzee niches.
To characterize the human analogues of deep moral behavior, I briefly suggest considering the perspective of Elias Canetti (1962), who contrasts the abusive, predatory behavior of crowds, packs and hierarchical social collectives in the long history of human suffering with the transformative, embodied, empathic "presentiment" responses to animal and human suffering among southern African Bushmen (337–342). René Girard (1986) provides a history of the role of religious moral norms that counter scapegoating behavior, and Sylvia Perera (1986), a depth psychotherapy for overcoming the scapegoat complex. In hypothesizing the chimpanzee ritual as an ethological ritualization, I suggested a degree of analogy to the human behavior described by the Freudian psychoanalytic concept of sublimation (Gay, 1992), which has its precursor in Hegel's philosophical concept aufheben: to pick up, lift up, keep safe, rescind, neutralize. I also suggest, though to a markedly limited degree, that the stone accumulation ritual space and the signifying

4  The Chimpanzee Stone Accumulation Ritual and the Evolution of Moral Behavior

83

performance of depositing stones in tree hollows and buttress root cavities are analogous to the object-relations notions of 'transitional object', 'holding environment' and 'playing' (Winnicott, 1951, 1971). They are also analogous to the art-space and therapy space as a 'matrixial borderspace, a threshold, where I and non-I part selves borderlink' and where tears in its web call for a response, a performance attuned to vibrations with deep resonant frequencies, like heartbeats and breathing, and which is both proto-aesthetic and proto-ethical (Ettinger, 2006, 2011, 2016). In anthropological terms, they are analogous to 'anti-structure versus structure' rites and to the 'liminal (threshold) phase' of a ritual passage or a 'liminoid space', like a theater for drama performance (Turner, 1969, 1974). In a final characterization of human deep moral behavior, the religious studies scholar Charles Long (1986) picks up the motif of stones. He speaks of the opacity of the suffering of the marginalized and oppressed as like that of stone: hard, mute and unintelligible. "The oppressed have faced the hardness of life. The world has often appeared as a stone" (197). "The musical phenomena called the blues is another expression of the same consciousness…[a] consciousness that has experienced the 'hardness' of life, whether the form of that reality is the slave system, God, or simply life itself. It is from such a consciousness that the power to resist and yet maintain one's humanity has emerged. … It is…an example of what Gaston Bachelard described in Hegelian language as the lithic imagination" (178). "[L]ithic imagination [is] that mode of consciousness which in confronting reality in this mode formed a will in opposition. This hardness of life was not the oppressor; the oppressor was the occasion for the experience but not the datum of the experience itself. The hardness of life or of reality was the experience of the meaning of the oppressed's own identity as opaque" (197).
“It is the power to be, to understand, to know even in the worst historical circumstances, and it may often reveal a clearer insight into significant meaning of the human venture than the power possessed by the oppressors” (195). The heap of stones at the tree increases over time. It is a measure of our moral evolution.

4.6 Conclusion

If the proposed explanatory and interpretive hypothesis is supported by future primatology field research verifying the pre- and post-ritual behavioral contexts, the chimpanzee stone accumulation ritual at special trees would provide the first evidence for a non-human primate homologue to the evolution of human 'deep moral behavior': the creative resilience to injustice, unfairness, inequity and abuse, and the resistance to redirecting retaliatory aggression into scapegoating the innocent. This has implications for homologous behaviors descending from the common ancestor of humans and chimpanzees, ca. 7–12 million years ago, and for hypothesizing stages in the evolution of morality and ethics in humans and other species. With respect to my opening remark about the urgency for humans to respond to the conservation and preservation of chimpanzees facing imminent extinction, I
conclude that, in stimulating and teaching this author an understanding of deep moral behavior and a way to express it, chimpanzees have established a claim on our responsibility: a mutual moral obligation for the conservation and protection of both our species.

References

Bell, C. (1992). Ritual theory, ritual practice. New York: Oxford University Press.
Bell, C. (1997). Ritual: Perspectives and dimensions. New York: Oxford University Press.
Bloch, M. (1974). Symbols, song, dance and features of articulation: Is religion an extreme form of traditional authority? Archives Européennes de Sociologie/European Journal of Sociology, 15(1), 55–81. https://doi.org/10.1017/s0003975600002824
Boesch, C. (1991). Symbolic communication in wild chimpanzees? Human Evolution, 6(1), 81–89. https://doi.org/10.1007/BF02435610
Boesch, C., & Boesch-Achermann, H. (2000). The chimpanzees of the Taï Forest: Behavioural ecology and evolution. New York: Oxford University Press.
Boyer, P., & Liénard, P. (2006). Why ritualized behavior? Precaution systems and action parsing in developmental, pathological and cultural rituals. Behavioral and Brain Sciences, 29(6), 595–613. https://doi.org/10.1017/S0140525X06009332
Bräuer, J., Call, J., & Tomasello, M. (2005). All great ape species follow gaze to distant locations and around barriers. Journal of Comparative Psychology, 119(2), 145–154. https://doi.org/10.1037/0735-7036.119.2.145
Burkart, J. M., Brügger, R. K., & van Schaik, C. P. (2018). Evolutionary origins of morality: Insights from non-human primates. Frontiers in Sociology, 3, 17. https://doi.org/10.3389/fsoc.2018.00017
Bygott, J. D. (1974). Agonistic behaviour and dominance in wild chimpanzees. Unpublished doctoral dissertation, Cambridge University.
Call, J., Hare, B., Carpenter, M., & Tomasello, M. (2004). 'Unwilling' versus 'unable': Chimpanzees' understanding of human intentional action. Developmental Science, 7(4), 488–498. https://doi.org/10.1111/j.1467-7687.2004.00368.x
Call, J., & Tomasello, M. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5), 187–192. https://doi.org/10.1016/j.tics.2008.02.010
Canetti, E. (1962). Crowds and power. New York: Farrar, Straus and Giroux.
Cromie, W. J. (1999). Chimpanzee behaviors surprise scientists. Harvard University Gazette, June 17.
de Waal, F. (1996). Good natured: The origins of right and wrong in humans and other animals. Cambridge, MA: Harvard University Press.
de Waal, F. (2006). Primates and philosophers: How morality evolved. Princeton, NJ: Princeton University Press.
de Waal, F. (2019). Mama's last hug: Animal emotions and what they tell us about ourselves. New York: W.W. Norton & Company.
Ettinger, B. L. (2006). Matrixial trans-subjectivity. Theory, Culture & Society, 23(2–3), 218–222. https://doi.org/10.1177/026327640602300247
Ettinger, B. L. (2011). Uncanny awe, uncanny compassion and matrixial transjectivity beyond uncanny anxiety. French Literature Series, 38(1), 1–30.
Ettinger, B. L. (2016). Art as the transport-station of trauma. In Y. Ataria, D. Gurevitz, H. Pedaya, & Y. Neria (Eds.), Interdisciplinary handbook of trauma and culture (pp. 151–160). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-29404-9_10
Flemming, T. M., Beran, M. J., Thompson, R. K. R., Kleider, H. M., & Washburn, D. A. (2008). What meaning means for same and different: Analogical reasoning in humans (Homo sapiens), chimpanzees (Pan troglodytes), and rhesus monkeys (Macaca mulatta). Journal of Comparative Psychology, 122(2), 176–185. https://doi.org/10.1037/0735-7036.122.2.176
Gay, V. P. (1992). Freud on sublimation: Reconsiderations. Albany, NY: SUNY Press.
Girard, R. (1986). The scapegoat (Y. Freccero, Trans.). Baltimore: Johns Hopkins University Press.
Goodall, J. (1986). The chimpanzees of Gombe: Patterns of behavior. Cambridge, MA: Harvard University Press.
Goodall, J. (1989). Glossary of chimpanzee behaviors. Tucson, AZ: Jane Goodall Institute.
Harrod, J. B. (2011). A trans-species definition of religion. Journal for the Study of Religion, Nature and Culture, 5(3), 327–353. https://doi.org/10.1558/jsrnc.v5i3.32
Harrod, J. B. (2014a). The case for chimpanzee religion. Journal for the Study of Religion, Nature and Culture, 8(1), 8–45. https://doi.org/10.1558/jsrnc.v8i1.8
Harrod, J. B. (2014b). Palaeoart at two million years ago? A review of the evidence. Arts, 3(1), 135–155. https://doi.org/10.3390/arts3010135
Harrod, J. B. (2018). A revised Weil-Lévi-Strauss transformation formula for combinatorial processing in conceptual-value space. Sign Systems Studies, 46(2/3), 255–281. https://doi.org/10.12697/SSS.2018.46.2-3.03
Herrmann, E., Call, J., Hernández-Lloreda, M. V., Hare, B., & Tomasello, M. (2007). Humans have evolved specialized skills of social cognition: The cultural intelligence hypothesis. Science, 317, 1360–1366. https://doi.org/10.1126/science.1146282
Hirata, S., Yamakoshi, G., Fujita, S., Ohashi, G., & Matsuzawa, T. (2001). Capturing and toying with hyraxes (Dendrohyrax dorsalis) by wild chimpanzees (Pan troglodytes) at Bossou, Guinea. American Journal of Primatology, 53(2), 93–97. https://doi.org/10.1002/1098-2345(200102)53:23.0.CO;2-X
Hopper, L. M., Schapiro, S. J., Lambeth, S. P., & Brosnan, S. F. (2011). Chimpanzees' socially maintained food preferences indicate both conservatism and conformity. Animal Behaviour, 81(6), 1195–1202. https://doi.org/10.1016/j.anbehav.2011.03.002
Kehoe, L. (2016). Mysterious chimpanzee behavior may be evidence of "sacred" rituals. The Conversation, February 29, 2016; republished in Scientific American. https://theconversation.com/mysterious-new-behaviour-found-in-our-closest-living-relatives-55512
Kühl, H. S., Boesch, C., Kulik, L., Haas, F., Arandjelovic, M., Dieguez, P., et al. (2019). Human impact erodes chimpanzee behavioral diversity. Science, 363(6434), 1453–1455. https://doi.org/10.1126/science.aau4532
Kühl, H. S., Kalan, A. K., Arandjelovic, M., Aubert, F., D'Auvergne, L., Goedmakers, A., et al. (2016). Chimpanzee accumulative stone throwing. Scientific Reports, 6, 22219. https://doi.org/10.1038/srep22219
Liénard, P., & Boyer, P. (2006). Whence collective rituals? A cultural selection model of ritualized behavior. American Anthropologist, 108(4), 814–827. https://doi.org/10.1525/aa.2006.108.4.814
Long, C. H. (1986). Significations: Signs, symbols, and images in the interpretation of religion. Philadelphia: Fortress Press.
Lorenz, K. (1963). On aggression. New York: Bantam.
Lyn, H., Franks, B., & Savage-Rumbaugh, E. S. (2008). Precursors of morality in the use of the symbols "good" and "bad" in two bonobos (Pan paniscus) and a chimpanzee (Pan troglodytes). Language & Communication, 28(3), 213–224. https://doi.org/10.1016/j.langcom.2008.01.006
Nishida, T., Kano, T., Goodall, J., McGrew, W. C., & Nakamura, M. (1999). Ethogram and ethnography of Mahale chimpanzees. Anthropological Science, 107(2), 141–188. https://doi.org/10.1537/ase.107.141
Notman, H., & Rendall, D. (2005). Contextual variation in chimpanzee pant hoots and its implications for referential communication. Animal Behaviour, 70, 177–190. https://doi.org/10.1016/j.anbehav.2004.08.024
Osvath, M. (2009). Spontaneous planning for future stone throwing by a male chimpanzee. Current Biology, 19(5), R190–R191. https://doi.org/10.1016/j.cub.2009.01.010
Osvath, M., & Karvonen, E. (2012). Spontaneous innovation for future deception in a male chimpanzee. PLoS One, 7(5), e36782. https://doi.org/10.1371/journal.pone.0036782
Osvath, M., & Osvath, H. (2008). Chimpanzee (Pan troglodytes) and orangutan (Pongo abelii) forethought: Self-control and pre-experience in the face of future tool use. Animal Cognition, 11(4), 661–674.
Perera, S. B. (1986). The scapegoat complex: Toward a mythology of shadow and guilt. Toronto, ON: Inner City Books.
Pika, S., & Mitani, J. (2006). Referential gestural communication in wild chimpanzees (Pan troglodytes). Current Biology, 16(6), R191–R192. https://doi.org/10.1016/j.cub.2006.02.037
Povinelli, D. J., Bering, J. M., & Giambrone, S. (2000). Toward a science of other minds: Escaping the argument by analogy. Cognitive Science, 24(3), 509–541. https://doi.org/10.1016/S0364-0213(00)00023-9
Rest, J. R. (1984). The major components of morality. In W. M. Kurtines & J. L. Gewirtz (Eds.), Morality, moral behavior, and moral development (pp. 24–38). New York: Wiley.
Rilling, J. K., Barks, S. K., Parr, L. A., Preuss, T. M., Faber, T. L., Pagnoni, G., et al. (2007). A comparison of resting-state brain activity in humans and chimpanzees. Proceedings of the National Academy of Sciences USA, 104(43), 17146–17151. https://doi.org/10.1073/pnas.0705132104
Russon, A. E. (2004). Great ape cognitive systems. In A. E. Russon & D. R. Begun (Eds.), The evolution of great ape intelligence (pp. 76–100). Cambridge, UK: Cambridge University Press.
Savage-Rumbaugh, S., & Lewin, R. (1994). Kanzi: The ape at the brink of the human mind. New York: Wiley.
Slocombe, K. E., & Zuberbühler, K. (2005). Functionally referential communication in a chimpanzee. Current Biology, 15, 1779–1784. https://doi.org/10.1016/j.cub.2005.08.068
Slocombe, K. E., & Zuberbühler, K. (2006). Functionally referential calls in chimpanzees. Abstract #103, XXI Congress of the International Primatological Society. http://www.asp.org/ips/IPS2006/abstractDisplay.cfm?abstractID=1476&confEventID=1308. Accessed 8 May 2007.
Swaner, L. E. (2005). Educating for personal and social responsibility: A review of the literature. Liberal Education, 91(3), 14–21. https://eric.ed.gov/?id=EJ720379
Tomasello, M., Call, J., & Hare, B. (2003). Chimpanzees understand psychological states – the question is which ones and to what extent. Trends in Cognitive Sciences, 7(4), 153–156. https://doi.org/10.1016/S1364-6613(03)00035-4
Turner, V. (1974). Liminal to liminoid, in play, flow, and ritual: An essay in comparative symbology. Rice Institute Pamphlet-Rice University Studies, 60(3), 53–92.
Turner, V. W. (1969). The ritual process: Structure and anti-structure. London: Transaction Publishers.
Whiten, A. (2005). The second inheritance system of chimpanzees and humans. Nature, 437(7055), 52–55. https://doi.org/10.1038/nature04023
Whiten, A., & Erdal, D. (2012). The human socio-cognitive niche and its evolutionary origins. Philosophical Transactions of the Royal Society B: Biological Sciences, 367, 2119–2129. https://doi.org/10.1098/rstb.2012.0114
Whiten, A., Goodall, J., McGrew, W. C., Nishida, T., Reynolds, V., Sugiyama, Y., et al. (1999). Cultures in chimpanzees. Nature, 399(6737), 682–685. https://doi.org/10.1038/21415
Whiten, A., Horner, V., & de Waal, F. (2005). Conformity to cultural norms of tool use in chimpanzees. Nature, 437(7059), 737–740. https://doi.org/10.1038/nature04047
Whiten, A., McGuigan, N., Marshall-Pescini, S., & Hopper, L. M. (2009). Emulation, imitation, over-imitation and the scope of culture for child and chimpanzee. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1528), 2417–2428. https://doi.org/10.1098/rstb.2009.0069
Whiten, A., & Mesoudi, A. (2008). Establishing an experimental science of culture: Animal social diffusion experiments. Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1509), 3477–3488. https://doi.org/10.1098/rstb.2008.0134
Wilson, E. O. (1980). Sociobiology: The abridged edition. Cambridge, MA: Harvard University Press.
Winnicott, D. W. (1951). Transitional objects and transitional phenomena – a study of the first not-me possession. International Journal of Psycho-Analysis, 34, 89–97.
Winnicott, D. W. (1971). Playing and reality. London: Tavistock.

Part II

The Evolution of Moral Cognition

Chapter 5

Morality as an Evolutionary Exaptation

Marcus Arvan

Abstract  The dominant theory of the evolution of moral cognition across a variety of fields is that moral cognition is a biological adaptation to foster social cooperation. This chapter argues, to the contrary, that moral cognition is likely an evolutionary exaptation: a form of cognition where neurobiological capacities selected for in our evolutionary history for a variety of different reasons—many unrelated to social cooperation—were put to a new, prosocial use after the fact through individual rationality, learning, and the development and transmission of social norms. This chapter begins with a brief overview of the emerging behavioral neuroscience of moral cognition. It then outlines a novel theory of moral cognition that I have previously argued explains these findings better than alternatives. Finally, it shows how the evidence for this theory of moral cognition and human evolutionary history together suggest that moral cognition is likely not a biological adaptation. Instead, like reading sheet music or riding a bicycle, moral cognition is something that individuals learn to do—in this case, in response to sociocultural norms created in our ancestral history and passed down through the ages to enable cooperative living.

Keywords  Adaptation · Cognition · Ethics · Evolution · Exaptation · Fairness · Mental time-travel · Morality-as-cooperation · Moral foundations theory · Neuroscience · Other-perspective-taking · Prudence

M. Arvan (*)
The University of Tampa, Tampa, FL, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
J. De Smedt, H. De Cruz (eds.), Empirically Engaged Evolutionary Ethics, Synthese Library 437, https://doi.org/10.1007/978-3-030-68802-8_5

What did moral cognition evolve for—that is, what is its evolutionary function? The dominant answer to this question across anthropology (Curry, 2016; Cosmides & Tooby, 1992; Henrich & Henrich, 2007), evolutionary biology (Alexander, 1987; de Waal, 2006), philosophy (Carruthers & James, 2008; Joyce, 2007, chapter 6; Kitcher, 1998; Kitcher, 2005; Prinz, 2007, p. 185; Sinclair, 2012, p. 14; Sterelny & Fraser, 2016; Wisdom, 2017), and psychology (Casebeer, 2003; Greene, 2015; Tomasello & Vaish, 2013) is that moral cognition is a biological adaptation to foster social cooperation. This chapter argues, to the contrary, that moral cognition is likely an evolutionary exaptation (Gould, 1991): a form of cognition where
neurobiological capacities selected for in our evolutionary history for a variety of different reasons—many unrelated to social cooperation—were put to a new, prosocial use after the fact through individual rationality, learning, and the development and transmission of social norms. My argument has three steps. First, I provide a brief overview of the emerging behavioral neuroscience of moral cognition. I then outline a theory of moral cognition that I have argued explains these findings better than alternatives (Arvan, 2020). Finally, I demonstrate how the evidence for this theory of moral cognition and human evolutionary history together suggest that moral cognition is likely not a biological adaptation. Instead, like reading sheet music or riding a bicycle, moral cognition is something that individuals learn to do—in this case, in response to sociocultural norms created in our ancestral history and passed down through the ages to enable cooperative living. This chapter thus aims to set evolutionary ethics on a new path, identifying the evolutionary function of moral cognition with a complex interplay between neurobiological and cultural evolution.

5.1 How to Do Evolutionary Ethics: Four Sequential Questions

Evolutionary ethicists standardly use the method of telling plausible evolutionary stories of how capacities seemingly involved in moral cognition—such as altruism (Kitcher, 1998), caring for others (Churchland, 2011), 'moral emotions' such as empathy, spite, shame, and guilt (Frank, 1988), 'universal human values' (Curry, 2016; Haidt, 2012; Haidt & Joseph, 2004), or particular judgment-types such as moral beliefs (May, 2018, chapter 3)—were likely selected for in our ancestral history. Evolutionary ethicists thus generally assume that they have a clear enough idea of what moral cognition involves (e.g. altruism, particular emotions or values, etc.) in order to theorize properly about its evolutionary origins and function. However, this is a mistake. First, as Smyth (2017, p. 1127) argues:

The collection of practices, beliefs, and dispositions we call 'morality' is far more functionally complex than the standard story would have us believe. Morality may indeed reduce social tensions in certain contexts, but it may also inflame them in others, and it probably plays a variety of other distinct roles in human societies.

To take just one example, moral language and beliefs are often used in ways, such as moral grandstanding (Tosi & Warmke, 2016), that are conducive to group polarization—a phenomenon linked to less cooperative (and immoral) behaviors ranging from war to genocide and other forms of mass violence (Arvan, 2019). Second, there is a deeper problem. Meta-ethicists disagree substantially over what constitutes morality and by extension moral cognition. For example, Kantians hold that morality properly understood involves conformity to the Categorical Imperative—a principle that Kant argues is normatively binding due to transcendental freedom, not ‘moral emotions’, ‘universal human values’, or any empirical effects of morality, such as social cooperation (Kant, 1785, 4:387–4:392, 4:394;
Wood, 2008; cf. Korsgaard, 2008, 2009; Luco, 2016). Moral cognition for Kantians thus involves a very specific kind of reasoning: namely, cognizing (at least implicitly) the Categorical Imperative, and acting upon universalizable maxims that respect the ‘humanity’ of oneself and others (Kant, 1785, 4:421, 4:429). However, other metaethicists defend very different pictures of morality and moral cognition. For example, moral realists often argue that morality involves having and conforming to moral intuitions: immediate, non-inferential, and potentially affectively-laden (Haidt, 2001) judgments that X (an action, action-type, etc.) is right, wrong, good, or bad (Audi, 2015; Prichard, 1912; Ross, 1930). However, some apparently universal types of intuitions (involving norms of purity and respect for authority) may foster social cooperation in some contexts yet profoundly undermine it in others (Greene, 2013). For example, Hitler and the Nazis were obsessed with racial purity, regarding it as a moral imperative (Hitler, 1925). Yet this belief, along with the belief that Germans should respect Hitler’s absolute authority as Führer (Trueman, 2020), served to further genocide and World War II—immoral actions antithetical to social cooperation. Still other meta-ethicists argue that morality is reducible to prudence, that is, to what makes an individual’s life tend to go better over the course of life as a whole (Aristotle, 1984, Book II sections 6–9, Book IV sections 5, 11–13, and Book X; Arvan, 2016, 2020). Yet, as we will see, if this is correct, then moral cognition fundamentally involves long-term planning capacities that may be used to foster social cooperation, but also to undermine it—including anti-social behaviors antithetical to social cooperation that would have plausibly increased the evolutionary fitness of our ancestors. 
Consequently, the theory that morality is a biological adaptation for social cooperation appears to rest on highly uncertain foundations. Not only are there many different metaethical theories of the nature of morality and moral cognition (see Arvan, 2016, pp. 30–35 and Arvan, 2020, pp. 106–118 for overviews of influential accounts); on at least some such theories, 'the function' of moral cognition may not be social cooperation, but rather something else entirely: long-term prudential planning, transcendental freedom, conformity to categorical normative reasons, and so on. Accordingly, in order to determine what the evolutionary function of moral cognition really is, we must be more careful. First, we must determine which metaethical criterion of morality has the best evidence in its favor. That criterion—if it exists—will enable us to determine with greater certainty what counts as moral (as opposed to non-moral) cognition. Second, once we determine what moral cognition is, we must establish which brain regions and associated cognitive capacities are involved in it.¹ Third, we need evidence of how the brain regions and capacities involved in moral cognition function within it: specifically, whether particular brain regions engage in moral cognition innately, or whether moral cognition is instead something we learn to do in response to features of our surrounding environment. Finally, we need evidence of how the brain regions and capacities involved in moral cognition were likely selected for in evolutionary history. Were particular brain regions involved in moral cognition selected as biological adaptations to foster social cooperation, or were they selected for in evolutionary history for very different reasons and only harnessed later (via learning and constructed sociocultural norms) for a prosocial use, qua exaptation? In sum, to determine the true evolutionary function of moral cognition, we must carefully address the following four issues in order:

1. What morality is, and by extension what counts as moral cognition.
2. Precisely which brain regions and associated capacities are implicated in moral cognition.
3. How they function in moral cognition.
4. How and why they were selected for in evolutionary history.

As we have seen, there is widespread metaethical disagreement about the very first issue: what morality is. One possible response to this problem is to try to provide such a broad definition of morality (as altruism, etc.) that the definition will seem uncontroversial (see Frank, 1988; Kitcher, 2011). However, we have seen that any such definition will offend the metaethical sensibilities of those who defend a narrower definition (e.g. as conformity to Kant's Categorical Imperative, etc.). The lesson here, I believe, is that in doing evolutionary ethics there is no way around taking controversial metaethical stances on the nature of morality and moral cognition. Accordingly, this will be my approach. I will outline an account of morality and moral cognition that I have defended and refined across two books (Arvan, 2016, 2020), and which I have argued to be the best explanation of a variety of empirical and normative data (Arvan, 2016, chapter 8; Arvan, 2020, chapter 4).

¹ I do not mean to endorse neuroessentialism here, the view that specific capacities are located in or identical to the functions of particular brain regions. I merely affirm scientific findings that particular brain regions are associated with particular cognitive functions.
I will then argue that on this theory, moral cognition is likely not a biological adaptation but rather a form of learned cognition that individuals engage in due to sociocultural norms originally created in our ancestral past on the basis of rational deliberation, which have been subsequently transmitted and enforced in stable cultures to this day.

5.2 Morality as Prudential Risk-Aversion

Across two books, I have argued that moral philosophy should be based on (A) empirical psychology and (B) a simple 'means-ends', instrumental theory of normativity according to which people rationally ought to adopt the best means for achieving their ends (Arvan, 2016, chapters 1–3; Arvan, 2020, chapters 2–3). The basic rationale for this approach is as follows. First, whereas traditional philosophical methods have been argued to face serious epistemic problems (Arvan, 2016, chapter 1; Brennan, 2010), empirical psychology promises demonstrable knowledge of human cognition (Arvan, 2020, chapters 1 and 4), recent replication issues aside (Maxwell, Lau, & Howard, 2015). Second, whereas other forms of normativity—such as categorical normativity (Kant, 1785), metaphysically primitive moral
reasons (Parfit, 2011; Scanlon, 1998, 2014), and so on—are deeply controversial, instrumental normativity enjoys virtually universal acceptance across academic theorizing (Anand, 1995; Hansson, 2005; Peterson, 2017) and everyday life (Arvan, 2016, pp. 24–27; Arvan, 2020, pp. 37–45, 66, 104, 132–133). The typical person recognizes that if X is their goal (or end), and Y is the best means to achieve X, then there is a clear sense (a 'means-end' sense) in which they ought to do Y. For example, students can recognize that if they want to perform well on an exam and studying hard is the best means to do well, then they ought to study hard. This is not only true of the typical person. Importantly, it is true even of individuals who may be skeptical of or otherwise insensitive to moral norms. For example, young children who misbehave, wanton criminals, and psychopaths all routinely recognize normative requirements of instrumental rationality. A thief or murderer can recognize that if they have committed a crime, want to avoid detection, and the best means to avoid detection is to take careful steps to hide evidence, then there is a clear sense in which they should take those steps. Similarly, even very young children can understand that if they want to stay out of trouble with their parents or other authority figures (such as schoolteachers), there are things they should and shouldn't do (such as not getting into schoolyard fights). Finally, instrumentalism and empirical psychology together promise a uniquely strong, unified, and parsimonious explanation of a wide variety of phenomena, normatively reducing morality to prudence and descriptively reducing moral cognition to prudential cognition (Arvan, 2020, chapters 2–4). Allow me to explain. My theory of prudence and morality begins with these assumptions, as well as the further assumption—also widely accepted in the philosophical literature (Bricker, 1980; Bruckner, 2003, pp.
34–35; Haybron, 2011, Section 1; Price, 2002; Pettigrew, 2020) and in ordinary life (Aristotle, 1984; Arvan, 2020, pp. 27–28)— that because human beings normally want to live happy lives, prudence (for humans) is a matter of making instrumentally optimal choices that maximize one’s own expected lifetime utility (Arvan, 2020, chapter 2, section 1). I then argue that prudence involves mental time-travel, the capacity to mentally simulate different possible pasts and futures—as this is vital to learning from past prudential errors and deliberating about the future (Arvan, 2020, pp.  32–50). Third, following Donald Bruckner (2003), I argue that because life is profoundly uncertain over the long run, prudent individuals learn to act in ways that treat life this way: as consisting of decisions made in radical ignorance of lifetime probabilities (Arvan, 2020, pp. 30–32). Importantly, I contend that we learn this primarily from socialization: from seeing risky violations of social norms punished by others around us, including authority figures such as parents, school officials, and law-enforcement (Arvan, 2020, pp.  37–45). Fourth, I argue that once we learn from socialization to treat life as highly uncertain, the internalized attitudes this generates (‘moral risk-aversion’) make it instrumentally rational to engage in other-perspective taking (OPT). We learn it is prudent to imaginatively simulate how our actions might affect others— including how others might reward or punish us, and how we might feel guilt or remorse—as a long-term strategy for minimizing severe regret: an end that prudent individuals have grounds to want to avoid given radical lifetime uncertainty

94

M. Arvan

(Bruckner, 2003; Arvan, 2020, pp. 63–65; Arvan, 2016, pp. 118–128). Fifth, I argue that this form of prudential other-perspective-taking makes it rational to obey Four Principles of Fairness: a deontological principle of coercion-minimization, a consequentialist principle of mutual assistance, a contractualist principle of fair negotiation, and a virtue-theoretic principle of internalizing the first three principles as standing cognitive and behavioral dispositions (Arvan, 2020, pp. 68–72; Arvan, 2016, chapters 5 and 6). While I cannot summarize these principles or their derivation here in detail, I have argued that they plausibly unify the moral domain, reconciling the competing insights of traditional moral frameworks, while also supporting Rawlsian frameworks for domestic, international, and global justice, both in ‘ideal theory’ and ‘nonideal theory’ (Arvan, 2020, pp. 83–87). Sixth, I argue that once a person fully internalizes moral risk-aversion and the above principles of fairness (through socialization), the person comes to treat moral norms as though they are ‘categorical’ normative requirements, with categorical moral attitudes coming to comprise our ‘conscience’ (Arvan, 2016, pp. 110–111, 122, 177–180; Arvan, 2020, pp. 42–50, 71).

Notice that my account is broadly Hobbesian. In Leviathan, Hobbes argues that moral cognition is not naturally instilled in us biologically (Hobbes, 1651, chapter XIII). Although Hobbes allows that people in nature may have various ‘pre-moral’ capacities—such as concern for kin, empathy, and so on (Hobbes, 1651, chapters XIII and X)—for Hobbes our ‘natural condition’ revolves around purely instrumental planning, or seeking to effectively satisfy our desires (Hobbes, 1651, chapter VI). Hobbes then argues that moral cognition (viz. the Laws of Nature) is an achievement of instrumental reasoning and sociopolitical enforcement, as he holds that moral norms are only rational to obey when enforced by a sovereign authority (Hobbes, 1651, chapters XIV–XV). Importantly, Hobbes argues that even when enforced, moral laws are ultimately prudential laws—that they are merely “conclusions or theorems concerning what conduceth to [a person’s] conservation and defense of themselves” (Hobbes, 1658, p. 47). My account is similar. It holds that sociocultural norms—originally learned in our ancestral past to enable social cooperation, and transmitted and incentivized in stable societies to this day—make it rational for children, adolescents, and adults to learn to use, in a novel prosocial way, a variety of ‘pre-moral’ capacities that were not biologically selected for social cooperation.

We can begin to see this more clearly by first considering some evidence sometimes taken to favor the hypothesis that moral cognition is an innate biological adaptation. First, human infants, adults, and a variety of nonhuman animals demonstrate a rudimentary sense of fairness (Brosnan, 2006; Brosnan & de Waal, 2014; Geraci & Surian, 2011; Schmidt & Sommerville, 2011). Second, human infants and children display preferences for altruism (Barragan, Brooks, & Meltzoff, 2020; Schmidt & Sommerville, 2011) and retribution for antisocial behavior (Hamlin, 2013). Third, five ‘moral foundations’ (values of care, fairness, loyalty, respect for authority, and purity) have been argued to be universal across human societies (Doğruyol, Alper, & Yilmaz, 2019)—though serious questions have been raised about these claims (Graham et al., 2013; Suhler & Churchland, 2011). All of these findings
might appear to suggest that moral cognition is innate and social cooperation its evolutionary function. However, this is a spurious inference. Although dogs, mice, and human infants all display a rudimentary sense of fairness, infants have other prosocial dispositions, and dogs can cooperate in small packs, we do not treat any of these creatures as morally responsible agents, blaming them for unfair or selfish behavior. Why? The answer is twofold. First, they lack the mental time-travel capacities necessary for appreciating the long-term consequences of their actions (Kennett & Matthews, 2009; Levy, 2007; Suddendorf & Corballis, 2007). Second, genuine moral responsibility also requires recursion: the capacity to apply moral rules to a potentially infinite variety of new cases, including cases where the individual is not inclined altruistically or fairly—as people are when tempted to behave immorally (Arvan, 2016, pp. 5–7, 96, 109). Crucially, only human adults appear to have either of these capacities—mental time-travel and recursion—to any robust degree (Corballis, 2007; Suddendorf & Corballis, 2007).

It is important to underscore here just how much evidence there is for the centrality of mental time-travel to moral cognition and responsibility. First, human adults—whom we ordinarily consider to be morally responsible agents—typically have robust mental time-travel capacities (Suddendorf & Corballis, 2007). Second, subclasses of humans exhibiting diminished moral capacities—children, adolescents, and psychopaths—have underdeveloped mental time-travel capacities and neural circuitry (Blair, 2003; Casey, Jones, & Hare, 2008; Giedd, Blumenthal, & Jeffries, 1999; Kennett & Matthews, 2009; Levy, 2007; Stuss, Gow, & Hetherington, 1992; Weber, Habel, Amunts, & Schnieder, 2008; Yang & Raine, 2009), making them less able to appreciate the consequences of their actions (Baskin-Sommers, Stuppy-Sullivan, & Buckholtz, 2016; Hare, 1999; Hart & Dempster, 1997; Litton, 2008; Moffitt, 1993; Moffitt et al., 2011; Shoemaker, 2011). Third, mental time-travel is directly linked to moral performance: (1) lack of imaginative vividness of the future predicts psychopathy (Hosking et al., 2017) and criminal delinquency (Van Gelder, Hershfield, & Nordgren, 2013), (2) the ability to project oneself into the future is negatively related to unethical behavior (Hershfield, Cohen, & Thompson, 2012), (3) experimental interventions priming imagination of the future decrease willingness to violate moral norms (Van Gelder et al., 2013), and (4) experimental inhibitions of mental time-travel (via transcranial magnetic stimulation) result not only in greater impulsivity but also greater egocentricity, selfishness, deficits in other-perspective-taking, and less-prosocial behavior (Soutschek, Ruff, Strombach, Kalenscher, & Tobler, 2016). Finally, nonhuman animals in general—whom we do not treat as morally responsible agents—appear to lack any robust mental time-travel capacities (Suddendorf & Corballis, 2007). Although some evidence suggests that other hominids (great apes) and crows may possess some mental time-travel capacities, these capacities appear to be far more limited than ours (Kabadayi & Osvath, 2017).

The point then is this: although human infants, dogs, mice, and other animals have certain prosocial inclinations (viz. fairness, altruism, etc.), they are simply not moral agents. They lack cognitive capacities (mental time-travel, recursion, etc.) necessary for genuine moral agency and moral cognition. First, they lack capacities necessary for understanding why they should avoid immoral behavior in cases
where they lack dispositions to behave morally (which is what mental time-travel and OPT enable in humans via long-term instrumental planning). Second, animals lack the ability to represent and extend moral principles to new cases (viz. recursion). To put it more simply, children and nonhuman animals are not moral agents—they do not engage in genuine moral cognition—because they lack capacities to regulate their behavior according to moral norms in cases where they lack prosocial inclinations (viz. temptations to behave selfishly rather than fairly or altruistically).

Similar considerations show why cross-cultural ‘moral foundations’ (or ‘universal values’) are insufficient for full-fledged moral cognition. Even if humans have evolved to naturally value care, fairness, loyalty, respect for authority, and purity, none of these are sufficient for moral responsibility or moral cognition. Adult human beings are morally responsible for their actions because, in addition to valuing particular things, they possess robust capacities for regulating their behavior according to moral norms via mental time-travel, OPT, and recursion (see May, 2018). It is thus simply a mistake to infer from the universality of values or prosocial dispositions in infants or animals that moral cognition is an innate biological capacity.

5.3  The Elements of Moral Cognition

If genuine moral cognition involves more than innate beliefs or values, then what exactly does it involve? The emerging behavioral neuroscience coheres extremely well with my theory of prudence and morality outlined above. On my account, moral cognition involves (i) mental time-travel, (ii) other-perspective-taking, and (iii) risk-aversion. We learn to care about other people’s perspectives and interests in a distinctly moral way by learning (across childhood, adolescence, and adulthood) that other people typically reward us in the future for treating them well, and punish us for treating them poorly. These patterns of social reward and punishment—embodied in culturally evolved norms (including laws)—lead us to worry instrumentally about violating social norms, viz. risk-aversion (Arvan, 2020, chapter 2). This form of risk-aversion then leads us to mentally simulate how others are likely to react to our actions (viz. mental time-travel), leading us to represent and care about how our actions affect others (viz. OPT)—all of which makes it rational to obey moral principles (Arvan, 2020, chapter 3; Arvan, 2016, chapters 3–6).

Bearing in mind this model of moral cognition, and the fact that evolution by natural selection is an incremental process wherein new biological capacities emerge and are selected for at different times in evolutionary history for different reasons, consider the following empirical findings. First, moral cognition has indeed been found to centrally involve mental time-travel (Kennett & Matthews, 2009; Levy, 2007), other-perspective-taking (cf. Benoit, Gilbert, & Burgess, 2011; Daniel, Stanton, & Epstein, 2013; Peters & Büchel, 2010; Singer & Lamm, 2009; Singer & Tusche, 2014; Viganò, 2017, pp. 219–221), and risk-aversion (Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001; Ito, Larsen, Smith, & Cacioppo, 1998;
Kahneman & Tversky, 1979). Second, prudential and moral cognition have been found to be neurofunctionally intertwined in the ways my theory hypothesizes. Stimulating forward-looking mental time-travel results in greater prudential saving behaviors and greater fairness toward others via other-perspective-taking (Ersner-Hershfield, Garton, Ballard, Samanez-Larkin, & Knutson, 2009; Ersner-Hershfield, Wimmer, & Knutson, 2009; Hershfield et al., 2011; Van Gelder et al., 2013), whereas inhibiting mental time-travel degrades prudential behavior and fairness to others (Soutschek et al., 2016). Third, all of the following regions of the brain’s Default Mode Network (DMN) have been implicated in moral judgment (i.e. moral belief) across a wide variety of tasks2:

(a) Ventromedial prefrontal cortex (vmPFC): processes risk and uncertainty, and is involved in learning from mistakes and applying moral judgments to one’s own behavior (Fellows & Farah, 2007), as well as emotional regulation (Koenigs et al., 2007). Deficits lead to lack of empathy, irresponsibility, and poor decision-making (Motzkin, Newman, Kiehl, & Koenigs, 2011), causing patients to choose immediate rewards while ignoring future consequences (Bechara, Tranel, & Damasio, 2000). Also implicated in ‘extinction’, wherein previously reinforced behaviors gradually cease when reinforcement no longer occurs (Milad et al., 2005).

(b) Dorsomedial prefrontal cortex (dmPFC): involved in sense of self (Gusnard, Akbudak, Shulman, & Raichle, 2001) and theory of mind, i.e. understanding others’ mental states (Isoda & Noritake, 2013).

(c) Temporoparietal junction (TPJ): involved in sympathy and empathy through representing different possible perspectives on a single situation (Decety & Lamm, 2007). Also implicated in ‘out of body experiences’, where one’s first-personal perspective occupies what is ordinarily a third-personal standpoint (Blanke et al., 2005). Also involved in mental time-travel and empathy with one’s own future selves, viz. representing one’s own perspective and emotional-affective reactions in possible future situations (Soutschek et al., 2016). Also involved in processing the order of events in time (Davis, Christie, & Rorden, 2009).
- Includes Wernicke’s area, associated with ‘inner monologue’ (Shergill et al., 2001).
- Includes the angular gyrus, which processes attention to salient visual features of situations and mediates episodic memory retrieval to infer the intentions of other people (Seghier, 2013), and is involved in representing the mental states of individuals in cartoons and stories (Gallagher et al., 2000).

2  The following overview of DMN regions is from Arvan (2020), pp. 12–13. As I argue in Arvan (2020), chapter 4, although the DMN is involved in many cognitive tasks other than moral cognition, my account provides a powerful normative and descriptive explanation of why and how some of the main cognitive functions associated with these DMN regions should and do interact to generate moral cognition. Cf. Pascual, Gallardo-Pujol, and Rodrigues (2013); Sommer et al. (2014).
(d) Middle temporal gyrus (MTG): involved in contemplating distance from oneself, facial recognition, and word-meaning while reading (Acheson & Hagoort, 2013).

(e) Superior temporal sulcus (STS): involved in social perception, including where others are gazing (viz. joint attention) and the direction of others’ emotions (Campbell, Heywood, Cowey, Regard, & Landis, 1990).

(f) Middle occipital gyrus (MOG): contains topographical maps of the external world and engages in spatial processing (Renier et al., 2010).

(g) Temporal pole (TP): involved in conceptual knowledge (Lambon Ralph, Pobric, & Jefferies, 2008), semantic memory of objects, people, words, and facts (Bonner & Price, 2013), and facial recognition, theory of mind, and visceral emotional responses (Olson, Plotzker, & Ezzyat, 2007).

(h) Fusiform gyrus (FG): involved in facial and visual-word recognition (George et al., 1999; McCandliss, Cohen, & Dehaene, 2003).

(i) Inferior temporal gyrus (ITG): involved in object recognition (Spiridon, Fischl, & Kanwisher, 2006) and facial recognition (Meadows, 1974; Purves et al., 2001, p. 622).

(j) Precuneus (PC): a neural correlate of consciousness (Vogt & Laureys, 2005) involved in self-awareness (Kjaer, Nowak, & Lou, 2002), episodic memory (Lundstrom et al., 2003) including past events affecting oneself (Lou et al., 2004), and visual imagery and attention, particularly representing other people’s points of view (Vogeley et al., 2004), which has been implicated in empathy and forgiveness (Farrow et al., 2001).

Many of the same DMN regions are also implicated in moral sensitivity, the capacity to monitor and recognize morally salient details of a given situation (Han, 2017, p. 98). However, the following additional DMN regions are also involved in moral sensitivity:

(k) Cingulate gyrus (CG): involved in emotion processing, memory, and learning, particularly linking outcomes to motivational behavior (Hayden & Platt, 2010).

(l) Orbitofrontal cortex (OFC): processes cross-temporal (i.e. diachronic) contingencies and the representation of the relative subjective value of outcomes (Fettes, Schulze, & Downar, 2017). Also involved in processing reward and punishment, and learning from counterfactual prediction errors (Kringelbach & Rolls, 2004), as well as reversing behavior (Walton, Behrens, Buckley, Rudebeck, & Rushworth, 2010). Also involved in autonomic nervous system regulation, including heartbeat and sexual arousal (Barbas, 2007), and behavioral inhibition related to moral behavior (Fuster, 2001). Damage is known to produce extreme changes in personality, most famously associated with Phineas Gage, who dramatically transformed from a prudent and moral individual into a reckless person unable to resist morally base impulses (Damasio, Grabowski, Frank, Galaburda, & Damasio, 1994; Harlow, 1848).
(m) Lingual gyrus (LG): involved in visual processing in memories and dreams (Bogousslavsky, Miklossy, Deruaz, Assal, & Regli, 1987), including memories of parts of faces (McCarthy, Puce, Belger, & Allison, 1999).

(n) Cuneus: involved in visual processing and behavioral inhibition (Haldane, Cunningham, Androutsos, & Frangou, 2008), but also pathological gambling in those with high activity in the dorsal visual processing system (Crockford, Goodyear, Edwards, Quickfall, & el-Guebaly, 2005).

(o) Amygdala: involved in long-term emotional-memory consolidation, specifically fear conditioning (Maren, 1999) but also positive, reward-based conditioning (Paton, Belova, Morrison, & Salzman, 2006). Also implicated in decision making involving fear, anger, sadness, and anxiety (Amunts et al., 2005), as well as in using emotional valence (positive or negative) to motivate behavior more generally (Nieh, Kim, Namburi, & Tye, 2013).

The behavioral neuroscience thus indicates that moral cognition involves a truly wide variety of capacities—capacities that, in the broadest sense, are useful for long-term planning, in conformity with my theory of prudence and morality. I will now argue that none of the above capacities are distinctly ‘moral’ or inherently conducive to social cooperation, and that they were each plausibly selected in evolutionary history for amoral reasons: as capacities that confer fitness advantages irrespective of whether they are used to generate moral actions conducive to social cooperation. Consequently, I will conclude that moral cognition is almost certainly not a biological adaptation for social cooperation.

5.4  The Diverse Evolutionary Advantages of Our Moral Capacities

As we have seen, at least seventeen brain regions and capacities are involved in moral judgment and sensitivity across a wide variety of tasks. I will now argue that (1) there is good historical evidence that different capacities involved in moral cognition emerged at different times in our evolutionary history, some long before the emergence of robust social cooperation; and (2) each brain region and capacity involved in moral cognition would have conferred particular kinds of fitness advantages on our ancestors. These two types of facts together should enable us to pin down each brain region’s likely etiological function, or the reason why each region and its associated capacities were selected and retained in evolutionary history (Millikan, 1989). This, finally, should enable us to determine whether moral cognition is a biological adaptation for social cooperation.

Let us begin with mental time-travel, the capacity (associated with several regions of the Default Mode Network) to imaginatively simulate different possible pasts and futures. Mental time-travel is neither sufficient for moral cognition, nor plausibly ‘for’ social cooperation. Considered by itself, it is an amoral capacity: one that confers obvious fitness advantages on organisms irrespective of the moral status
of their actions. This is because mental time-travel serves as a long-term planning capacity. It would have enabled our ancestors to imaginatively recall the effects of their past actions—such as which types of plants are poisonous, and how to catch prey—and to imagine different possible future outcomes of their actions (such as what will happen if one eats a particular plant in the future). None of these obvious fitness advantages (avoiding poisonous things, solving problems, etc.) concern social cooperation per se: they merely would have enabled our distant ancestors to plan more effectively in general. Second, although people can learn how to use mental time-travel in distinctly moral ways conducive to social cooperation (see Arvan, 2020, chapters 2 and 3), mental time-travel equally enables individuals—and would have enabled our distant ancestors—to harmfully exploit other people, contrary to morality and cooperation. This is true even today. Consider a capitalist exploiting sweatshop labor, a tyrannical dictator maintaining their power through mass murder, or a spouse engaging in infidelity. All of these immoral actions are enabled by mental time-travel, as the ability to imagine different possible futures enables people to plan how to harm others for one’s own personal advantage. Third, the kinds of immoral behavior that mental time-travel can give rise to plausibly generated fitness advantages for our ancestors, as those who gain or maintain power through immoral means (e.g. despots, warlords, etc.) can plausibly sire more offspring than those they dominate or murder. Fourth, mental time-travel appears to have emerged in evolutionary history far before evidence of robust social cooperation. Mental time-travel appears to have emerged at least 400,000 years ago (Suddendorf & Corballis, 2007, p. 312), as it appears necessary for inventing complex tools and using fire, both of which archaeological evidence suggests first emerged during the middle Pleistocene period (Boaz, Ciochon, Xu, & Liu, 2004; Hallos, 2005). Robust forms of social cooperation (e.g. stable groups and societies), on the other hand, appear to have emerged only in the last 200,000 years (Apicella & Silk, 2019). Fifth, the use of complex tools, fire, and social cooperation all appear to presuppose the development of norms, which requires normative capacities to represent how things should or shouldn’t be done (Braddock & Rosenberg, 2012, pp. 65–71; cf. Arvan, 2020, chapter 2 and Hobbes, 1651). Yet the capacity to flexibly extend normative judgments to new cases requires recursion, which appears to have emerged in our ancestral history between 150 and 200 million years ago (Barceló-Coblijn, 2012, especially p. 178)—far before the emergence of mental time-travel or robust social cooperation. Given that mental time-travel (A) is demonstrably critical to moral cognition, (B) afforded our ancestors plausible fitness advantages independent of and prior to social cooperation, and (C) using it in prosocial, cooperative ways appears to be predicated upon the development and transmission of sociocultural norms (Arvan, 2020, chapter 2) that emerged only in the past 200,000 years, or perhaps even in the last 50,000 years (Kitcher, 2011, p. 97, fn. 37), it follows that a central feature of moral cognition—mental time-travel—was likely not selected for in evolutionary history as a biological adaptation for social cooperation.

Now turn to other-perspective-taking (OPT). When combined with well-developed capacities for empathy—the kind we are socialized across childhood,
adolescence, and adulthood to engage in (via mental time-travel)—OPT plays a central role in moral cognition (see Arvan, 2020, chapters 2–4). However, is that what the capacity evolved for? On its own, OPT is most plausibly construed, just like mental time-travel, as a planning capacity. It enables us (and would have enabled our ancestors) to understand how other people experience a single situation, including our role in that situation and how they might react to our actions. To this extent, OPT is clearly not naturally a ‘moral capacity.’ Being able to understand other people’s perspectives is something that we can equally use to exploit them—as a con man does when he exploits other people’s trust for personal gain. Further, it is easy to imagine numerous ways in which other-perspective-taking would confer fitness advantages upon individuals (and the populations they are part of) regardless of whether it is used in moral or immoral ways. Again, consider infidelity, an individual-level behavior that can increase the fitness of the individual who engages in it by enabling them to sire a larger number of offspring. Other-perspective-taking can facilitate infidelity by enabling the person to understand and exploit the other person’s trust. Consequently, OPT is also unlikely to have been biologically selected in our evolutionary history ‘for’ social cooperation: it plausibly increased the fitness of our ancestors when used in moral and immoral ways alike.

Now turn to risk-aversion and the underlying neurobiology that leads people to overweight negative outcomes relative to positive outcomes. As noted in Section 2, on my account of morality, risk-aversion plays a central role in moral cognition—at least when we are socialized to avoid risking violations of moral norms. Yet, as others have pointed out, risk-aversion per se is not a moral capacity: rather, it is something that helps a person preserve themselves, enabling them to survive, bear offspring, and so on (Viganò, 2017, pp. 218, 222).

Now turn to the ventromedial prefrontal cortex (vmPFC), which is associated with processing risk and uncertainty. On my model, moral cognition emerges out of prudential calculations of risk and reward. Prudent individuals learn through socialization to engage in forms of mental time-travel and other-perspective-taking that make conformity to moral principles rational. The vmPFC thus plays a clear role in moral cognition on this picture. However, the vmPFC is clearly not a ‘moral capacity’ in and of itself. To see why, consider the behavior of warlords and gangs in failed states, or the behavior of religious extremists (such as members of ISIS/Islamic State). These individuals can have fully functioning vmPFCs, and can weigh risks and rewards. Yet many of them appear to lack a moral conscience, and are instead willing to murder and oppress others with abandon. Why? Because, given their environment, they have learned that they can personally benefit from such behavior, obtaining more resources for themselves and siring more survivable offspring than those they murder or oppress. The difference between people of ‘moral conscience’—people who engage in moral cognition—and warlords or political dictators (who don’t) thus appears to be environmental: a matter of learning and reasoning. You and I have learned through socialization to care about how our actions might negatively affect others. Dictators and religious extremists learn the opposite: that they can benefit from using the vmPFC in immoral ways. Consequently, the vmPFC was not plausibly selected in evolutionary history for social cooperation either. It offers and
would have offered our ancestors plausible fitness advantages both to those who cooperate in prosocial ways and to individuals who use its associated capacities in immoral, anti-cooperative ways for their own reproductive benefit.

Now consider the dorsomedial prefrontal cortex (dmPFC), which is involved in the sense of self and theory of mind (i.e. understanding the mental states of others), and the temporoparietal junction (TPJ), which is involved in sympathy and empathy—specifically the ability to represent different possible perspectives on a single situation—as well as ‘out of body experiences’ and processing the order of events in time. Are these ‘moral capacities’? Although they are involved in sympathy and empathy, they support these by way of a general mechanism: the capacity to represent a single situation from multiple perspectives. So we need to ask: is that mechanism, by itself, a ‘moral’ capacity? The answer is no. Being able to understand other people’s mental states and appreciate many different perspectives on a single situation is a predictive planning capacity—one that enables us to predict how others around us will respond to our actions. Well-socialized individuals learn to use this capacity to empathize with others. However, dictators, wanton criminals, and warlords learn differently. They learn to use the capacity to represent the same situation from many different perspectives to exploit or murder people. For example, Hitler’s capacity to represent multiple perspectives plausibly enabled him to take advantage of Neville Chamberlain. Chamberlain bet that the Munich Pact would appease Hitler—yet Hitler understood and exploited these expectations to do the opposite, enabling the Nazis’ murderous march across Europe.

Now consider other brain regions implicated in moral cognition:

(d) Middle temporal gyrus (MTG): involved in contemplating distance from oneself, facial recognition, and word-meaning while reading.

(e) Superior temporal sulcus (STS): involved in social perception, including where others are gazing (viz. joint attention) and the direction of others’ emotions.

(f) Middle occipital gyrus (MOG): contains topographical maps of the external world and engages in spatial processing.

(g) Temporal pole (TP): involved in conceptual knowledge, semantic memory of objects, people, words, and facts, facial recognition, theory of mind, and visceral emotional responses.

(h) Fusiform gyrus (FG): involved in facial and visual-word recognition.

None of the capacities associated with these brain regions are distinctly ‘moral’ capacities plausibly selected in evolutionary history for social cooperation. Rather, they are perceptual and conceptual capacities—the kinds of capacities that well-socialized individuals like you and I learn to use to construct moral principles in language and thought, apply them to concrete situations, and so on, but which ill-socialized individuals may again use to exploit others. These capacities, like the others discussed above, are thus better understood as naturally ‘amoral’ capacities: capacities which almost certainly provided a wide variety of non-moral fitness advantages to our ancestors.

In sum, moral cognition involves a wide variety of different brain regions and associated capacities, ranging from long-term planning capacities to perceptual
capacities. Second, because evolution is a gradual process, those brain regions were likely selected at different times in evolutionary history for different reasons. Third, as we have seen, some capacities central to moral cognition—including mental time-travel, other-perspective-taking, and recursion—would have offered our ancestors fitness advantages irrespective of social cooperation. Fourth, some of the above capacities (mental time-travel, recursion, etc.) appear to have emerged in our evolutionary history long before social cooperation. Fifth, the theory of moral cognition that I have argued best explains the behavioral neuroscience holds that moral cognition is something we learn to do through socialization.

5.5  Conclusion

These facts indicate that moral cognition is unlikely to be a biological adaptation for social cooperation. First, moral cognition involves a variety of long-term planning capacities that would have conferred fitness-advantages on our ancestors irrespective of whether our ancestors used those capacities for cooperation. Second, moral cognition (viz. social cooperation) is something people learn to do via individual-level rationality and socialization. Stable groups and large-scale societies were the result of individuals learning to cooperate in our ancestral past (Braddock & Rosenberg, 2012). These groups then developed, transmitted, and enforced norms that socialize individuals to grasp the rationality of obeying moral principles (Arvan, 2020, chapters 2 and 3). If this is correct, then moral cognition is not a biological adaptation for social cooperation but instead an exaptation. Indeed, moral cognition is not a ‘biological capacity’ at all. Rather, it is something we learn to engage in as a result of individual-level reasoning and socialization—processes that put capacities selected in evolutionary history for many reasons to a novel, prosocial use.

References

Acheson, D. J., & Hagoort, P. (2013). Stimulating the brain’s language network: Syntactic ambiguity resolution after TMS to the inferior frontal gyrus and middle temporal gyrus. Journal of Cognitive Neuroscience, 25(10), 1664–1677.
Alexander, R. D. (1987). The biology of moral systems. New York: Routledge, 2017.
Amunts, K., Kedo, O., Kindler, M., Pieperhoff, P., Mohlberg, H., Shah, N. J., et al. (2005). Cytoarchitectonic mapping of the human amygdala, hippocampal region and entorhinal cortex: Intersubject variability and probability maps. Anatomy and Embryology, 210(5–6), 343–352.
Anand, P. (1995). Foundations of rational choice under risk. Oxford, UK: Oxford University Press.
Apicella, C. L., & Silk, J. B. (2019). The evolution of human cooperation. Current Biology, 29(11), R447–R450.
Aristotle. (1984). Nicomachean ethics. In J. Barnes (Ed.), The complete works of Aristotle: The revised Oxford translation. Princeton, NJ: Princeton University Press.
Arvan, M. (2016). Rightness as fairness: A moral and political theory. Basingstoke, UK: Palgrave Macmillan.


M. Arvan

Arvan, M. (2019). The dark side of morality: Group polarization and moral epistemology. The Philosophical Forum, 50(1), 87–115.
Arvan, M. (2020). Neurofunctional prudence and morality: A philosophical theory. New York: Routledge.
Audi, R. (2015). Intuition and its place in ethics. Journal of the American Philosophical Association, 1(1), 57–77.
Barbas, H. (2007). Flow of information for emotions through temporal and orbitofrontal pathways. Journal of Anatomy, 211(2), 237–249.
Barceló-Coblijn, L. (2012). Evolutionary scenarios for the emergence of recursion. Theoria et Historia Scientiarum, IX, 171–199.
Barragan, R. C., Brooks, R., & Meltzoff, A. N. (2020). Altruistic food sharing behavior by human infants after a hunger manipulation. Scientific Reports, 10(1785). https://doi.org/10.1038/s41598-020-58645-9
Baskin-Sommers, A., Stuppy-Sullivan, A. M., & Buckholtz, J. W. (2016). Psychopathic individuals exhibit but do not avoid regret during counterfactual decision making. Proceedings of the National Academy of Sciences, 113(50), 14438–14443.
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370.
Bechara, A., Tranel, D., & Damasio, H. (2000). Characterization of the decision-making deficit of patients with ventromedial prefrontal cortex lesions. Brain, 123(11), 2189–2202.
Benoit, R. G., Gilbert, S. J., & Burgess, P. W. (2011). A neural mechanism mediating the impact of episodic prospection on farsighted decisions. The Journal of Neuroscience, 31(18), 6771–6779.
Blair, R. J. R. (2003). Neurobiological basis of psychopathy. The British Journal of Psychiatry, 182(1), 5–7.
Blanke, O., Mohr, C., Michel, C. M., Pascual-Leone, A., Brugger, P., Seeck, M., et al. (2005). Linking out-of-body experience and self processing to mental own-body imagery at the temporoparietal junction. Journal of Neuroscience, 25(3), 550–557.
Boaz, N. T., Ciochon, R. L., Xu, Q., & Liu, J. (2004). Mapping and taphonomic analysis of the Homo erectus loci at Locality 1 Zhoukoudian, China. Journal of Human Evolution, 46(5), 519–549.
Bogousslavsky, J., Miklossy, J., Deruaz, J. P., Assal, G., & Regli, F. (1987). Lingual and fusiform gyri in visual processing: A clinico-pathologic study of superior altitudinal hemianopia. Journal of Neurology, Neurosurgery & Psychiatry, 50(5), 607–614.
Bonner, M. F., & Price, A. R. (2013). Where is the anterior temporal lobe and what does it do? Journal of Neuroscience, 33(10), 4213–4215.
Braddock, M., & Rosenberg, A. (2012). Reconstruction in moral philosophy? Analyse & Kritik, 34(1), 63–80.
Brennan, J. (2010). Scepticism about philosophy. Ratio, 23(1), 1–16.
Bricker, P. (1980). Prudence. The Journal of Philosophy, 77(7), 381–401.
Brosnan, S. F. (2006). Nonhuman species’ reactions to inequity and their implications for fairness. Social Justice Research, 19(2), 153–185.
Brosnan, S. F., & de Waal, F. B. (2014). Evolution of responses to (un)fairness. Science, 346(6207), 1251776.
Bruckner, D. (2003). A contractarian account of (part of) prudence. American Philosophical Quarterly, 40(1), 33–46.
Campbell, R., Heywood, C. A., Cowey, A., Regard, M., & Landis, T. (1990). Sensitivity to eye gaze in prosopagnosic patients and monkeys with superior temporal sulcus ablation. Neuropsychologia, 28(11), 1123–1142.
Carruthers, P., & James, S. M. (2008). Evolution and the possibility of moral realism. Philosophy and Phenomenological Research, 77(1), 237–244.
Casebeer, W. D. (2003). Natural ethical facts: Evolution, connectionism, and moral cognition. Cambridge, MA/London: MIT Press.
Casey, B. J., Jones, R. M., & Hare, T. A. (2008). The adolescent brain. Annals of the New York Academy of Sciences, 1124, 111–126.


Churchland, P. (2011). Braintrust: What neuroscience tells us about morality. Princeton, NJ: Princeton University Press.
Corballis, M. C. (2007). The uniqueness of human recursive thinking: The ability to think about thinking may be the critical attribute that distinguishes us from all other species. American Scientist, 95(3), 240–248.
Cosmides, L., & Tooby, J. (1992). Cognitive adaptations for social exchange. In J. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: Evolutionary psychology and the generation of culture (pp. 163–228). New York: Oxford University Press.
Crockford, D. N., Goodyear, B., Edwards, J., Quickfall, J., & el-Guebaly, N. (2005). Cue-induced brain activity in pathological gamblers. Biological Psychiatry, 58(10), 787–795.
Curry, O. S. (2016). Morality as cooperation: A problem-centred approach. In T. K. Shackelford & D. Hansen (Eds.), The evolution of morality (pp. 27–51). New York/Cham, Switzerland: Springer.
Damasio, H., Grabowski, T., Frank, R., Galaburda, A. M., & Damasio, A. R. (1994). The return of Phineas Gage: Clues about the brain from the skull of a famous patient. Science, 264(5162), 1102–1105.
Daniel, T. O., Stanton, C. M., & Epstein, L. H. (2013). The future is now: Comparing the effect of episodic future thinking on impulsivity in lean and obese individuals. Appetite, 71(1), 120–125.
Davis, B., Christie, J., & Rorden, C. (2009). Temporal order judgments activate temporal parietal junction. Journal of Neuroscience, 29(10), 3182–3188.
de Waal, F. (2006). Primates and philosophers: How morality evolved. Princeton, NJ: Princeton University Press.
Decety, J., & Lamm, C. (2007). The role of the right temporoparietal junction in social interaction: How low-level computational processes contribute to meta-cognition. The Neuroscientist, 13(6), 580–593.
Doğruyol, B., Alper, S., & Yilmaz, O. (2019). The five-factor model of the moral foundations theory is stable across WEIRD and non-WEIRD cultures. Personality and Individual Differences, 151, 109547.
Ersner-Hershfield, H., Garton, M. T., Ballard, K., Samanez-Larkin, G. R., & Knutson, B. (2009). Don’t stop thinking about tomorrow: Individual differences in future self-continuity account for saving. Judgment and Decision Making, 4, 280–286.
Ersner-Hershfield, H., Wimmer, G. E., & Knutson, B. (2009). Saving for the future self: Neural measures of future self-continuity predict temporal discounting. Social Cognitive and Affective Neuroscience, 4(1), 85–92.
Farrow, T. F., Zheng, Y., Wilkinson, I. D., Spence, S. A., Deakin, J. F., Tarrier, N., et al. (2001). Investigating the functional anatomy of empathy and forgiveness. Neuroreport, 12(11), 2433–2438.
Fellows, L. K., & Farah, M. J. (2007). The role of ventromedial prefrontal cortex in decision making: Judgment under uncertainty or judgment per se? Cerebral Cortex, 17(11), 2669–2674.
Fettes, P., Schulze, L., & Downar, J. (2017). Cortico-striatal-thalamic loop circuits of the orbitofrontal cortex: Promising therapeutic targets in psychiatric illness. Frontiers in Systems Neuroscience, 11(25), 1–23.
Frank, R. H. (1988). Passions within reason: The strategic role of the emotions. New York: W. W. Norton & Co.
Fuster, J. M. (2001). The prefrontal cortex—An update: Time is of the essence. Neuron, 30(2), 319–333.
Gallagher, H. L., Happé, F., Brunswick, N., Fletcher, P. C., Frith, U., & Frith, C. D. (2000). Reading the mind in cartoons and stories: An fMRI study of ‘theory of mind’ in verbal and nonverbal tasks. Neuropsychologia, 38(1), 11–21.
George, N., Dolan, R. J., Fink, G. R., Baylis, G. C., Russell, C., & Driver, J. (1999). Contrast polarity and face recognition in the human fusiform gyrus. Nature Neuroscience, 2(6), 574–580.
Geraci, A., & Surian, L. (2011). The developmental roots of fairness: Infants’ reactions to equal and unequal distributions of resources. Developmental Science, 14, 1012–1020.


Giedd, J. N., Blumenthal, J., & Jeffries, N. O. (1999). Brain development during childhood and adolescence: A longitudinal MRI study. Nature Neuroscience, 2(10), 861–863.
Gould, S. J. (1991). Exaptation: A crucial tool for an evolutionary psychology. Journal of Social Issues, 47(3), 43–65.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., et al. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology (Vol. 47, pp. 55–130). Amsterdam: Academic.
Greene, J. D. (2013). Moral tribes: Emotion, reason, and the gap between us and them. New York: Penguin Books.
Greene, J. D. (2015). The rise of moral cognition. Cognition, 135, 39–42.
Gusnard, D. A., Akbudak, E., Shulman, G. L., & Raichle, M. E. (2001). Medial prefrontal cortex and self-referential mental activity: Relation to a default mode of brain function. Proceedings of the National Academy of Sciences, 98(7), 4259–4264.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York: Pantheon Books.
Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133(4), 55–66.
Haldane, M., Cunningham, G., Androutsos, C., & Frangou, S. (2008). Structural brain correlates of response inhibition in Bipolar Disorder I. Journal of Psychopharmacology, 22(2), 138–143.
Hallos, J. (2005). “15 minutes of fame”: Exploring the temporal dimension of Middle Pleistocene lithic technology. Journal of Human Evolution, 49, 155–179.
Hamlin, J. K. (2013). Moral judgment and action in preverbal infants and toddlers: Evidence for an innate moral core. Current Directions in Psychological Science, 22(3), 186–193.
Han, H. (2017). Neural correlates of moral sensitivity and moral judgment associated with brain circuitries of selfhood: A meta-analysis. Journal of Moral Education, 46(2), 97–113.
Hansson, S. O. (2005). Decision theory: A brief introduction. https://people.kth.se/~soh/decisiontheory.pdf. Retrieved 7 December 2020.
Hare, R. D. (1999). The Hare psychopathy checklist-revised: PCL-R. North Tonawanda, NY: Multi-Health Systems.
Harlow, J. M. (1848). Passage of an iron rod through the head. The Boston Medical and Surgical Journal (1828–1851), 39(20), 389–393.
Hart, S. D., & Dempster, R. J. (1997). Impulsivity and psychopathy. In C. D. Webster & M. A. Jackson (Eds.), Impulsivity: Theory, assessment, and treatment (pp. 212–232). New York/London: The Guilford Press.
Haybron, D. (2011). Happiness. The Stanford Encyclopedia of Philosophy. E. N. Zalta (Ed.). https://plato.stanford.edu/archives/fall2011/entries/happiness/.
Hayden, B. Y., & Platt, M. L. (2010). Neurons in anterior cingulate cortex multiplex information about reward and action. Journal of Neuroscience, 30(9), 3339–3346.
Henrich, N., & Henrich, J. (2007). Why humans cooperate: A cultural and evolutionary explanation. Oxford, UK: Oxford University Press.
Hershfield, H. E., Cohen, T. R., & Thompson, L. (2012). Short horizons and tempting situations: Lack of continuity to our future selves leads to unethical decision making and behavior. Organizational Behavior and Human Decision Processes, 117, 298–310.
Hershfield, H. E., Goldstein, D. G., Sharpe, W. F., Fox, J., Yeykelis, L., Carstensen, L. L., et al. (2011). Increasing saving behavior through age-progressed renderings of the future self. Journal of Marketing Research, 48(SPL), S23–S37.
Hitler, A. (1925). Mein Kampf. R. Manheim (Trans.), New York: Houghton Mifflin, 1999.
Hobbes, T. (1651). Leviathan. In Sir W. Molesworth (Ed.), The English works of Thomas Hobbes: Now first collected and edited (Vol. 3, pp. ix–714). London: John Bohn, 1839–45.
Hobbes, T. (1658). De Homine. In B. Gert (Ed.), Man and citizen. Anchor Books, 1972.


Hosking, J. G., Kastman, E. K., Dorfman, H. M., Samanez-Larkin, G. R., Baskin-Sommers, A., Kiehl, K. A., et al. (2017). Disrupted prefrontal regulation of striatal subjective value signals in psychopathy. Neuron, 95(1), 221–231.
Isoda, M., & Noritake, A. (2013). What makes the dorsomedial frontal cortex active during reading the mental states of others? Frontiers in Neuroscience, 7, 232.
Ito, T. A., Larsen, J. T., Smith, N. K., & Cacioppo, J. T. (1998). Negative information weighs more heavily on the brain: The negativity bias in evaluative categorizations. Journal of Personality and Social Psychology, 75(4), 887–900.
Joyce, R. (2007). The myth of morality. Cambridge, UK: Cambridge University Press.
Kabadayi, C., & Osvath, M. (2017). Ravens parallel great apes in flexible planning for tool-use and bartering. Science, 357(6347), 202–204.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
Kant, I. (1785). Groundwork of the metaphysics of morals. In M. J. Gregor (Ed.), The Cambridge edition of the works of Immanuel Kant: Practical philosophy (pp. 38–108). Cambridge, UK: Cambridge University Press, 1996.
Kennett, J., & Matthews, S. (2009). Mental time travel, agency and responsibility. In M. Broome & L. Bortolotti (Eds.), Psychiatry as cognitive neuroscience: Philosophical perspectives (pp. 327–350). Oxford, UK: Oxford University Press.
Kitcher, P. (1998). Psychological altruism, evolutionary origins, and moral rules. Philosophical Studies, 89(2–3), 283–316.
Kitcher, P. (2005). Biology and ethics. In D. Copp (Ed.), The Oxford handbook of ethical theory (pp. 163–185). Oxford, UK: Oxford University Press.
Kitcher, P. (2011). The ethical project. Cambridge, MA: Harvard University Press.
Kjaer, T. W., Nowak, M., & Lou, H. C. (2002). Reflective self-awareness and conscious states: PET evidence for a common midline parietofrontal core. NeuroImage, 17(2), 1080–1086.
Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., et al. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446(7138), 908–911.
Korsgaard, C. M. (2008). The constitution of agency. Oxford, UK: Oxford University Press.
Korsgaard, C. M. (2009). Self-constitution: Agency, identity, and integrity. Oxford, UK: Oxford University Press.
Kringelbach, M. L., & Rolls, E. T. (2004). The functional neuroanatomy of the human orbitofrontal cortex: Evidence from neuroimaging and neuropsychology. Progress in Neurobiology, 72(5), 341–372.
Lambon Ralph, M. A., Pobric, G., & Jefferies, E. (2008). Conceptual knowledge is underpinned by the temporal pole bilaterally: Convergent evidence from rTMS. Cerebral Cortex, 19(4), 832–838.
Levy, N. (2007). The responsibility of the psychopath revisited. Philosophy, Psychiatry, and Psychology, 14(2), 129–138.
Litton, P. (2008). Responsibility status of the psychopath: On moral reasoning and rational self-governance. Rutgers Law Journal, 39, 349–392.
Lou, H. C., Luber, B., Crupain, M., Keenan, J. P., Nowak, M., Kjaer, T. W., et al. (2004). Parietal cortex and representation of the mental self. Proceedings of the National Academy of Sciences, 101(17), 6827–6832.
Luco, A. C. (2016). Non-negotiable: Why moral naturalism cannot do away with categorical reasons. Philosophical Studies, 173(9), 2511–2528.
Lundstrom, B. N., Petersson, K. M., Andersson, J., Johansson, M., Fransson, P., & Ingvar, M. (2003). Isolating the retrieval of imagined pictures during episodic memory: Activation of the left precuneus and left prefrontal cortex. NeuroImage, 20(4), 1934–1943.
Maren, S. (1999). Long-term potentiation in the amygdala: A mechanism for emotional learning and memory. Trends in Neurosciences, 22(12), 561–567.
Maxwell, S. E., Lau, M. Y., & Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? American Psychologist, 70(6), 487–498.


May, J. (2018). Regard for reason in the moral mind. Oxford, UK: Oxford University Press.
McCandliss, B. D., Cohen, L., & Dehaene, S. (2003). The visual word form area: Expertise for reading in the fusiform gyrus. Trends in Cognitive Sciences, 7(7), 293–299.
McCarthy, G., Puce, A., Belger, A., & Allison, T. (1999). Electrophysiological studies of human face perception. II: Response properties of face-specific potentials generated in occipitotemporal cortex. Cerebral Cortex, 9(5), 431–444.
Meadows, J. C. (1974). The anatomical basis of prosopagnosia. Journal of Neurology, Neurosurgery & Psychiatry, 37(5), 489–501.
Milad, M. R., Quinn, B. T., Pitman, R. K., Orr, S. P., Fischl, B., & Rauch, S. L. (2005). Thickness of ventromedial prefrontal cortex in humans is correlated with extinction memory. Proceedings of the National Academy of Sciences, 102(30), 10706–10711.
Millikan, R. G. (1989). In defense of proper functions. Philosophy of Science, 56(2), 288–302.
Moffitt, T. E. (1993). Adolescence-limited and life-course persistent antisocial behavior: A developmental taxonomy. Psychological Review, 100, 674–701.
Moffitt, T. E., Arseneault, L., Belsky, D., Dickson, N., Hancox, R. J., Harrington, H., et al. (2011). A gradient of childhood self-control predicts health, wealth, and public safety. Proceedings of the National Academy of Sciences, 108(7), 2693–2698.
Motzkin, J. C., Newman, J. P., Kiehl, K. A., & Koenigs, M. (2011). Reduced prefrontal connectivity in psychopathy. Journal of Neuroscience, 31(48), 17348–17357.
Nieh, E. H., Kim, S. Y., Namburi, P., & Tye, K. M. (2013). Optogenetic dissection of neural circuits underlying emotional valence and motivated behaviors. Brain Research, 1511, 73–92.
Olson, I. R., Plotzker, A., & Ezzyat, Y. (2007). The enigmatic temporal pole: A review of findings on social and emotional processing. Brain, 130(7), 1718–1731.
Parfit, D. (2011). On what matters (Vols. 1 & 2). Oxford, UK: Oxford University Press.
Pascual, L., Gallardo-Pujol, D., & Rodrigues, P. (2013). How does morality work in the brain? A functional and structural perspective of moral behavior. Frontiers in Integrative Neuroscience, 7(65), 1–8.
Paton, J. J., Belova, M. A., Morrison, S. E., & Salzman, C. D. (2006). The primate amygdala represents the positive and negative value of visual stimuli during learning. Nature, 439(7078), 865–870.
Peters, J., & Büchel, C. (2010). Episodic future thinking reduces reward delay discounting through an enhancement of prefrontal-mediotemporal interactions. Neuron, 66(1), 138–148.
Peterson, M. (2017). An introduction to decision theory (2nd ed.). Cambridge, UK: Cambridge University Press.
Pettigrew, R. (2020). Choosing for changing selves. Oxford, UK: Oxford University Press.
Price, B. W. (2002). The worthwhileness theory of the prudentially rational life. Journal of Philosophical Research, 27, 619–639.
Prichard, H. A. (1912). Does moral philosophy rest on a mistake? Mind, 21(81), 21–37.
Prinz, J. J. (2007). The emotional construction of morals. Oxford, UK: Oxford University Press.
Purves, D., Augustine, G. J., Fitzpatrick, D., Hall, W. C., LaMantia, A., McNamara, J. O., et al. (2001). Neuroscience (2nd ed.). Sunderland, MA: Sinauer Associates.
Renier, L. A., Anurova, I., De Volder, A. G., Carlson, S., VanMeter, J., & Rauschecker, J. P. (2010). Preserved functional specialization for spatial processing in the middle occipital gyrus of the early blind. Neuron, 68(1), 138–148.
Ross, W. D. (1930). The right and the good. Oxford, UK: Oxford University Press, 2002.
Scanlon, T. M. (1998). What we owe to each other. Cambridge, MA: Harvard University Press.
Scanlon, T. M. (2014). Being realistic about reasons. Oxford, UK: Oxford University Press.
Schmidt, M. F., & Sommerville, J. A. (2011). Fairness expectations and altruistic sharing in 15-month-old human infants. PLoS One, 6(10), e23223.
Seghier, M. L. (2013). The angular gyrus: Multiple functions and multiple subdivisions. The Neuroscientist, 19(1), 43–61.
Shergill, S. S., Bullmore, E. T., Brammer, M. J., Williams, S. C. R., Murray, R. M., & McGuire, P. K. (2001). A functional study of auditory verbal imagery. Psychological Medicine, 31(2), 241–253.


Shoemaker, D. W. (2011). Psychopathy, responsibility, and the moral/conventional distinction. Southern Journal of Philosophy, 49(s1), 99–124.
Sinclair, N. (2012). Metaethics, teleosemantics and the function of moral judgments. Biology and Philosophy, 27(5), 639–662.
Singer, T., & Lamm, C. (2009). The social neuroscience of empathy. Annals of the New York Academy of Sciences, 1156, 81–96.
Singer, T., & Tusche, A. (2014). Understanding others: Brain mechanisms of theory of mind and empathy. In P. W. Glimcher & E. Fehr (Eds.), Neuroeconomics: Decision making and the brain (2nd ed., pp. 249–266). London: Academic.
Smyth, N. (2017). The function of morality. Philosophical Studies, 174(5), 1127–1144.
Sommer, M., Meinhardt, J., Rothmayr, C., Döhnel, K., Hajak, G., Rupprecht, R., et al. (2014). Me or you? Neural correlates of moral reasoning in everyday conflict situations in adolescents and adults. Social Neuroscience, 9(5), 452–470.
Soutschek, A., Ruff, C. C., Strombach, T., Kalenscher, T., & Tobler, P. N. (2016). Brain stimulation reveals crucial role of overcoming self-centeredness in self-control. Science Advances, 2(10), e1600992.
Spiridon, M., Fischl, B., & Kanwisher, N. (2006). Location and spatial profile of category-specific regions in human extrastriate cortex. Human Brain Mapping, 27(1), 77–89.
Sterelny, K., & Fraser, B. (2016). Evolution and moral realism. The British Journal for the Philosophy of Science, 68(4), 981–1006.
Stuss, D. T., Gow, C. A., & Hetherington, C. R. (1992). ‘No longer Gage’: Frontal lobe dysfunction and emotional changes. Journal of Consulting and Clinical Psychology, 60(3), 349–359.
Suddendorf, T., & Corballis, M. C. (2007). The evolution of foresight: What is mental time travel, and is it unique to humans? Behavioral and Brain Sciences, 30(3), 299–313.
Suhler, C. L., & Churchland, P. (2011). Can innate, modular “foundations” explain morality? Challenges for Haidt’s moral foundations theory. Journal of Cognitive Neuroscience, 23(9), 2103–2116.
Tomasello, M., & Vaish, A. (2013). Origins of human cooperation and morality. Annual Review of Psychology, 64, 231–255.
Tosi, J., & Warmke, B. (2016). Moral grandstanding. Philosophy & Public Affairs, 44(3), 197–217.
Trueman, C. N. (2020). The Fuhrer Principle. History Learning Site. https://www.historylearningsite.co.uk/nazi-germany/the-fuehrer-principle/. Retrieved 7 December 2020.
Van Gelder, J. L., Hershfield, H. E., & Nordgren, L. F. (2013). Vividness of the future self predicts delinquency. Psychological Science, 24(6), 974–980.
Viganò, E. (2017). Adam Smith’s theory of prudence updated with neuroscientific and behavioral evidence. Neuroethics, 10(2), 215–233.
Vogeley, K., May, M., Ritzl, A., Falkai, P., Zilles, K., & Fink, G. R. (2004). Neural correlates of first-person perspective as one constituent of human self-consciousness. Journal of Cognitive Neuroscience, 16(5), 817–827.
Vogt, B. A., & Laureys, S. (2005). Posterior cingulate, precuneal and retrosplenial cortices: Cytology and components of the neural network correlates of consciousness. Progress in Brain Research, 150, 205–217.
Walton, M. E., Behrens, T. E., Buckley, M. J., Rudebeck, P. H., & Rushworth, M. F. (2010). Separable learning systems in the macaque brain and the role of orbitofrontal cortex in contingent learning. Neuron, 65(6), 927–939.
Weber, S., Habel, U., Amunts, K., & Schneider, F. (2008). Structural brain abnormalities in psychopaths—A review. Behavioral Sciences & the Law, 26(1), 7–28.
Wisdom, J. (2017). Proper-function moral realism. European Journal of Philosophy, 25(4), 1660–1674.
Wood, A. (2008). Kantian ethics. Cambridge, UK: Cambridge University Press.
Yang, Y., & Raine, A. (2009). Prefrontal structural and functional brain imaging findings in antisocial, violent, and psychopathic individuals: A meta-analysis. Psychiatry Research, 174(2), 81–88.

Chapter 6

Social Animals and the Potential for Morality: On the Cultural Exaptation of Behavioral Capacities Required for Normativity

Estelle Palao

Abstract  To help bridge the explanatory gap of how normativity branched off into morality in the course of evolutionary history, I claim that morality is a form of social normativity, specifically a form of cultural normativity. Furthermore, with the origins of its behavioral capacities rooted in normative practice, morality should be considered as an exaptation, a secondary adaptation shaped through cultural selection and evolution. Cultural selection pressures differ across social groups, as well as various species. Empirical evidence has shown that animals other than humans are capable of normative behavior, and that they can also be subject to processes of cultural transmission. With an inclusive approach to defining social behaviors, I argue that non-human, socio-cultural animals can engage in moral behavior.

Keywords  Evolution of morality · Normativity · Animal culture · Social norms · Exaptation · Secondary adaptation · Behavioral capacities · Cultural evolution · Conformity · Social learning

6.1  Introduction

One of the main benefits of an evolutionary understanding of morality is the ability to further define our relationship to other animals in virtue of our behavioral similarities. Rather than focusing on what separates humans from other animals and using morality as a distinguishing feature, researchers should instead look at what humans and other animals have in common, and investigate what would allow for the development of morality to begin with.

To help bridge the explanatory gap between morality and normativity in evolutionary history, I claim that morality is built on normativity, and that the ability to behave morally is developed from the behavioral capacities which allow for engagement with social norms – capacities which we, as humans, share with other non-human animals (henceforth, ‘animals’). The development of morality from normativity relies upon cultural selection and evolution, and as a result I consider morality an exaptation (a secondary adaptation). In order to argue this, I dedicate sections of this paper to canvassing the terminology used in discussions of various social behaviors, and show that an inclusive approach to the relevant definitions would allow us to understand how animals can exhibit moral behaviors. As such, I clarify how normativity, culture, and conformity (among other terms in close proximity) should be understood. I work to untangle the relationships between these concepts within the context of morality’s evolution. I then outline the empirical works which confirm the ability of animals to exhibit such complex social behaviors, and claim that, as a result, animals possess the capacities needed to engage in moral behavior. Afterwards, I examine more closely how morality was formed from its roots in normative behavior: morality should be treated as an exaptation, specifically a secondary adaptation in which cultural evolution plays an essential part. Since animals are also capable of cultural behavior and transmission, their status as potentially moral agents and their capacity to perform moral behavior are both maintained. I conclude by raising some possible avenues through which morality, as a cultural-based exaptation of the behavioral capacities required for normativity, can be applied to debates within moral philosophy, and urge that these inquiries be included in future empirical research on animals.

E. Palao (*)
Department of Philosophy, Osgoode Hall Law School, York University, Toronto, Canada
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
J. De Smedt, H. De Cruz (eds.), Empirically Engaged Evolutionary Ethics, Synthese Library 437, https://doi.org/10.1007/978-3-030-68802-8_6

6.2  Morality and Biology: Evolution, Natural Selection, and Adaptation

I start with the assumption that any evolutionary theory of morality “will be connected to sociality. An evolutionary theory offers an explanation by reference to an interaction with, adaption to, and/or selection by the environment. [For] social creatures, that environment is a social one” (Edwards, 2017, p. 3). I will include ‘culture’ within a social environment as a selection pressure for morality. I also suggest that, so long as a group can engage in socio-cultural normative behavior, there is indeed the potential for its individual members to be moral agents. ‘Adaptation’ can refer either to the process of phenotypic change via natural selection (directly affecting the propagation of a certain gene), or to the specific product of such a process (Sosis, 2009, p. 321, citing Gould & Vrba, 1982, Andrews, Gangestad, & Matthews, 2002). A trait is ‘adaptive’ if it provides its bearer, as set in a particular environment, with reproductive benefits – however, just because a trait happens to be adaptive does not necessarily mean it is an adaptation or that it has gone through the process of natural selection (Sosis, 2009, p. 321, citing Laland & Brown, 2002). Some authors have claimed that there is a difference, for example,


between (a) claiming normative cognition evolved, thus possessing a ‘phylogenetic history’, and (b) claiming that normative cognition is an adaptation, or that it is a trait which evolved specifically as a result of natural selection (Machery & Mallon, 2012, p. 13). To suggest that morality evolved through natural selection also implies that it is an adaptation. To apply the mechanisms of natural selection to morality requires many assumptions, all of which can be debated: that ‘being moral’ is  a phenotypic trait which can be observed through behaviors or products of behavior; that there can be differences in morality or moral outputs amongst a population; that these differences are significant enough in a particular environment to cause a variance in reproduction rate. Claiming that morality is an adaptation means accepting that one’s level of morality positively correlates with one’s fitness, at least on an aggregate, statistical level. Moral behavior is indeed commonly understood to affect our inclusive fitness in various ways, either directly, leading to the increase or decrease of one’s own chances to reproduce, or indirectly, perhaps having an effect on the likelihood that others will aid or reciprocate based on one’s reputation (Ohtsuki & Iwasa, 2004). However, there is always the possibility of other factors being involved  in an evolutionary change, such as genetic drift or mutations. While ongoing debates in metaethics and normative ethics make it difficult to offer a specific definition of ‘morality’, it can be understood broadly as encompassing behaviors that societies consider as a guide to conduct, or specific patterns that a group consistently engages in, often involving (perhaps unconsciously) an appeal to what is ‘right’ or ‘wrong’ (Gert & Gert, 2017). 
I also include in this definition of morality everything that may contribute to the behavioral outputs, such as the capacity to feel sentiments or make judgements that could be considered to motivate ‘moral’ acts. While ‘acting morally’ may require the presence of some sort of moral code or rule, I discuss morality regardless of what the specific contents of moral norms are in any given socio-geographical area. I assume that one can act morally whether or not they understand or are aware of what moral norms they are adhering to, and I do not attempt to define what it takes for one to be conscious of a moral norm to begin with. In looking at the evolutionary history of morality, I think it would instead be more useful to start with what behaviors were necessary for morality to even come to be. For a group to engage in moral behavior, the individuals must first, at the least, have the capacity to engage in normativity.

6.3  An Inclusive Understanding of Social Behaviors

Since an analysis of morality’s history necessarily requires reference to the underpinning concepts rooted in sociality, it would be helpful to use definitions and methods of understanding behaviors that are comprehensive enough to apply to humans and animal groups alike. This is preferable to automatically assuming that an exhibited behavior is indicative of a certain method of cognition, namely methods often deemed (and oftentimes wrongly) to be limited to, or only attainable by, modern adult humans.

114

E. Palao

In this section, I outline the various ways of best using these behavioral concepts in an inclusive manner, and describe the empirical work on animal behaviors in light of these conceptual understandings. I conclude this section by showing that animals are capable of the necessary complex social behaviors that morality draws upon, and that as a result, animals can indeed be recognized as moral beings.

6.3.1  Normativity, Culture, and Conformity (Among Others)

‘Normativity’ is the broader concept of deciding what ought to or should be done, applicable in any act of decision-making within a social context, where the content of the decision may or may not involve ‘morality’ per se (Vincent, Ring, & Andrews, 2018, pp. 1–2). What matters in normative practice is that there is a valuation of certain ways of doing things as opposed to others, as evidenced by the behavioral choices of those in a community (Vincent et al., 2018, pp. 3–4). These practices work in tandem with social norms, which may or may not be spoken or physically codified. I use Kristin Andrews’ definition of ‘animal social norms’, which is offered in the spirit of Cristina Bicchieri’s account but expanded to a wider range of cognitive agents. Animal social norms (hereafter just ‘social norms’) operate under three conditions: “(1) [there exists] a pattern of behavior demonstrated by community members; (2) individuals choose to conform to the pattern of behavior; and (3) individuals expect that community members will also conform and will sanction those who do not conform” (Andrews, 2019, p. 5). It is this understanding that best encompasses the behavior of animals as operating under ‘social norms’. Such norms are essential to normative practices, and the patterns of behavior that form these norms can be socially learned or transmitted. While ‘norms’ have content, ‘normative behavior’ refers to how agents act given the presence of the norm. Further, it is important to consider ‘culture’ as used throughout this paper. In line with the use of concepts and definitions that do not limit their extension or application to humans, I consider ‘culture’ as referring to the body of “information transmitted between individuals or groups, where this information flows through and brings about the reproduction of, and a lasting change in, [a] behavioral trait” (Ramsey, 2017, p.
348, in response to the definition by Laland & Janik, 2006). Culture can affect and be affected by both environment and genetics – for example, the cultural behavior of licking by a mother can affect a rat pup’s epigenetics (Ramsey, 2017, p. 351, citing Weaver et al., 2004). Cultural transmission can employ (but is not limited to) the use of physical tools or artifacts. In addition, animal behaviorists posit that a behavior need only persist among even a subset of a species to count as evidencing culture (Sapolsky, 2006, p. 218). Earlier understandings of culture worked along similar lines, but built specific methods of transmission and specific informational requirements into the definition itself. For example, Mesoudi, Whiten, and Laland (2006) define ‘culture’ as “information capable of affecting individuals’ behavior that they acquire from
other members of their species through teaching, imitation, and other forms of social transmission”, with ‘information’ as a term that includes “ideas, knowledge, beliefs, values, skills, and attitudes” (Mesoudi et al., 2006, p. 331, citing Richerson & Boyd, 2005). Ramsey’s (2017) definition of culture encapsulates the main factor of information transmitted about a behavior, adds the requirement of bringing about lasting change, and is broad enough to include any of the listed methods of social transmission or learning, among others. It is quite similar to the definition offered by Keith and Bull, with ‘culture’ referring to “‘information or behavior – shared by a population or subpopulation – which is acquired from conspecifics through some form of social learning’, which can occur either across or within generations” (Keith & Bull, 2017, p. 297, citing Rendell & Whitehead, 2001; Whitehead, Rendell, Osborne, & Würsig, 2004). These conceptions of culture are more inclusive and capture any species that has the capacity to transmit information in this way. Studies concerning animal behaviors previously used the term ‘social traditions’ but now more commonly employ ‘culture’ instead (Sapolsky, 2006, p. 218), and the two terms are used interchangeably. ‘Social learning’ is often referred to as the primary driving force of cultural transmission – defined as “learning that is influenced by observation of, or interaction with, another animal (typically a conspecific) or its products” (Heyes, 1994, p. 207, citing Box, 1984; Galef, 1988). Modern uses of ‘social learning’ also specifically include ‘the acquisition of behavior’ as a consequence of the learning done (Aplin et al., 2017, p. 7830). However, it should be noted that while social learning is necessary for culture, it is not sufficient to explain culture on its own (Ramsey, 2017, p. 4).
While sociality is a requirement, the information shared must also result in generationally lasting behavioral changes within the community (and not just in an individual). The language of ‘conformity’ is also frequent in these discussions, as it is often considered a justification for changing an individual’s conduct. Specifically, ‘conformity’ has been defined as “the fact that individuals are influenced by the most frequent behavior they witness in others” (Claidière & Whiten, 2012, p. 142). There are two types of conformity: (a) informational, where individuals strive to act in an accurate manner but hold a high level of uncertainty, and thus observe others for answers, and (b) normative, in contexts where individuals are required to manage their social interactions (Claidière & Whiten, 2012, p. 142). As such, conformity is the act of change itself, and the reason why an individual is influenced to change their action can be considered in these two ways. Informational conformity is more concerned with problem-solving, while normative conformity puts the justificatory emphasis for action on simply acting in accordance with group (or in-group) behaviors rather than on attaining accuracy (and perhaps receiving rewards). While the name ‘normative conformity’ is confusing, and seems narrower than how ‘normative’ is considered for our purposes, the centrality of social interaction is what conceptually links the two. In both cases, there is no doubt a valuation done by the community, in which a preference is made for one performance or act over another, and the individual must act either to conform to the behavior or not in an effort to manage their interactions with and/or within this same community.
Conforming is essential as an action in the operation of social norms, and should be considered as evidencing normative behavior in the group. Conformity also employs social learning: in order to change one’s behavior based on another’s actions, one must first learn what to do by observing them. At this point, it is clear that culture and normativity both make use of social norms: culture deals with the specific information that is transmitted by observing, making inferences from, or practicing norms, while normativity is concerned with what an individual actor ought to do in light of the specific content of the norms at play. There could be multiple aspects of a particular behavior pattern (meaning the cultural information behind it could range from single-layered to multi-faceted), and there could be multiple behavior patterns within a specific group. Social norms, culture, and normativity all overlap: the three concepts are distinct, yet draw upon one another. The development of culture relies on the existence of social norms, and the contents of such norms are dictated by, and in turn reinforced by, normative behavior. Further, social learning is considered an integral part of cultural development, and conforming behavior (of either kind) also requires learning in a social manner. The concepts canvassed in this section are in constant interaction with one another, yet each is distinct (if subtly so), and, importantly, each is defined in a way that allows us to understand how animals are capable of engaging in it.
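As a rough illustration only – not an operationalization anyone in this literature has proposed – Andrews’ three conditions can be sketched as a toy predicate over hypothetical field records. Every attribute name and threshold below is my own invention:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One hypothetical field record (all attribute names invented)."""
    actor: str
    followed_pattern: bool     # did the actor perform the community behavior?
    had_alternative: bool      # could the actor have acted otherwise? (condition 2)
    sanctioned_deviant: bool   # did the actor sanction a non-conformer? (condition 3)

def is_social_norm(observations, min_share=0.8):
    """Toy check of Andrews' three conditions for an animal social norm:
    (1) a pattern of behavior demonstrated by community members,
    (2) individuals choose to conform to the pattern, and
    (3) conformity is expected and deviance is sanctioned."""
    if not observations:
        return False
    share = sum(o.followed_pattern for o in observations) / len(observations)
    pattern = share >= min_share  # condition 1: a widespread pattern
    chosen = any(o.followed_pattern and o.had_alternative for o in observations)
    sanction = any(o.sanctioned_deviant for o in observations)
    return pattern and chosen and sanction

records = [
    Observation("A", True, True, False),
    Observation("B", True, True, True),    # B also sanctions a deviant
    Observation("C", True, False, False),
    Observation("D", True, True, False),
    Observation("E", False, True, False),  # one deviant
]
print(is_social_norm(records))  # True: pattern, choice, and sanctioning all present
```

The point of the sketch is only that the definition is conjunctive: widespread behavior alone (a mere regularity) does not qualify unless choice and sanctioning are also in evidence.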

6.3.2  The Relationship Between Various Types of Norms

What, then, is the distinction between ‘social norms’ and ‘moral norms’? Is there a difference to begin with? There has been debate over what counts as a ‘social norm’, and many opinions on whether ‘conventional norms’ differ from ‘moral norms’. Conventional norms are usually thought to be completely contingent on environment or authority, whereas moral norms are often more widely applicable or justified in different ways (mostly with reference to harm or fairness) (Tiberius, 2015, p. 224). Studies done on children work with this understanding. For example, Josephs and Rakoczy (2016) assume that moral and conventional norms have a clear distinction, and use this in their analysis of which norms children deem acceptable to opt out of. They consider the regulations of the game Sudoku as an example of a social-conventional norm set whose validity depends upon a person’s commitment to adhere to its rules from the start (Josephs & Rakoczy, 2016, p. 198). However, I would argue that the commitment to adhere to a set of rules is simply a general consequence of a normative reaction to a social norm rather than something specifically ‘conventional’. ‘Conventions’ can be reasoned about in moral terms, and vice versa: even if someone warns others that they will not use a chess board as it is normally used (perhaps by drawing pictures on the squares rather than playing with the designated chess pieces), one could argue that they are being inconsiderate with the resource and unfair to those who actually want to play the game properly. From the
perspective of an avid chess fan, it could be seen as ‘wrong’ for such a board to go to waste – this would be an example of a ‘conventional norm’ to play by the game rules being reasoned about using moral language. In responding to supporters of the moral/conventional distinction, specifically those who commonly rely upon the work of Elliot Turiel and his collaborators to justify such a distinction, Joseph Heath points out that the difference between ‘moral’ and ‘conventional’ is actually much less rigid than what philosophers assume Turiel to be claiming (Heath, 2017, p. 277). Instead, the two terms were just meant to be used as conceptual domains rather than considered as types (Heath, 2017, p. 290). Heath also claims that the “majority of social norms, including both rules of etiquette and moral rules,” are “norms with conventional elements” (Heath, 2017, p. 290). Things like etiquette or game rules can be understood as “morality operating at a reduced level of seriousness” (Heath, 2017, p. 290). However, for Heath there still exist ‘pure’ conventions and morals, although the amount of each is significantly lower than what Turiel supporters have previously assumed (Heath, 2017, p. 282). Humans, and especially young children, are adept at inferring norms, and sometimes assume the presence of a social norm even when none exists (Schmidt, Butler, Heinze, & Tomasello, 2016). In addition to this tendency for humans to be norm-happy, it is also difficult to firmly define or distinguish the types of norms that do exist. Overall, the moral/conventional distinction is not as useful as moral philosophers may have previously assumed. The main takeaway is that understanding the two concepts as lying on a scale of normativity, rather than as two distinct domains, is compatible with the understanding that cultural animals, which engage in normative behavior, have the potential to develop both conventional and moral norms.
I am of the view that moral philosophers should follow the apparent tradition of anthropologists and sociologists, and hold that “moral rules are simply a species of the more general genus ‘social norm’” (Heath, 2017, p. 277). Doing so would allow animals that engage in social norms to be deemed as having the potential to participate in ‘moral’ behavior. In fact, biologists and cognitive ethologists have already firmly maintained that animals are indeed capable of engaging in morality (Bekoff, 2007; Pierce & Bekoff, 2009). However, as Heath suggests, one motivation for moral philosophers to accept a distinction between moral and conventional is that a blurring of this distinction would supposedly lead to the conclusion of moral relativism (Heath, 2017, p. 277). Whereas culture and social norms are assumed to be extremely variable, what is ‘morally’ acceptable or unacceptable is assumed to be universal – moral norms represent ‘objectively valid’ principles (Brennan, Eriksson, Goodin, & Southwood, 2013, p. 5). For this reason, philosophers assume that allowing moral rules to be understood as a subset of social norms (as considered by scientists) would remove the element of universality that seems essential to morality, resulting in the possibility of relativism (Heath, 2017, p. 277). While Heath does not directly respond to this line of argument, I object to it. Allowing moral and conventional to both be considered as ‘social norms’ should not
necessarily lead to relativism at all, since there can exist, at the same time, a universal moral principle with differing ways of expressing that same principle depending on the cultural group. There is an important difference between a specific, abstract universal moral value and its particular representation at any given time period, cultural context, or geographical place. As Joseph Raz states, “multiculturalism lies in the recognition that universal values are realized in a variety of different ways in different cultures, and that they are all worthy of respect” (Raz, 1998, p. 204). Each culture [here, understood as a social group that can exhibit ‘culture’] might have errors in how a universal value is displayed, but pointing out mistakes (such as oppressive practices) and areas of improvement should not mean the total condemnation or complete rejection of a culture and its underlying core values (Raz, 1998, p. 205). In this way, understanding morality as a form of normativity, and moral norms as a form of social norms, should not automatically lead to cultural relativism. Our varying practices or displays of, and our subjective access to, moral principles (whatever those may be) do not necessarily dictate the content of the principles themselves, nor take away from their universal nature. Another question to ask is: what differentiates a social norm from a cultural norm? In response, one must point out that any description of a moral, conventional, or cultural ‘norm’ should be considered as a form of social normativity first and foremost. Each of these necessarily fulfills all the factors and basic requirements of a social norm: a pattern of behavior in the community, the choice to conform to the behavior, an expectation of conformity by others, and some form of sanctioning if it is not followed.
For something to be a ‘cultural’ norm, the pattern of social behavior must result from the transmission of information about the behavior, and the norm must bring about lasting change in the community over generational time. Further, anything that is a ‘moral’ norm would in turn fall under the type of cultural norm, as moral behaviors (distinct from moral truths or values) are developed through cultural means. Any categorization of a social norm is made for explanatory ease rather than to create rigid distinctions, as the borders between them are necessarily fluid, and each is nonetheless heavily reliant on the basics of their shared parent/umbrella term, ‘social norm’.

6.3.3  Complexity of Animal Social Behaviors: Prerequisites of Morality Fulfilled

At this point, I can take note of what animals have been shown to be capable of. As seen from previous works, animals can indeed engage in normative practices. As Frans de Waal puts it: “[d]efining normativity as adherence to an ideal or standard, there is ample evidence that animals treat their social relationships in this manner. In other words, they pursue social values” (de Waal, 2014, p. 185). In particular, Vincent et al. (2018) use organized tables to list empirical studies which show how many chimpanzee and cetacean behaviors can fall into norms relating to obedience,
reciprocity, care, social responsibility, or solidarity, categories which were all inspired by Jonathan Haidt’s moral foundations theory. For example, in the identification of behaviors that fall under norms of reciprocity (Vincent et al., 2018, p. 29), chimps refused to continue tasks where they witnessed other conspecifics receiving a better reward than their own for doing the same action (Brosnan, Talbot, Ahlgren, Lambeth, & Schapiro, 2010). Not only does this show they have the ability to form abstract thoughts or predictions of certain outcomes, but there was also a clear negative reaction – an inequity aversion – to the rewards given: the same task should result in the same reward. This behavior, among the many others listed (consoling, policing, demonstrative teaching with correction, sharing food, etc.), evidences the ability of animals to engage in normativity. In fact, there has been support for a new field of studies on the ‘ethology of normativity’, with special focus on animals (Lorini, 2018, p. 18). All in all, social animals have indeed been shown to be able to value specific ways of behaving over others, acting in accordance with the expectations held of them by others in their community. There is also a great amount of anthropological literature that considers animals as being capable of cultural tradition. For example, in observing the foraging behaviors of bottlenose dolphins and the song of humpback whales, Cantor and Whitehead (2013) consider culture as a driving element of observed group changes – specifically, social organization. The distribution and interaction of various behavioral phenotypes (such as vocalizations by whales or the feeding act of sponging in dolphins) can be dictated by the social structure in place, and the behaviors themselves can in turn produce a “cultural context for the population that can drive patterns of social interactions and relationships” (Cantor & Whitehead, 2013, p. 8).
In addition, cultural conformity is thought to explain the social transmission of innovative foraging techniques by wild birds, specifically subpopulations of great tits (Aplin et al., 2015). When new feeding behaviors were introduced by specific individuals into population networks, they spread to other members via social ties and responsive characteristics of the members, forming a stable yet arbitrary tradition that lasted over two generations (Aplin et al., 2015, p. 540). More common is the analysis of cultural practice in primates (Horner, Proctor, Bonnie, Whiten, & de Waal, 2010; Jaeggi et al., 2010; van Schaik et al., 2003; Whiten, 2012). There is, for example, evidence of dominance hierarchies in rhesus monkeys, harems in hamadryas social structures, and stratification systems based on aggression in savanna baboons, as well as evidence of the capacity to change these patterns in response to novel social behaviors, which can then develop and be transmitted on a multi-generational scale (Sapolsky, 2006, p. 228). Using Ramsey’s (2017) inclusive understanding of ‘culture’, each of the above attributions of cultural behavior is appropriately placed. Although cultural practices in humans can be complex in nature (for example, religiosity, or the multifaceted use of modified technology and artificial intelligence), the sophistication of the tools or techniques involved does not determine whether an activity counts as a cultural practice, and thus the actions of animals in nature should be included in this definition. In their analysis of cetaceans, Cantor and Whitehead (2013) consider ‘culture’ to be socially transmitted behavior that both shapes, and is shaped by, social structure (pp. 1–2).
While the focus should be slightly widened to encompass any information about the behavior being transmitted rather than solely the act itself (as per Ramsey’s (2017) definition, which would include anything relevant to the behavior), this is nonetheless consistent with labelling animal behaviors as ‘cultural’. In addition, the fact that novel or introduced behaviors among wild birds (great tits) and among various primates can be adopted and persist over generations shows that information about behaviors is indeed transmitted by individuals in the group to others in the community, producing a lasting change to a previous pattern (of foraging, organization, or otherwise). Setting aside any notion of human ‘uniqueness’, researchers should no doubt be able to understand these socio-biological studies, and many others like them, as evidencing cultural behavior in animals. Conformity is often a significant factor in the discussion of culture, especially within anthropological studies. Among the main elements of culture are the abilities to engage in conformity and to create and propagate traditions (with or without physical tools), and there is indeed evidence of such behaviors in multiple species. The importance of conformity and social decisions has been examined in foraging stickleback fish (Webster & Laland, 2012), in the group behaviors of vervet monkeys (van Leeuwen, Acerbi, Kendal, Tennie, & Haun, 2016), and even in trunk use by Asian elephants (Yasui & Idani, 2017). In addition to this, animals are also capable of balancing the benefits and drawbacks of learning through social channels. Aplin, Sheldon, and McElreath (2017), for example, show that not only are great tits capable of engaging in conforming behavior, but they can combine it with individual learning via reinforcement (updating their body of experience).
As a result, individuals and populations “could both acquire adaptive behavior and track environmental change” (Aplin et al., 2017, p. 7830). This is where the concepts of adaptiveness and conformity come in. Studies that involve variation of food rewards appear to evidence informational, rather than normative, conformity, as the focus is on getting an ‘accurate’ result or a preferred food by using one particular technique rather than another. Recall that ‘normative conformity’ refers to the justification of a change in one’s behavior as a result of observing the behavior of others because of a pressure or desire to maintain a social relationship. Although such experiments involving foraging tasks are likely to involve only informational conformity, there is still the impression that experiments using different, more inherently social tasks (rather than just the involvement of social learning) might “uncover the presence of normative conformity in animals” (Claidière & Whiten, 2012, p. 134). Yet some researchers (for example, Gruber, Zuberbühler, Clément, & van Schaik, 2015) believe that due to their cognitive differences from humans, animals may simply not be capable of normative conformity at all. In any case, if there is evidence that animals can change their behaviors due to group influence and engage in conforming behavior regardless of the nature of the justification (accuracy, maintenance of social relationships, or otherwise), this is all that is needed to establish that animals can indeed engage in social norms and normative behavior. There is still a preferred way of doing things within the community, an individual still chooses to act within the limits of the patterned behavior, and
there is an expectation of conformity and sanction (whether via a lesser reward if the behavior is not followed or via some consequence for the social relationships held by the non-conforming individual, etc.). Thus, resting the debate over whether animals can engage in normative behavior or culture solely on the determination of whether animals can engage in normative conformity is an ineffective approach to take. With this in mind, empirical evidence of conformity in animals has been reported in fish, birds, and primates (Aplin et al., 2017, p. 7830), and each of the above behaviors evidencing culture could also be understood as showing conformity. Even more interesting is the element of adaptive information: conforming behavior “may evolve as a means of providing naïve animals with a quick way of ascertaining locally adaptive information” (Aplin et al., 2017, p. 7830). In the case of wild great tits, the ability of individuals to evaluate mechanisms that result in counterintuitive outcomes worked to keep conforming pressures in check, and social learning in subpopulations of this bird resulted in the population adapting to and retaining high pay-off behavior (Aplin et al., 2017, p. 7835). Animals are capable of evaluating socially transmitted information in light of what they learn from their own individual experiences when deciding whether or not to conform to a social norm. However, if members of the group act in exclusively conformist ways without applying individual analysis, then “any new environmental change may result in a mismatch with the majority behavior, leading to a perpetuation of suboptimal or maladaptive traditions over time and exaggerating the disadvantages of social information use” (Aplin et al., 2017, p. 7830).
This may or may not lead to a population collapse, and could occur, for example, when “matching group patterns is more important than the absolute adaptive value of a behavior, where the adaptive value of a behavior is obtuse or delayed” (Aplin et al., 2017, p. 7835). Given this, animals are at least capable of avoiding the propagation of maladaptive traditions by using individual methods of learning and analysis to evaluate the content of what is being socially transmitted, in deciding whether or not to conform to the behavior at hand. This, in itself, shows the intricacy of normative behavior in animals. So, taking stock: animals are capable of performing a multitude of social behaviors. Animals can engage in social learning, social norms and normativity, and conforming behaviors. We have also seen that animals are capable of evaluating their individual experiences against the behavioral information given to them through social means, which is a procedural part of choosing to conform to a pattern of behavior in a community. Animals are also capable of culture, which in turn requires the use of social norms. It has also been noted that social norms are defined broadly and encompass various types of norms, including moral norms. Morality is built upon normativity in this way. As a result, cultural animals no doubt have the capacity to engage in morality, since they are able to participate in any social behavior that ‘morality’ would require of its agents.
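The interplay just described – a conformist bias held in check by individual reinforcement learning – can be sketched in a toy simulation. This is my own illustration, not the model Aplin et al. actually fit; all names, weights, and payoffs below are arbitrary assumptions:

```python
import random

def choose(agent, counts, w_social=0.5):
    """Mix a conformist bias (copy the current majority) with individual
    reinforcement (prefer the personally higher-payoff option)."""
    total = counts["a"] + counts["b"]
    social_a = counts["a"] / total if total else 0.5
    q = agent["value"]                        # personal payoff estimates
    individual_a = q["a"] / (q["a"] + q["b"])
    p_a = w_social * social_a + (1 - w_social) * individual_a
    return "a" if random.random() < p_a else "b"

def step(agents, payoffs, counts, lr=0.2):
    """One round: every agent picks an option and updates its estimate."""
    new_counts = {"a": 0, "b": 0}
    for agent in agents:
        opt = choose(agent, counts)
        reward = payoffs[opt]
        # reinforcement update of the personal payoff estimate
        agent["value"][opt] += lr * (reward - agent["value"][opt])
        new_counts[opt] += 1
    return new_counts

random.seed(1)
agents = [{"value": {"a": 0.5, "b": 0.5}} for _ in range(100)]
counts = {"a": 90, "b": 10}     # the group's tradition favors option 'a'
payoffs = {"a": 0.2, "b": 1.0}  # but the environment now rewards option 'b'
for _ in range(50):
    counts = step(agents, payoffs, counts)
print(counts)  # the majority has shifted to the higher-payoff option 'b'
```

With the social weight pushed toward 1, the population stays locked on ‘a’ – the “perpetuation of suboptimal or maladaptive traditions” the authors warn about; it is the individual-learning term that lets the group track the environmental change.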

6.4  From Normativity to Morality

Following the line of argument, it is clear that animals indeed have the potential to be moral agents. The question at this point is whether they actually participate in ‘moral behavior’. If one were trying to maintain a ‘higher-order’ distinction between humans and other animals, it is at this juncture that one would appeal to differences in cognitive abilities, rather than the similarities between them. However, in looking at the evolution of morality, cutting the inquiry off early by questioning whether or not animals engage in morality is not necessarily beneficial to our understanding of its origins or history, as it requires an onerous meaning of ‘morality’ in order to effectively categorize beings as ‘moral’ or not. Using such an exclusionary definition results in an unfair evaluation of the potential for moral agency in animals against sophisticated theories meant for human adults. Instead, since normativity is required for morality, it would be more beneficial to determine whether animals can engage in the behaviors required of morality – and, as seen in previous sections, animals are indeed capable of complex sociality that, with an inclusive approach to defining behavioral concepts, may be recognized as moral behavior. A further step would be to show how normativity branches off into morality, and to evaluate whether animals are capable of such a process. In this section, I argue that cultural evolution is the main driving factor distinguishing moral behavior as an exapted type of normative behavior, and that even with this understanding of how morality stemmed from normativity in evolutionary history, cultural animals can still be considered as moral beings.

6.4.1  Morality as Secondary Adaptation (Exaptation)

This section starts by expanding the discussion on whether morality is an adaptation. If we focus only on current adaptive value, and end up conceiving of morality only as a primary, biological adaptation, this would fail “to distinguish current utility from reasons for origin” (Gould & Lewontin, 1979, p. 58). Conflating the present uses of a trait or behavior with the original reason for its presence can seriously hamper the investigation of evolutionary histories (Gould, 1997, p. 10750). I agree with Stephen Jay Gould that it is a mistake to “treat a proven current utility for any individual feature as prima facie evidence of its adaptive origin” (Gould, 1997, p. 10750). In this vein, I urge that morality’s current practical use in varying societies should not be mistaken for its original function. My position in this section can thus be simply stated: the adaptive origin of moral behavior is in normativity, and it is a mistake to assume that morality exists because it was primarily adaptive – morality is indeed an exaptation, and specifically a secondary adaptation. In recent debates, authors have already argued that “human moral judgement is not a discrete adaptation but rather a by-product of other psychological traits” (Joyce, 2014, p. 262), with examples of such ‘spandrel theorists’ being Shaun

6  Social Animals and the Potential for Morality: On the Cultural Exaptation…

123

Nichols (2005) or Jesse Prinz (2008). The idea is that, just like the coincidental spandrels – the triangular spaces between two arches found in the architecture of the San Marco cathedral (Gould & Lewontin, 1979) – a ‘moral sense’ is simply a consequence of cognitive capacities that were already present. Using the term ‘exaptation’ can also provide more causal detail. While an ‘adaptation’ (in addition to the discussion in the sections above on its definition relative to other relevant terms) is understood as a feature shaped by selection for its current role (Gould & Vrba, 1982), an ‘exaptation’ is a “preexisting trait that acquires a new role for which it was not originally designed by natural selection” (Sosis, 2009, p. 323, citing Gould & Vrba, 1982). Exaptations can emerge as an unintended consequence or by-product, or as a result of a trait evolving for one effect but then being co-opted for a different effect (Sosis, 2009, p. 323). Although they can have functional effects, exapted traits “are not modified when taking on their new role; if they are, the adaptive modifications are known as secondary adaptations” (Sosis, 2009, p. 323). If an exaptation takes this second form and provides a secondary adaptation, then “the new function may replace the older function or coexist together with it. Feathers seem to have evolved first for conserving temperature, but were later co-opted in birds for flying” (Ayala, 2010, p. 9019). When looking at the history of morality’s evolution, the question instead should be, “which is the primary structural decision, and which is the non-adaptive by-product coopted for utility?” (Gould, 1997, p. 10752), where the latter refers to something that starts out as an innocent consequence and is subsequently treated, in a secondary manner, as an adaptive trait for a separate purpose. Indeed, morality has been imagined in different ways, given this definition of ‘exaptation’.
Benjamin Fraser uses this term to reply directly to Prinz, who claims that there are no psychological capacities specifically for morality (Prinz, 2007). Fraser disagrees that the trait of making moral judgements should, as Prinz suggests, just be seen as a sort of by-product or ‘spandrel’ of the mental capacities required for non-moral emotions, meta-emotions, and perspective taking, among others. The trait of making moral judgements should instead be considered an exaptation, meaning that moralizing can still have an adaptive purpose whether or not there is cognitive machinery dedicated solely to morality (Fraser, 2010, p. 227). Specifically, Fraser holds that moralizing is an exaptation of the faculties that Prinz refers to – that their combination and re-use explains the ability to make moral judgements. He also goes a step further and suggests that, with this new understanding using the language of exaptations, Prinz could still be considered as offering an adaptationist account of moralizing, despite his intention to prove the opposite (Fraser, 2010, p. 227). This argument by Fraser echoes and builds upon the objection Richard Joyce puts forward against Prinz. Joyce pushes against such spandrel theorists and argues that even if morality is a by-product (as Prinz says explicitly, 2008, p. 367), the fact that the ability to make moral judgments can on its own contribute to the ‘reproductive relevance’ of an agent means that it has become an adaptation nonetheless (Joyce, 2014, p. 264). In later sections, I will address the impact of ‘exaptation’ language on the adaptationist versus spandrel theorist debate that is at the forefront here. For
now, it is important to see how authors like Fraser use the concept of exaptation with reference to morality, in order to demonstrate how my argument differs. Francisco Ayala also argues that the moral sense of right and wrong found in humans is an exaptation, but of “advanced intellectual abilities”, with such abilities being a set of traits “directly promoted by natural selection” (Ayala, 2010, p. 9015). Ayala specifically separates moral/ethical behavior from moral codes (for him, ‘morality’ refers to moral behavior/a moral sense rather than the rules/norms established), and he pinpoints the development of creative language as a potential beginning of morality in the course of hominid evolution (Ayala, 2010, pp. 9019–9020). I agree that morality is an exaptation, but I differ on what exactly morality is an exaptation of, as well as on what ‘morality’ refers to. In talking about morality, Fraser refers to the trait of making moral judgements, and Ayala refers to moral behavior. I, however, have considered morality in a broad sense, which can encompass any cognitive abilities, the observable outputs, as well as the existence of codes and norms that the resulting actions are (consciously or unconsciously) evaluated against; thus, my scope is wider than that of Fraser and Ayala. More importantly, and also differing from what Fraser and Ayala argue, I suggest that morality’s original use was for normative participation, rather than it simply existing as a cognitive exaptation of improved intellect or of a collection of emotion-related mental capacities. In this sense, normativity includes any action that is meant to follow a group behavior. To help explain what normative participation means in a general sense, I refer back to Vincent et al.’s (2018) idea of (naïve) normativity as a set of cognitive capacities that establishes what ought to be the case, as determined by social context.
This includes actions like using chopsticks to eat noodles, due to a recognition that this is a culinary expectation of people in one’s current environment. The cognitive capacities that were historically present for non-moral normative behavior (perhaps using a food-gathering tool in a manner that imitates conspecific practices) have been co-opted for moral behavior (defined as ‘table manners’ or ‘polite dining’). Such rule-adhering behaviors (‘rule’ in a rigid or loose sense) are not limited to morality and are still present today. Vincent et al. (2018) describe the common attribution of ‘moral’ traits and behaviors: “a moral person helps others and refrains from harming others out of her concern for wellbeing or the greater good. Or a moral person recognizes the intrinsic value of others and treats them accordingly. Likewise, when we call someone ‘immoral,’ we place them into the sphere of morality, but we do so in order to offer condemnation or at least correction” (Vincent et al., 2018, p. 2). While it had its beginnings in the capacities required for general normativity, morality (being an exaptation) ultimately developed through cultural forces and is characterized by the interaction between behavioral competencies and socio-culturally produced ethical rules, depending on what ‘ethical’ means according to the in-group in question. A cause of confusion may be the relationship between adaptation and exaptation. As known from Sosis (2009), exaptations can also be adaptive, but in a secondary sense. There can first exist a trait that is adaptive for a certain purpose, followed by the exaptation of that trait for a new purpose (basically, a new use for an old tool), and over time the trait can be sharpened via selection for better
functionality within its new role and environment, making for a second wave of adaptation. For this reason, an exaptation of this kind is sometimes referred to as a ‘secondary adaptation’ (Sosis, 2009; Gould & Lewontin, 1979; Gould & Vrba, 1982). Furthermore, spandrels themselves are, by definition, capable of becoming exaptations. Although they arise as by-products of other primary changes (where ‘primary’ should not be understood as more important, but rather as the original reason for, or initial purpose of, a structure) and are non-adaptive at face value, spandrels can eventually be co-opted for another use, making them secondary adaptations (Sosis, 2009; Gould, 1997, p. 10752). It would seem, then, that Fraser’s critique of Prinz is misguided – stating that the ability to make moral judgements should be understood not as a spandrel but as an exaptation (Fraser, 2010, p. 226) is redundant. Ayala, in contrast, does not refer to any of Gould’s work in this regard, and makes no mention of the impact of spandrels. However, he does argue that moral judgements on their own, without being acted upon in one way or another, do not seem likely to increase or decrease fitness (Ayala, 2010, p. 9019). Since we have already established that for morality to be adaptive it needs to be phenotypic/observable through behavior, thus allowing selective forces to act upon it, it seems that both this statement from Ayala and Fraser’s overall conclusion do not cast doubt on the claim that morality is adaptive, and if anything support the claim that morality is a secondary adaptation. In saying that morality is an exaptation of normativity, I am not outright objecting to the idea that morality is an adaptation; I am simply saying it was a secondary adaptation.
Of course, as Richard Joyce puts it, “one can still maintain that [the capacity for moral judgement] originally appeared as a by-product, but this is true of virtually everything that counts as an adaptation” (Joyce, 2014, p. 264, citing Dennett, 1995, p. 281). However, I make this distinction specifically to outline how morality developed from normativity in the evolutionary timeline through cultural means, and it is for this reason that the temporal nature of exaptations is important and relevant. Unlike Fraser and Ayala, I aim to include animals in my discussion of morality, and to allow the possibility that animal behavioral norms are morally informed, so long as the animals exhibit a sociality that allows culture to develop in their group populations. If behavioral capacities for things such as learning from others, conformity, or the other processes required by normativity and normative participation are appealed to in describing the underpinnings of moral behavior, then it should be emphasized that these social behaviors are not limited to humans at all, and thus that morality should not be reserved for humans only. Looking at morality as an exaptation of the behavioral capacities underlying normativity in this way is aligned with considering animals as participating in morality and capable of moral behavior. As a result, I argue that morality is an exaptation – a secondary adaptation, aptus under the current pressures of the respective cultural societies. Normativity as it exists today, which includes any sort of adherence to a group value, whether unconscious or intentional, should be considered the primary adaptation, across species. In-group/out-group thinking, pattern following, and ‘ought-thought’ were what natural
selection first acted upon, making normative behavior adaptive. As discussed, though, the social pressure to conform must also be kept in check by individual analysis in order to accurately track changes in the environment, which helps to propagate adaptive behaviors rather than suboptimal ones (Aplin et al., 2017). In addition, since a new function can co-exist with the old one (Ayala, 2010), general normativity and morality can still persist as related behaviors – in fact, the latter, by nature, involves a community valuing one way of doing things or engaging in social norms, which are the defining features of normative behavior. Imagining morality as a secondary adaptation means that the broader capacity for normative ‘ought-thought’ is preserved, but has also been repurposed and molded into the much narrower form of moral behavior through the forces of cultural selection. It is important to stress that the evolutionary force shaping the exaptation is a cultural one.

6.4.2  Cultural Evolution in Animals

Since cultural evolution is so central to the idea of morality being a secondary adaptation that stemmed from general normative behavior, it would be useful to understand what it means and the parameters of its influence. It should be understood in the context of the inclusive manner in which ‘culture’ is defined, with necessary reference to a lasting change in group behaviors resulting from variance in the content of socially transmitted information. Cultural evolution is a process that results in the “selective retention of favorable culturally transmitted variants”, in addition to a variety of non-selective processes, like drift (Mesoudi et al., 2006, p. 331, citing Boyd & Richerson, 1985; Cavalli-Sforza & Feldman, 1981). Cultural evolution has many similarities to biological evolution, and shares concepts with evolutionary psychology, but is itself a distinct form of generational change (Mesoudi et al., 2006, p. 332). Variation, selection, and inheritance are essential to this process – there must be variation in the cultural knowledge exhibited (whether through beliefs, artifacts, or behaviors); there must be selection that occurs as a result of environmental pressures (such as limited attention, memory, or expression); and there must be inheritance of the successful cultural traits (Mesoudi et al., 2006, pp. 331–332). Cultural traits can go extinct due to competition, or accumulate modifications over time, and can be maintained despite being geographically distributed (Mesoudi et al., 2006, p. 332). Although cultural mutations are also more frequent, the variation of cultural traits is largely acquired and maintained by social forces, and these modes of transmission of information can cause cultural evolution to proceed much faster than genetic evolution (Laiolo & Tella, 2007).
As shown in the previous section discussing what animals are capable of: animals are indeed capable of, and are currently engaging in, cultural transmission and cultural evolution (see also Laiolo & Tella, 2007; Laland, 2008; Gruber, Muller, Strimling, Wrangham, & Zuberbühler, 2009; Price, Caldwell, & Whiten, 2010; Nakamura & Nishida, 2013). With such an inclusive definition of culture and cultural evolution, culture is now largely accepted to “be present across a diverse array of
taxa that include fish, reptiles, birds, and mammals” (Keith & Bull, 2017, p. 297, citing Laland & Hoppitt, 2003; Laland & Janik, 2006). Further, cumulative cultural evolution in animals has also been the subject of research. The ‘cumulative’ change is the result of a “‘ratchet-like’ process where design changes are retained at the population level until new improved designs arise” (Hunt & Gray, 2003, p. 867, citing Tomasello, Kruger, & Ratner, 1993; Tomasello, 1999). Although Hunt and Gray (2003) discuss the design of ‘tools’ specifically, the concept of cumulative cultural selection is applicable to behaviors in general: certain behaviors can be transmitted faithfully among individuals, meaning that “an individual does not need to reinvent or recapitulate past designs to obtain the new design [or behavior]” (Hunt & Gray, 2003, p. 867). While some researchers submit that cumulative culture is reserved only for humans (Caldwell & Millen, 2009; Gelfand & Jackson, 2016; Niu, 2015; Tennie, Call, & Tomasello, 2009), others posit that this is not the case at all (Aplin, 2019; Davis, Vale, Schapiro, Lambeth, & Whiten, 2016; Hunt & Gray, 2003; Sasaki & Biro, 2017; Whiten, 2019; Yamamoto, Humle, & Tanaka, 2013). Some authors have suggested that gene-culture co-evolution occurs in animals as well. Gene-culture co-evolution happens when a cultural practice results in a lasting genetic change in the population – where cultural evolution meets biological evolution (Whitehead, 2017). A classic example in humans is the gradual shrinking of digestive tracts over the course of human evolution. This allowed more energy to be spent on the development of the human brain, because the digestive tract, like the brain, consists of energy-hungry tissue, and it became possible because of the cultural practice of cooking food over a fire (Henrich, 2011, citing Wrangham, 2009).
Another example of gene-culture co-evolution in humans is found in the trait of lactose tolerance, an adaptation for the ability to properly digest milk in societies that herd cows as a cultural practice (Beja-Pereira et al., 2003). Like cumulative culture, the capacity for gene-culture co-evolution has been treated as specific to humans and as the reason for the ‘success’ of humanity (Henrich, 2016). However, other researchers claim that this phenomenon has also been observed in animals – for example, certain amounts of mtDNA diversity and patterns of haplotype distribution are argued to be the direct result of cultural ‘hitchhiking’ in whales (where genes spread along with culturally transmitted behaviors), or of variations in dolphin foraging methods (Whitehead, 2017). More recently, researchers have found that the migration and assortative mating behavior of whales and birds can shape the structure and diversity of a population’s genetic pool (Whitehead, Laland, Rendell, Thorogood, & Whiten, 2019). Although more empirical study must be done to fully settle the question, it is clear that there is a steadily growing open-mindedness toward considering animals capable of cumulative culture and gene-culture co-evolution. But what does all this (and especially the gene-culture relationship¹) have to do with morality?

¹ The application of epigenetics to morality has been made before (for example, Wilson, 1998), although future studies should also expand on the potential for a change in moral behavior (gene expression/phenotype) without affecting genotype within non-human animal populations that engage in cultural practices.
6.4.3  Implications for Moral Universality and Diversity

It is often understood that morality is specific to various ‘cultures’ – for example, scholars like Richard Kinnier, Jerry Kernes, and Therese Dautheribes (2000) have composed lists of moral values (held by humans) that are shared, although in a limited manner, by various religions, social groups, and governmental organizations. These moral values include, among other things, self-discipline and acceptance of personal responsibility, as well as respect and care for living things and the environment (Kinnier et al., 2000, p. 10). It seems as though the shared ‘moral values’ are rooted in the promotion of basic requirements for survival in group-living circumstances over long periods of time. Thus, the capacity to form moral norms that are similar in content is simply a consequence of social organization, which, in turn, is a requirement of cultural development, since the latter is a collective (rather than individualistic) endeavor. It should also be noted that while the underlying principles and values seem to be shared across cultures, the expression, application, and detail ascribed to these observed moral norms depend upon the historical, geographical, and societal upbringing of the group, and thus on its cultural history. As a result, moral norms can be universal insofar as they are rooted in general normative behavior, while the varying expressions of such norms are due to the exaptive quality of morality – that it is based on cultural evolution and is thus often context-specific. In his considerations of morality as exaptation, Ayala (2010) does explore gene-culture co-evolutionary theories, but again narrows the discussion to the genus Homo.
As he describes it, there is a proposition that variation in capacities such as sympathy or fidelity (considered ‘moralizing behaviors’) among early hominin groups was subject to selection for genes that “endowed early humans with primitive moral emotions”, which would in turn have stimulated the evolution of increasingly complex moral codes (Ayala, 2010, p. 9020). According to Ayala, gene-culture co-evolutionary theorists suggest that morality’s evolution must “have been directly promoted by natural selection in a process whereby the moral sense and the moral norms would have coevolved”, and that moral behavior evolved because it essentially increased fitness (Ayala, 2010, p. 9020). Ayala distinguishes himself from such theorists, since he believes that if gene-culture co-evolution were true in this sense, it would result in a universal system of morality rather than the high variation of moral codes present across cultures today. For Ayala, moral codes are what developed through cultural evolution, while moral behavior, as a separate category from moral codes, is the exaptation – there is no overlap between the two (Ayala, 2010, p. 9015). Here we can see clearly that I differ from Ayala in that I consider both moral behavior and moral codes to have undergone cultural evolution. Although there may be universal moral values used to justify behavior, what is considered moral behavior (and any codified rule based on it) is entirely dependent on the cultural framework in which the behavior took place. The consideration of morality as an exaptation/secondary adaptation of the behavioral tools used for normativity allows for various types of moral codes resulting from more than one instance of
gene-culture coevolution – it does not have to result in a universal expression of morality at all. There can still exist universal moral facts, but the various codes (and behaviors based on those codes) can all express the same moral value in different ways, contingent on cultural development unique to the socio-geographic area or to the species.

6.4.4  Animals and Moral Behavior

I argue that morality has developed as an exaptation that differs across varying lines of cultural evolution, resulting from the social transmission of specific normative behaviors that changed over time. The biological evolution of morality begins with a branching off of the ongoing line of normative development into different cultural lineages, where cultural selection resulted in the co-opting of normative behavior into ‘moral’ behavior. Given the surrounding culture, specific expressions of morality should be understood as secondary adaptations. Of course, this leaves non-moral norms open to cultural selection pressures as well, but this is only natural, given that moral norms are simply a type of social norm to begin with. And, since animals are capable of engaging in the necessary social behaviors, including normativity and cultural change, these beings should be considered in discussions of morality. Constructions of moral theories should not be premised on requiring certain mental qualities, or purposefully impose intricate conditions on moral agency in an effort to exclude animals. These theories should instead focus on the behavioral similarities that resulted from shared normative tendencies. Seeing morality as a secondary adaptation, a co-opting of the abilities needed for normative behavior, is compatible with the notion that moral norms are universal even while the expression of such norms differs across cultural social groups (or cultural species). Further, even if cumulative culture or gene-culture co-evolution plays an essential role in this evolution of morality, there is positive evidence that animals are capable of such feats, and they should not be counted out of the moral game on these grounds. Although he also argues along the lines of cultural evolution, Ayala (2010) limits the potential of animals to participate in moral behavior.
While he admits that other animals, such as chimpanzees, have what he calls ‘rudimentary’ cultures, he does not believe that they have the advanced intellectual capacities needed for moral behavior, stating that “moral codes [the norms against which actions are judged as good or bad in order to form moral behavior] are products of cultural evolution, a distinctive human mode of evolution” (Ayala, 2010, p. 9021). As we have seen in the section above, Ayala also does not agree that gene-culture co-evolution can be exhibited by animals. According to Ayala, the necessary mental conditions for ethical behavior require a degree of intelligence that is present only in the later hominid lineage, a realm in which the “formation of abstract concepts”, the “anticipation of the future”, or, as previously mentioned, the “development of creative language” are all possible (Ayala, 2010, p. 9021). However, as we have already seen from the
socio-biological studies mentioned above, there are many other species that engage in conforming behavior, and the focus on differences in ‘mental conditions’ rather than on the vast range of social and ‘behavioral’ abilities that non-human animals are capable of misleads Ayala into thinking that culture is exclusive to humans. It is unclear why Ayala limits the mental conditions he outlines as requirements for moral behavior to human intelligence, since the cognitive behaviors he references as examples of such advanced intellect are observed in animals. Animals are capable of a vast number of social behaviors including cumulative (rather than just rudimentary) culture, and exhibit patterns that can fall under norms of obedience, reciprocity, care, social responsibility, or solidarity (Vincent et al., 2018), all of which can exemplify degrees of intelligence that meet Ayala’s minimum requirement for ethical engagement. There are researchers who have also considered the evolutionary history of morality, or at least of parts of morality, as starting with non-human primates. For example, Judith Burkart, Rahel Brügger, and Carel van Schaik state that, “with regard to the phylogenetic origin […] even though full-blown human morality is unique to humans, several of its key elements are not” (Burkart, Brügger, & van Schaik, 2018, p. 1). To them, full-blown human morality “includes explicit moral reasoning and evaluation”, and they claim these mental activities “may well be unique to humans” (Burkart et al., 2018, p. 2). I agree with their recognition that the key elements of morality are shared with animal ancestors. However, if we imagine normativity and social norms as an umbrella term encompassing moral, conventional, and cultural norms, as suggested, and combine this with the evidence of non-primate animals engaging in a multitude of the relevant social behaviors, we see that drawing the line of moral origins at primate history alone might be premature.
A focus on the similarity of behavioral outputs, rather than separation by mental capacities, would allow animals to be considered moral agents.

6.5  Conclusion

In conclusion, this paper has argued that moral norms are a form of social norms, rooted in normativity, and specifically culturally driven in nature. Since empirical study has shown that animals exhibit social learning, normative participation, conformity, and cultural evolution, these beings have all the behavioral prerequisites required for morality. Moreover, moral behavior is not a primary adaptation but an exaptation – a secondary adaptation of normative behavior, stimulated by cultural development and evolution. This account opens up philosophical space for the history of morality’s evolution to be seen as rooted in normative beginnings, and such a process (moving from normativity to morality) is aligned with the view that cultural animals are capable of engaging, from time to time, in moral behaviors. Given this broad understanding of morality I have argued for, there are many different future studies to be done that I think would be meaningful. Aside from establishing that ‘moral’ norms exist in an animal group, the question of moral
truth-tracking would be interesting to pursue. Since I have maintained that universal moral facts can exist despite differing expressions of a moral value in terms of behavioral output, I allow for the possibility of moral truth-tracking capacities being observed in the behavior of animals. All of this requires, again, an inclusive approach to defining ‘morality’, one based on normativity as I have suggested here. In addition, aside from the need for further phylogenetic analysis to substantiate the theoretical arguments made, researchers could examine how (and to what degree) differences in cultural development can directly shape the type of moral behavioral norms present in a given animal society. Overall, by specifically viewing morality as an exaptation of the behavioral abilities required for normativity, shaped by cultural evolution, I offer a distinctive understanding of morality’s origins. Most importantly, by using an inclusive definitional approach in evaluating the complex behavioral capacities of multiple species, I continue the discussion in which cultural animals can be understood as moral agents.

Acknowledgements

I wish to express my sincere gratitude to those who have helped me make this paper possible. I am extremely grateful for the mentorship and guidance I received from Kristin Andrews, without whom I would not have found my passion for such research, nor the knowledge of where to start. I am also deeply appreciative of the encouragement from the editors, Helen De Cruz and Johan De Smedt, who provided me with direct support, especially during the final stages of this work. Finally, I am thankful for all those I was able to speak to about these ideas (in classes, at conference presentations, or otherwise), as well as for those who read previous drafts and were patient with my thinking process/writing progress.

References

Andrews, K. (2019). Naïve normativity: The social foundation of moral cognition. Journal of the American Philosophical Association, 6(1), 36–56.
Andrews, P. W., Gangestad, S. W., & Matthews, D. (2002). Adaptationism – How to carry out an exaptationist program. Behavioral and Brain Sciences, 25, 489–553.
Aplin, L. M. (2019). Culture and cultural evolution in birds: A review of the evidence. Animal Behavior, 147, 179–187.
Aplin, L. M., Farine, D. R., Morand-Ferron, J., Cockburn, A., Thornton, A., & Sheldon, B. (2015). Experimentally induced innovations lead to persistent culture via conformity in wild birds. Nature, 518(7540), 538–541.
Aplin, L. M., Sheldon, B. C., & McElreath, R. (2017). Conformity does not perpetuate suboptimal traditions in a wild population of songbirds. PNAS, 114(30), 7830–7837. https://onlinelibrary.wiley.com/doi/epdf/10.1111/ecog.02481
Ayala, F. J. (2010). The difference of being human: Morality. Proceedings of the National Academy of Sciences of the USA, 107(Suppl. 2) [In the Light of Evolution IV: The Human Condition], 9015–9022.
Beja-Pereira, A., Luikart, G., England, P. R., Bradley, D. G., Jann, O. C., Bertorelle, G., et al. (2003). Gene-culture coevolution between cattle milk protein genes and human lactase genes. Nature Genetics, 35(4), 311–313.
Bekoff, M. (2007). The emotional lives of animals: A leading scientist explores animal joy, sorrow, and empathy – and why they matter. Novato, CA: New World Library.
Box, H. O. (1984). Primate behavior and social ecology. London: Chapman and Hall.
Boyd, R., & Richerson, P. J. (1985). Culture and the evolutionary process. Chicago, IL: University of Chicago Press.
Brennan, G., Eriksson, L., Goodin, R. E., & Southwood, N. (2013). Explaining norms. Oxford, UK: Oxford University Press.
Brosnan, S. F., Talbot, C., Ahlgren, M., Lambeth, S. P., & Schapiro, S. J. (2010). Mechanisms underlying responses to inequitable outcomes in chimpanzees, Pan troglodytes. Animal Behavior, 79(6), 1229–1237.
Burkart, J. M., Brügger, R. K., & van Schaik, C. P. (2018). Evolutionary origins of morality: Insights from non-human primates. Frontiers in Sociology, 3(17), 1–12.
Caldwell, C. A., & Millen, A. E. (2009). Social learning mechanisms and cumulative cultural evolution: Is imitation necessary? Psychological Science, 20(12), 1478–1483.
Cantor, M., & Whitehead, H. (2013). The interplay between social networks and culture: Theoretically and among whales and dolphins. Philosophical Transactions of the Royal Society, B: Biological Sciences, 368(1618), 1–10.
Cavalli-Sforza, L. L., & Feldman, M. W. (1981). Cultural transmission and evolution. Princeton, NJ: Princeton University Press.
Claidière, N., & Whiten, A. (2012). Integrating the study of conformity and culture in humans and nonhuman animals. Psychological Bulletin, 138(1), 126–145.
Davis, S. J., Vale, G. L., Schapiro, S. J., Lambeth, S. P., & Whiten, A. (2016). Foundations of cumulative culture in apes: Improved foraging efficiency through relinquishing and combining witnessed behaviors in chimpanzees (Pan troglodytes). Scientific Reports, 6, 112.
de Waal, F. B. M. (2014). Natural normativity: The ‘is’ and ‘ought’ of animal behavior. In F. B. M. de Waal, P. Churchland, T. Pievani, & S. Parmigiani (Eds.), Evolved morality: The biology and philosophy of human conscience (pp. 185–204). Leiden, the Netherlands: Brill Publishing.
Dennett, D. (1995). Darwin’s dangerous idea. New York: Simon & Schuster.
Edwards, L. (2017). Ought ought not imply can. GS/PHIL6420 Moral Psychology, York University, Fall semester 2017, graduate student paper (draft provided by L. Edwards).
Fraser, B. J. (2010). Adaptation, exaptation, by-products, and spandrels in evolutionary explanations of morality. Biological Theory, 5(3), 223–227.
Galef, B. G. (1988). Imitation in animals: History, definition and interpretation of data from the psychological laboratory. In T. R. Zentall & B. G. Galef (Eds.), Social learning: Psychological and biological perspectives (pp. 3–28). Hillsdale, NJ: Erlbaum.
Gelfand, M. J., & Jackson, J. C. (2016). From one mind to many: The emerging science of cultural norms. Current Opinion in Psychology, 8, 175–181.
Gert, B., & Gert, J. (2017). The definition of morality. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/fall2017/entries/morality-definition/
Gould, S. J. (1997). The exaptive excellence of spandrels as a term and prototype. Proceedings of the National Academy of Sciences of the USA, 94, 10750–10755.
Gould, S. J., & Lewontin, R. C. (1979). The spandrels of San Marco and the Panglossian paradigm: A critique of the adaptationist programme. Proceedings of the Royal Society B of London, 205, 581–598.
Gould, S. J., & Vrba, E. S. (1982). Exaptation – A missing term in the science of form. Paleobiology, 8(1), 4–15.
Gruber, T., Muller, M., Strimling, P., Wrangham, R., & Zuberbühler, K. (2009). Wild chimpanzees rely on cultural knowledge to solve an experimental honey acquisition task. Current Biology, 19, 1806–1810.
Gruber, T., Zuberbühler, K., Clément, F., & van Schaik, C. P. (2015). Apes have culture but may not know that they do. Frontiers in Psychology, 6, 1–14.
Heath, J. (2017). Morality, convention and conventional morality. Philosophical Explorations, 20(3), 276–293.
Henrich, J. (2011). A cultural species: How culture drove human evolution – A multi-disciplinary framework for understanding culture, cognition and behavior. Psychological Science Agenda – Science Brief. https://www.apa.org/science/about/psa/2011/11/human-evolution
Henrich, J. (2016). The secret of our success. Princeton, NJ: Princeton University Press.

6  Social Animals and the Potential for Morality: On the Cultural Exaptation…

133

Heyes, C. M. (1994). Social learning in animals: Categories and mechanisms. Biological Reviews of the Cambridge Philosophical Society, 69(2), 207–231. Horner, V., Proctor, D., Bonnie, K. E., Whiten, A., & de Waal, F. B. M. (2010). Prestige affects cultural learning in chimpanzees. PLoS One, 5(5), 1–5. Hunt, G. R., & Gray, R. D. (2003). Diversification and cumulative evolution in new Caledonian crow tool manufacture. Proceedings of the Royal Society of London B, 270, 867–874. Jaeggi, A.  V., Dunkel, L.  P., Van Noordwijk, M.  A., Wich, S.  A., Sura, A.  A., & van Schaik, C. P. (2010). Social learning of diet and foraging skills by wild immature Bornean orangutans: Implications for culture. American Journal of Primatology, 72(1), 62–71. Josephs, M., & Rakoczy, H. (2016). Young children think you can opt out of social- conventional but not moral practices. Cognitive Development, 39, 197–204. Joyce, R. (2014). The origins of moral judgement. Behavior, 151, 261–278. Keith, S.  A. & Bull, J.  W. (2017). Animal culture impacts species’ capacity to realise climate-­ driven range shifts. Ecography, 40, 296–304. https://onlinelibrary.wiley.com/doi/epdf/10.1111/ ecog.02481 Kinnier, R., Kernes, J., & Dautheribes, T. (2000). A short list of universal moral values. Counselling and Values – Issues and Insights, 45, 4–16. Laiolo, P., & Tella, J. L. (2007). Erosion of animal cultures in fragmented landscapes. Frontiers in Ecology and the Environment, 5(2), 68–72. Laland, K. N. (2008). Animal Cultures. Current Biology, 18(9), R366–R370. Laland, K. N., & Brown, G. R. (2002). Sense and nonsense: Evolutionary perspectives on human behaviour. Oxford, UK: Oxford University Press. Laland, K. N., & Hoppitt, W. (2003). Do animals have culture? Evolutionary Anthropology, 12(/3), 150–159. Laland, K. N., & Janik, V. M. (2006). The animal cultures debate. Trends in Ecology & Evolution, 21(10), 542–547. Lorini, G. (2018). 
Animal norms: An investigation of normativity in the non-human social world. Law, Culture and the Humanities, 1–22. Machery, E., and Mallon, R. (2012). Evolution of Morality. In Doris, J.  M. (ed.) & Moral Psychology Research Group, The moral psychology handbook (Oxford University Press, Oxford, UK), 3–46. Mesoudi, A., Whiten, A., & Laland, K. N. (2006). Towards a unified science of cultural evolution. Behavioral and Brain Sciences, 29(4), 329–347. Nakamura, M., & Nishida, T. (2013). Ontogeny of a social custom in wild chimpanzees: Age changes in grooming hand-clasp at Mahale. American Journal of Primatology, 75, 186–196. Nichols, S. (2005). Innateness and moral psychology. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind: Structure and contents (pp. 353–430). New York: Oxford University Press. Niu, W. (2015). Commentary on chapter 14: Conservatism versus innovation: The great ape story by Josep Call. In A. B. Kaufman & J. C. Kaufman (Eds.), Animal creativity and innovation: Explorations in creativity research (pp. 397–418). Cambridge, UK: Academic. Ohtsuki, H., & Iwasa, Y. (2004). How should we define goodness?—Reputation dynamics in indirect reciprocity. Journal of Theoretical Biology, 231(1), 107–120. Pierce, J., & Bekoff, M. (2009). Wild justice: The moral lives of animals. Chicago, IL: University of Chicago Press. Price, E.  E., Caldwell, C.  A., & Whiten, A. (2010). Comparative cultural cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 1(1), 23–31. Prinz, J. (2007). The emotional construction of morals. Oxford, UK: Oxford University Press. Prinz, J. (2008). Is morality innate? In W. Sinnott-Armstrong (ed.) Moral Psychology 1: The evolution of morality – Adaptations and innateness (MIT Press, Cambridge, MA) 367–406. Ramsey, G. (2017). What is animal culture? In K. Andrews & J. Beck (Eds.), Routledge companion to the philosophy of animal minds (pp. 345–353). London: Routledge. Raz, J. (1998). Multiculturalism. 
Ratio Juris, 11(3), 193–205.

134

E. Palao

Rendell, L., & Whitehead, H. (2001). Culture in whales and dolphins. Behavioral and Brain Sciences, 24, 309–382. Richerson, P. J., & Boyd, R. (2005). Not by genes alone: How culture transformed human evolution. Chicago, IL: University of Chicago Press. Sapolsky, R.  M. (2006). Culture in animals: The case of a non-human primate culture of low aggression and high affiliation. Social Forces, 85(1), 217–233. Sasaki, T., & Biro, D. (2017). Cumulative culture can emerge from collective intelligence in animal groups. Nature Communications, 8, 1–6. Schmidt, M. F. H., Butler, L. P., Heinz, J., & Tomasello, M. (2016). Young children see a single action and infer a social norm: Promiscuous normativity in 3-year-olds. Psychological Science, 27(10), 1360–1370. Sosis, R. (2009). The Adaptationist-Byproduct debate on the evolution of religion: Five misunderstandings of the Adaptationist program. Journal of Cognition and Culture, 9, 315–332. https:// richard-­sosis.uconn.edu/wp-­content/uploads/sites/2243/2018/06/2009-­Sosis-­Adaptionist-­ Byproduct-­Debate.pdf Tennie, C., Call, J., & Tomasello, M. (2009). Ratcheting up the ratchet: On the evolution of cumulative culture. Philosophical Transactions of the Royal Society B, 364, 2405–2415. Tiberius, V. (2015). Moral psychology: A contemporary introduction. In P.  K. Moser (Ed.), Routledge contemporary introductions to philosophy. New York: Routledge. Tomasello, M. (1999). The cultural origins of human cognition. Boston, MA: Harvard University Press. Tomasello, M., Kruger, A. C., & Ratner, H. H. (1993). Cultural learning. Behavioral and Brain Sciences, 16, 495–552. van Leeuwen, E.  J. C., Acerbi, A., Kendal, R.  L., Tennie, C., & Haun, D.  B. M. (2016). A Reappreciation of ‘Conformity’. Animal Behavior, 122, e5–e10. van Schaik, C. P., Ancrenaz, M., Borgen, G., Galdikas, B., Knott, C. D., Singleton, I., et al. (2003). Orangutan cultures and the evolution of material culture. Science, 299, 102–105. Vincent, S., Ring, R., & Andrews, K. 
(2018). Normative practices of other animals. In A. Zimmerman, K. Jones, & M. Timmons (Eds.), The routledge handbook of moral epistemology (pp. 57–83). London: Routledge. Weaver, I. C. G., Cervoni, N., Champagne, F. A., D’Alessio, A. C., Sharma, S., Seckl, J. R., et al. (2004). Epigenetic programming by maternal behavior. Nature Neuroscience, 7, 847–854. Webster, M. M., & Laland, K. N. (2012). Social information, conformity and the opportunity costs paid by foraging fish. Behavioral Ecology and Sociobiology, 66, 797–809. Whitehead, H. (2017). Gene-culture coevolution in whales and dolphin. PNAS, 114(30), 7814–7821. https://www.pnas.org/content/pnas/114/30/7814.full.pdf Whitehead, H., Rendell, L., Osborne, R. W., & Würsig, B. (2004). Culture and conservation of nonhumans with reference to whales and dolphins: Review and new directions. Biological Conservation, 120, 427–437. Whitehead, H., Laland, K.  N., Rendell, L., Thorogood, R., & Whiten, A. (2019). The reach of gene–culture coevolution in animals. Nature Communications, 10(2405), 1–10. Whiten, A. (2012). Social learning, traditions and culture. In J.  Mitani, J.  Call, P.  Kappeler, P. Palombit, & J. Silk (Eds.), The evolution of primate societies (pp. 681–699). Chicago, IL: Chicago University Press. Whiten, A. (2019). Cultural evolution in animals. Annual Review of Ecology, Evolution, and Systematics, 50(1), 27–48. Wilson, E. (1998). The biological basis of morality. The Atlantic 4, online https://www.theatlantic. com/magazine/archive/1998/04/the-­biological-­basis-­of-­morality/377087/ Wrangham, R. W. (2009). Catching fire: How cooking made us human. New York: Basic Books. Yamamoto, S., Humle, T., & Tanaka, M. (2013). Basis for cumulative cultural evolution in chimpanzees: Social learning of a more efficient tool-use technique. PLoS One, 8(1), 1–5. Yasui, S., & Idani, G. (2017). Social significance of trunk use in captive Asian elephants. Ethology Ecology and Evolution, 29(4), 330–350.

Chapter 7

Against the Evolutionary Debunking of Morality: Deconstructing a Philosophical Myth

Alejandro Rosas

Abstract  Evolutionary ethics debunks moral realism – or value realism in general (Street S, Philos Stud 127:109–166, 2006) – but this is not the same as debunking the authority of moral claims, for moral realism is not the only possible explanation of the source of moral authority. However, a few influential evolutionary philosophers do believe that evolution debunks not just moral realism, but morality, period (Joyce R, The evolution of morality. MIT Press, Cambridge, MA, 2006; Ruse M, Taking Darwin seriously. Basil Blackwell, Oxford, 1986). My main purpose in this paper is to highlight the difference between these two versions of debunking, and to extricate evolutionary theory from being publicly associated with debunking morality, period. Briefly summarized, the latter view is linked to the claim that unless one believes (however falsely) in the objectivity of moral injunctions, the experience of their peculiar authority will not be available. This claim is an unexpected survival of a basic tenet of moral realism, namely, that moral norms derive their authority from objective realities. It is unfortunate to see this claim survive in evolutionary ethicists. They should rather embrace the view that the universal authority of moral norms is vindicated via a set of evolved, socially conditioned, psychological constraints on self-interest, none of which include a belief in the mind-independent objectivity of value.

Keywords  Antirealism · Authority (moral) · Cooperation · Darwin · Descriptive/prescriptive · Evolution · Obligation · Objectivity · Projectivism · Realism · Values

A. Rosas (*)
Department of Philosophy, Universidad Nacional de Colombia, Bogotá, Colombia
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
J. De Smedt, H. De Cruz (eds.), Empirically Engaged Evolutionary Ethics, Synthese Library 437, https://doi.org/10.1007/978-3-030-68802-8_7


7.1 Mechanistic Explanation vs. Cognitive Capture of Objective Values

A mechanistic or “nuts and bolts” approach is implicit in any attempt to naturalize morality. It must explain the authority of moral norms, replacing the explanations based on objective moral value. According to naturalist and anti-realist views, mind structures are ultimately responsible for infusing natural features of actions (characters, practices or institutions) with moral relevance. “Objective moral value” would make such mind structures unnecessary, or at most a by-product of the existence of such value. In contrast, naturalists must explain how morality and its peculiar authority emerge within our psychological constitution with no appeal to values as mind-independent realities.

Among the reasons for believing that our minds were shaped by evolutionary history, high standing is granted to the view that value ascriptions, implicit or explicit, are influenced by biological and psychological facts about how organisms achieve individual fitness (Darwin, 1871, 73; Ruse & Wilson, 1986, 186; Street, 2006). This should certainly lead to “debunking” moral realism, or value realism in general, but not to debunking moral authority, period. However, a few evolutionary philosophers do believe that evolution debunks morality, period (Joyce, 2006; Ruse, 1986). My main purpose, therefore, is to argue that these debunking views are importantly different. In particular, evolutionary theory should not be associated in the mind of the public with debunking morality, period. I argue that it is possible to explain the special kind of authority that moral injunctions have over us humans – excepting psychopaths (Blair, 1995) – without any appeal to real or fictitious objective moral properties.
This approach is by nature mechanistic and contrasts with the traditional philosophical approach that aims both to explain and to justify moral obligation by a detached and sovereign cognitive capture of objective moral properties like rightness or wrongness, conceived as intrinsic to actions, rules or institutions. In contrast, the mechanistic approach conceives of them as responses to natural features that are only salient and morally relevant because of the peculiar constitution of social minds like ours. My main aim, however, is not to present a full-fledged elaboration of the mechanisms behind moral authority. Rather, it is to show that evolution by natural selection does nothing to debunk morality, and to explain why a few evolutionary-minded philosophers have come to promote this mistaken idea. Evolution might debunk moral realism; but moral realism is probably a misguided interpretation of the grounds of moral obligation. Evolution, together with psychology, can render an important service by illuminating the real psychological sources of moral obligation. In this sense, this chapter welcomes the scientific attempts of evolutionary-minded psychologists to contribute to this illumination (Tomasello, 2020).

One way in which philosophers have paved the way toward a naturalistic and mechanistic project about the source of moral obligation is by calling attention to the semantic contrast between descriptive and prescriptive statements. Descriptive statements have a mind-to-world direction of fit: what they say represents how the world is. Prescriptive statements have a world-to-mind direction of fit: what they say represents what the world ought to be like, not what it is like. Prescriptive statements, similar to desires, push the agent towards a specific action or inaction that should make the world conform to what the statement says. They reveal not something about the world, but rather something about how we would like the world to be. They reveal the things we value. Valuing is something that living beings do; values exist only from their particular perspectives, and the things that organisms value, e.g. as apt for food, can differ from one organism to another. Because valuing depends in this way on the biological constitution of organisms, values are not part of the mind/life-independent furniture of the world. The force that values exert on our behavior does not arise from a sovereignly detached cognitive capture of properties out there. The source of values lies in the adaptive needs and interests of organisms in their dealings with their environment.

When statements of value are not just about our individual preferences, but about how the world morally ought to be, they carry authority over our individual desires and aims, especially when they regulate conflicts between individuals. This authority transcends individual desires and arises from our commitment to a social way of life. Social life entangles us in a web of social relations and tensions. To get to the theoretical source of this kind of authority is not primarily to reveal something about how the world outside us is, but something about how we collectively want to shape our relations and solve the tensions. We can still say things like ‘“X is wrong” is true’, but this should be interpreted as in the deflationist theory of truth (Blackburn, 1998).
“Deflationism” is just a way of stating in philosophical jargon that when we say such things in ordinary moral conversation, we do not intend to introduce a realist stance about objective moral value, but to endorse and reaffirm the prescriptive statement in question. Maybe moral realists would like to say that those relations and tensions between the personal and the collective are precisely the real moral properties they talk about. But those relations and tensions are just that: relations and tensions between our interests and those of others. For sure, those relations and tensions will be impacted by many of the actions and plans we carry out. Some of those actions will deteriorate relations and increase tensions, and we will at least consider calling them “wrong”. And surely, there are factual objective differences in the way those relations are impacted by actions that we distinguish as being right in contrast to being wrong. We do discriminate right from wrong through those factual objective differences (e.g. equal or unequal distribution of goods for equal or unequal effort and resources invested). But the rightness or wrongness of equal or unequal distribution is not objective in the same way in which the distribution can be objectively measured as being equal or not. These evaluations are just particular responses of approval or disapproval, sometimes with the explicit support of community norms.

These responses or attitudes arise because of the way our mind is designed. This design is the object of a “mechanistic explanation”, which targets the functional organization of psychological capacities and how their interaction results in the particular feeling of obligation that we normally experience in moral injunctions. I shall give a hypothetical sketch of such an explanation in Sect. 7.4. There are other kinds of minds among living beings that either have no morality, meaning that their selfish desires are inescapable and overriding; or that could have a different kind of morality. As Darwin once said, intelligent social female insects would feel it their sacred duty to sacrifice their male siblings (Darwin, 1871, 73).

The sense of moral obligation comes from the authority that moral injunctions have over desires directed solely at satisfying our individual well-being, when they conflict in particular ways with the interests of others or of the collective. In such cases, we experience moral obligation as having the capacity to silence our selfish desires. Kant often wrote of how the moral law “humbles” and “strikes down” self-conceit (Kant, 2002, 96ff). The moral law is: “…a law before which all inclinations fall silent even if they secretly work against it…” (Kant, 2002, 111; on moral clout as the ability to “silence further calculation” see Joyce, 2006, 111). Charles Darwin (1871, 70–71), deeply touched by Kant’s words, quoted them in full from a translation into English by J. W. Semple published in Edinburgh in 1836:

Duty! Wondrous thought, that workest neither by fond insinuation, flattery, nor by any threat, but merely by holding up thy naked law in the soul, and so extorting for thyself always reverence, if not always obedience; before whom all appetites are dumb, however secretly they rebel; whence thy original?

Darwin chose Kant’s description of the subjective experience of the authority of moral norms to guide his speculative attempt to naturalize morality. It seemed obvious to him that this internal authoritative voice, expressed in the moral “ought”, is what a scientific account must be able to explain. Related evolutionary views on the psychology underlying the authority expressed by the moral “ought” are present in Trivers (1971) and Frank (1988).

7.2 Moral Objectivity and Its Sequels in the Philosophical Tradition

There is a tradition in philosophy that has tried to erase the semantic difference between descriptive and prescriptive statements by arguing that evaluations refer to values and that values are objective realities, to be captured in descriptive language. It goes back to Plato’s Form of the Good. Forms are realities that exist beyond the spatiotemporal world perceived through our senses, providing at the same time the archetypes imperfectly copied by the material things around us. This also applies, according to Plato, to the good we can realize in our actions, which is only a dim copy of the archetypal Form of the Good.


7.2.1 Hume and the Projection Mechanism

When Hume tried to escape this tradition, he unwillingly preserved it in a disguised form. He suggested that the belief in objective moral properties – as matters of fact to be ascertained by our cognitive capacities – is an illusion of sorts, the product of the projection of sentiments onto objective features of actions or situations that are not themselves values, though they trigger in us pleasure or uneasiness (Hume, 1739/1978, pp. 469, 471). He had similar philosophical views on sensible qualities and on causation, where he referred to the mind’s “propensity to spread itself on external objects” (Hume, 1739/1978, p. 167). He thus explained why we populate the world with illusory objective features that are only the projection of subjective responses on our part. In the same way he referred to taste as creative, which, “gilding or staining all natural objects with the colours borrowed from internal sentiment, raises in a manner a new creation” (Hume, 1902/1961, p. 294). However, Hume also knew that the case of values should be treated differently from the cases of qualia, causation or beauty: the two lie on opposite sides of the descriptive/prescriptive divide. Indeed, he expressed his uneasiness at the observation that moral philosophers imperceptibly reason from “is” to “ought” propositions without giving any explanation (Hume, 1739/1978, p. 469).

Intriguingly, the projection mechanism suggests, in the case of moral value, that the sense of moral obligation needs support from the cognitive capture of an objective property, even if only an illusory one. But Hume could have meant that only moral realists engage in this projection. He could have thought that moral authority can be felt by most humans without engaging in such projection. He would not then be guilty of paying a subtle tribute to moral realism.
His view, however, is ambiguous, and the mind’s projecting ability has been read as providing us with an illusion of objective moral value, presumably affecting humans as a species (Mackie, 1977, 42; Ruse, 1986, 253ff; Joyce, 2006, 126). The evolutionary philosophers who popularized the idea of evolutionary debunking (Joyce, 2006; Ruse, 1986) read Hume in this way and conceived the projection mechanism as innate and as a product of natural selection, and thus a human universal, or nearly one. I shall say something more about them in a later section. But since this “projection” might be confined only to a special group of individuals, particularly philosophers exposed to the influence of academic discussions of Plato’s mythical Forms, or of their modern and contemporary versions, I briefly explore in the next sub-section whether the “projection” of moral attitudes onto objective features should rather be interpreted as restricted to a cultural-ideological tradition, one that lacks any actual support in a biologically selected and hardwired mechanism with the special function of generating an experience of moral obligation.


7.2.2 Mackie and the Claim to Objectivity in Moral Language

Similar to Hume’s psychological hypothesis, Mackie (1977) famously argued that moral language has a claim to objectivity as part of its everyday use and meaning. He was also explicit and emphatic about the reach of this semantic claim to objectivity: it only reflects how moral language is used in our culture. One cannot infer from it that moral values are objective, but only that normal language users believe that they are. The standard uses of language only tell us about the beliefs that the community of users have come to widely share. Moral language use is not evidence that moral properties really exist out there. It is just evidence of users’ beliefs, as other bits of language are evidence that users believe in atoms, galaxies or gods. This belief in objective moral properties, however widespread, can be wrong, and this is what Mackie attempted to prove. But the proof only affects the semantic-realist implications of moral language; it in no way affects the endorsement of first-order moral judgments, or even commonsense talk about the truth of moral statements, i.e., in a deflationist sense (Blackburn, 1998).

However, one can legitimately ask whether Mackie’s view about the objectivity claim in moral language captures the usage of the folk, or just the usage of philosophers, or of some philosophers, particularly the heirs to an ancient and revered tradition going back to Plato. It is arguable that the folk only get from moral language its normative validity and authority, and do not really care much about – or even clearly understand – the claim to objectivity. All they may care about is the authority that moral statements claim, in some cases, over our individual and personal plans, intentions and actions.
Some researchers have conducted surveys to establish whether the folk hold realist meta-ethical stances regarding moral judgments (Beebe & Sackris, 2016; Goodwin & Darley, 2008; Nichols & Folds-Bennett, 2003). They designed a number of measures of moral realism. But it is arguable that such measures are inadequate; it is not clear how to measure moral realism in the folk. If you present participants with moral claims (e.g. “It is morally wrong to discriminate on the basis of race”) and ask them whether the claim is true, false or a matter of opinion, and some answer “true”, does this mean these respondents endorse meta-ethical realism? Not necessarily, for they can answer in this way to express their endorsement of the moral claim as authoritative. The same can be said if you ask them whether two persons disagreeing about this claim can both be correct, or whether one must be mistaken. If some participants answer that one must be mistaken, they might just be expressing that this is a moral claim with authority over whatever racial likes or dislikes you might have (see Pölzler, 2018 for a more detailed discussion). It remains possible to interpret such answers as the deflationist theory of truth proposes, namely, that asserting a moral statement as true merely means reaffirming and endorsing the prescriptive statement, precisely as prescriptive, with no intention of declaring that it is also descriptive (Blackburn, 1998). The view that the normative validity and authority of moral claims must find support in a realist conception of moral predicates could just be an effect of the enculturation of academic research within the philosophical tradition of moral realism. A recent paper with a carefully designed new methodology for establishing folk meta-ethical views has found that more than two thirds of participants endorse anti-realist views (Pölzler & Wright, 2020). Noteworthy, however, is that Pölzler and Wright (2020) had to introduce their participants to concise definitions of meta-ethical concepts in their new design.

Hence, the claim to objectivity in moral language could be specific to its philosophical use in a widespread tradition, a tradition also familiar to philosophers who do not endorse it and have tried to controvert it, as Mackie did. But Mackie was perhaps wrong in assuming that the folk also understand and endorse this philosophical usage. The folk might not distinguish between the claim to normative validity and authoritativeness on one hand, and the claim to truth on the other; in other words, the folk might be completely blind to, and unaware of, the meta-ethical issues. Deflationism, as I said, is just a philosophical way of expressing their attitude and in no way presupposes that the folk consciously adopt a meta-ethical stance, unless researchers explicitly introduce them to such stances, as done by Pölzler and Wright (2020).

This view is supported by the fact that Mackie, even though he believed that the claim to objectivity is semantically present in ordinary usage, argued for excising it. He did not doubt for a moment that moral language would survive this excision. Yet that would not be possible if the claim to objectivity were a necessary component of moral language. If it is not a necessary component, what is it then? One alternative is to view it as a cultural tradition. This need not exclude there being something like a projection mechanism responsible for how this philosophical tradition persists in its claims.
The projection mechanism would produce the illusion of objectivity and would influence the semantics of moral language, but only for those who project. It would not be something built into the design of the human mind. Rather, the mechanism would only be generated in minds trained in specific philosophical traditions, more akin to a foible than an expertise (Lillehammer, 2003, 579–580). The projection involves a confusion that only philosophically trained minds could be subject to, namely, applying a notion of truth as correspondence, developed in a theoretical discussion about the (im)plausibility of skepticism about descriptive statements, to a different type of statement with a different semantics (a different direction of fit), namely prescriptive ones. It is thus a cultural idiosyncrasy, far from innate, and therefore not something favored by natural selection.

7.3 Evolutionary Debunking: Selection for Illusory Objectivity

In the last four or five decades, researchers interested in human morality have become aware of an interesting theoretical convergence between the evolutionary biology of human cooperation, the philosophical theories in modernity about the origins of justice and of the state, the interpretation of those theories by authors like Hampton (1986), Kavka (1986), Gauthier (1986) and others, and experiments in economics and social psychology about human cooperation in social dilemmas (Fehr & Fischbacher, 2004). They all have implicitly or explicitly applied game theory to understand how humans cooperate and create practices, norms and institutions of justice, e.g. to understand the origins of the modern state as a cooperative enterprise (see especially Hampton, 1986).

From an evolutionary point of view, a social species that requires collective action to survive and thrive needs to make cooperation possible. Species that exhibit the cognitive capacities to flexibly coordinate collective action will recurrently experience social dilemmas (e.g. the prisoner’s dilemma). Such recurrence must have been a crucial selection pressure in the evolutionary ancestry of the human lineage, favoring psychological variations capable of making cooperation work. Trivers (1971) was the first to explicitly introduce game-theoretic notions in his model of the evolution of reciprocal altruism. His application of this model to humans proved especially successful. This was followed by Axelrod and Hamilton (1981), and then, specifically for the human case, by Axelrod (1984), Boyd and Richerson (1992) and a vast evolutionary literature offering many a fruitful insight into the evolution of human cooperation and morality (Rosas, 2010, 2011).

The obstacle to cooperation in social dilemmas is the inertial temptation in biological organisms to defect in cooperative enterprises. Plausibly, this temptation is based on a naturally selected drive to look out for oneself – at the expense of others if need be. Overcoming this temptation requires a strong opposition, which presents itself at the psychological level of felt drives in species with the appropriate psychological capacities.
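The structure of the dilemma just described can be made concrete with a minimal computational sketch, in the spirit of Trivers (1971) and Axelrod (1984). This is an illustration, not anything from the chapter itself: the payoff values are chosen freely, respecting only the conventional prisoner’s-dilemma ordering T > R > P > S, and the two strategies are the standard textbook ones.

```python
# Row player's payoff for (my_move, other_move); C = cooperate, D = defect.
# T (temptation) > R (reward) > P (punishment) > S (sucker's payoff).
PAYOFF = {
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: cooperate against a defector
    ("D", "C"): 5,  # T: defect against a cooperator
    ("D", "D"): 1,  # P: mutual defection
}

def always_defect(history_self, history_other):
    """The 'inertial temptation': defect no matter what."""
    return "D"

def tit_for_tat(history_self, history_other):
    """Reciprocity: cooperate first, then copy the partner's last move."""
    return "C" if not history_other else history_other[-1]

def play(strategy_a, strategy_b, rounds):
    """Iterate the dilemma and return cumulative scores (a, b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# In a single round, defection dominates (5 > 3 and 1 > 0); over repeated
# rounds, mutual reciprocity outscores mutual defection.
print(play(tit_for_tat, tit_for_tat, 10))      # (30, 30)
print(play(always_defect, always_defect, 10))  # (10, 10)
print(play(tit_for_tat, always_defect, 10))    # (9, 14): exploited once, then retaliates
```

The sketch shows in miniature why repeated interaction is the crucial selection pressure mentioned above: a disposition that resists the one-shot temptation to defect does better in the long run against its own kind than unchecked self-interest does.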
It seems reasonable to locate the force opposing a predominantly selfish psychology in the feeling of moral obligation expressed in moral concepts like “ought” and “wrong”. As noted above, the opposition between morality and selfishness (self-conceit) was a central insight in Immanuel Kant’s moral philosophy; and it is not a coincidence that this insight deeply impressed Darwin when he attempted to approach the subject as a naturalist. Darwin never cared to mention something like the objectivity of moral injunctions. When he explained the force they exert on us, he mentioned three sources: the force of the social instincts, which he understood as an instinctive regard for the good of others; the fear of punishment or disapprobation of our fellow men, which presupposes standing social norms for public praise and blame; and lastly the fear of divine punishment, if a belief in a moral God is available (Darwin, 1871, 92–93). Regarding the first two sources, Darwin formulated sketchy evolutionary explanations; for he believed that they were designed in our minds by natural selection in order to induce us to care for the common good. Interestingly, these sources possess no connection to the philosophical idea of moral objectivity. Rather, they go in the direction of a mechanistic, psychological explanation, meaning an explanation that replaces the detached cognitive capture of objective moral properties with a different explanation of the experience of moral authority. This experience arises internally in the mind, presumably from the interaction of psychological capacities that make the intuition of objective goodness unnecessary. Value cannot be objective for someone who believes in the evolution of life from inert matter. Value anti-realists

7  Against the Evolutionary Debunking of Morality: Deconstructing a Philosophical…


must eventually provide this mechanistic explanation, as indicated in Sect. 7.1. The third source – the belief in a moral God – struck Darwin as a cultural creation (Darwin, 1871, 394–395). The view that a belief in a moral God can reinforce a sense of moral obligation in large societies – where exhaustive mutual monitoring is impossible – has recently been advocated by Norenzayan and Shariff (2008). This view, in its best version, conceives of God as the personification of an impartial, omniscient and omnipotent judge. Certainly, the existence of a Divine Judge must dissolve at the hands of most evolutionary thinking, but the other two sources remain in place and are powerful enough to create a sense of moral obligation. I offer a brief sketch of how this can happen in Sect. 7.4.

We only need to resist the thought that the feeling of moral obligation needs to pass a test of correspondence to objective reality. Given other facts about the life of our ancestors, having a moral sense furthered the survival of the lineages that had it, relative to those that lacked it. The change to a moral way of life is comparable to the change to a bipedal way of life. Both happened for adaptive reasons, and asking for an objective backing to those changes is as pointless in morality as it is in bipedalism. The only possible justification lies in an explanation of their benefits for survival and reproduction, intrinsic to how natural selection works. Thus, if Darwin believed that morality evolved to help our ancestors survive in their specific circumstances, without any requirement of meeting any objective moral standard, he could have had no room in his mind for something like an evolutionary debunking of morality, period. Evolution simply promoted morality as a way of life for organisms like our ancestors. This way of life need not meet some objective standard to be authoritative and successful.
The debunking of morality is not a thought that strikes the evolutionary naturalist; it can only be a legitimate concern for those who cannot shake off the view that morality has to meet some objective standard, i.e., for advocates of realist theories of value, as Street (2006) has argued (see also Lillehammer, 2003, 579–580; Kahane, 2011, 116). How shall we then explain that those philosophers who follow Darwin in claiming that evolution shaped our psychology and made us moral creatures have come to simultaneously endorse the claim that evolution debunks morality, period? We already mentioned that Hume, a naturalist who tried to escape from the philosophical tradition of moral realism, paid a subtle tribute to this tradition by suggesting that our minds produce an illusion of objective value through a “propensity of the mind to spread itself on external objects”, a projection mechanism. Ruse (1986) picked up this idea and suggested that natural selection shaped our human minds to produce this illusion in order to give authority to moral injunctions: “…morality simply does not work…unless we believe that it is objective” (Ruse, 1986, 253). More clearly than in Hume, this commits Ruse to the claim that only a belief in objectivity can give morality its peculiar authority. Ruse obviously rejects the truth of a belief in moral objectivity; but true or not, he preserves the connection between this


belief and the experience of moral authority, and thus echoes the Platonic view1 that unless the predicates “just” and “pious” refer to some objective reality anteceding our minds, moral thought and language is only empty discourse and cannot have real authority over us. Joyce (2006) provokes in his readers a sense of philosophical wonder at the peculiar authority – “moral clout” – that moral judgments exert over us, something utterly different from the force of hypothetical imperatives or of the rules of etiquette (Joyce, 2006, 57ff). This different type of authority demands a special explanation. Joyce provides one when he answers this question: what did natural selection change in the minds of our ancestors in order to give them (and us) a moral sense, a sense of being compelled by the mere awareness of moral injunctions? In his answer, he recalls Hume’s projection mechanism: a moral attitude of approval or disapproval towards an action or character is projected as an objective feature of that action. This projection supports a cognitivist and illusory-objectivist understanding of moral judgments and predicates (Joyce, 2006, 126). I am here interested in debunking one element of this hypothesis, the element that says that the phenomenology – i.e., our everyday experience of values – is “one as of the emotional activity being a response to attributes instantiated in the world” (Joyce, 2006, 129). Joyce means that our approval or disapproval is subjectively experienced as the effect (not the cause) of perceiving or intuiting the objective moral property as inherent in the action itself. This subjective experience is the characteristic achievement of the projection mechanism. But my claim that the connection between morality and the belief in moral objectivism is idiosyncratic to philosophers implies that Joyce’s claim about a naturally selected projection mechanism is wrong in the case of moral values.
(I am not taking any stance with regard to colors or causality, where the phenomenology could be true, because we do commonsensically describe the world with those concepts.) Consequently, the phenomenology is also confined to philosophers or to any person trained in, or read up on, the relevant philosophical theories. As I argued in Sect. 7.2 above, the projection mechanism is not a psychological universal but a phenomenon mainly present in philosophers and in academics familiar with the notion of a mind-independent moral objectivity. This mechanism, and the phenomenological experience it creates, are not universal features of the human mind. Rather, they arise from a philosophical confusion. Joyce adds support to the claim with something that was unavailable to Hume. He backs this phenomenology with a nifty twist in the evolutionary hypothesis about the function of morality: since morality’s function is to encourage prosocial behavior against our selfish impulses, Joyce speculates that “moral judgments

1  My claim here is not about historical paths of influence, but about typological similarities in thought. That said, it seems clear to me that other historical sources of the view that moral realism (or a belief in it) explains the experience of moral obligation were influenced by Plato, however indirectly. Modern rationalists did view reason – with its (innate) dispositions to form ideas preordained to correspond to reality – as an entity that transcends the material, scientifically accessible world. But be that as it may, evolutionary debunkers of morality, period, are preserving a tenet of moral realism, unnecessarily burdening evolutionary views.


would serve this purpose only if they seem like they are depicting a realm of objective moral facts, at least in the sense of providing practical considerations with inescapable and authoritative force” (Joyce, 2006, 131). In other words, the only way for natural selection to make us experience the authority of moral injunctions is by creating the illusion of objective moral facts. In this sense, Joyce is preserving a tenet of moral realism and burdening natural selection with tricking our minds into believing (falsely) in moral objectivity, because otherwise we would not be able to experience moral obligation.

Joyce’s defense of a cognitivist view of moral judgment appears to provide additional support for the existence of the projection mechanism. But in fact, it is perfectly compatible with denying the existence of projection as a biologically designed, or hard-wired, mechanism of the human mind. Our emotional or evaluative attitudes are truly a response to attributes instantiated in the world: not to attributes which we might call “moral rightness” or “moral wrongness”, but rather to the usual descriptive properties of whatever it is that we respond to evaluatively. We sometimes use the word “honest”, for example, to describe someone who does what she says she will do. But the categorical approval we express when we use the word “honest” to commend such a person, or that we could express by using more general words of commendation – like “right” or “good” – is an additional categorization beyond the bare descriptive fact of her keeping her word. This evaluative categorization is, in this case, based on responses that have a collective authority to them. They are not just our individual responses, but responses that have been constructed collectively to manage interpersonal relations where some manner of collective costs and benefits are at stake.
The reference to real properties, independent of our attitudes, that lends support to cognitivist views concerns only the descriptive facts. The evaluation, even if universal, or universal within some culture, is not objective in the same sense as the factual characteristics of the behavior we call “honest”. The evolutionary debunkers criticized in this paper claim that moral authority requires the support of a projection mechanism, which tricks us into believing in the existence of objective moral properties (good, bad) of actions (and/or characters, rules, etc.). This projection mechanism – and the fictional objectivity it creates – was, they say, favored by natural selection to give moral judgments their peculiar force and authority. Since the projection mechanism is a psychological mechanism, this view is genuinely naturalistic. But it still pays a subtle tribute to the philosophical tradition that introduced and kept alive the idea of objective values. It simply grants to this tradition that moral obligation should be explained by some form of moral objectivity, even if only a fictional one.

7.4  An Alternative Mechanistic Explanation

In order for the projection mechanism to qualify as a projection, it needs to work precisely in the direction opposite to the one suggested by the supposed phenomenology (Joyce, 2006, 130). By effect of the projection, we experience that our


response – the moral evaluation – follows the perception of real moral properties. But the hypothesis entails that the objective values are illusory, precisely a projection of subjective attitudes, which are mental events. This fact can be used to question the hypothesis of a projection mechanism. Since the direction of fit of evaluations is world-to-mind, we do not represent with evaluative attitudes how the world is (as with causation or colors), but rather how we want it to be, or how it ought to be. The mind’s evaluative response to the factual properties has no need to be projected onto the world. It can directly be used to guide our action. And while our evaluative responses are indeed triggered by objective properties, they are not triggered by prescriptive objective properties, as explained in the previous section. The prescriptive aspect of our response need not be projected; it can simply guide our action directly, while the objective properties to which we respond are not prescriptive but factual, and thus have no need of a projection mechanism, because they are actually out there. In sum, the projection mechanism is superfluous; and a plausible explanation of its postulation is the subtle tribute, mentioned above, that philosophers keep paying to an ancient and revered tradition of moral realism.

How can we explain the authority of moral injunctions without postulating real or illusory objectivity? My explanation will not be detailed or elaborate. Let us start by pointing out that, as expected in biological organisms, we have a healthy self-regarding drive to satisfy our desires and needs (though surely subject to distortions, as when the craving for sugar causes health problems in current environments with abundantly available, industrially produced sugary foods). Most of us feel no need or temptation to back up the authority of this drive through some philosophical conception of our own objective value. We just feel its normative force and accept it as a fact.
And if we reflect on it, we see it as reasonable to have been designed this way. People who say that this natural normativity intrinsic to self-interest is no normativity at all probably believe that there can be no normativity without “objective backing”. But this is to repeat the philosophical mistake I have been denouncing here; and with respect to self-interest, the mistake makes even less sense. On the view defended here, this natural normativity is all there is to normativity. This simple observation can motivate sympathy towards the idea that the authority of moral injunctions could be yet another case where we have little need for an objective backing.

A similar observation can be made in relation to our capacity for empathy. Empathy has both a cognitive and an action-guiding, motivational aspect. Cognitively, we understand through empathy the mental states of others. Motivationally, empathy produces a concern for their well-being. We can express this by saying that we perceive, through empathy, the life of others as comparable in value to our own life. But even if we can and do express empathy-based care in terms of value, the thought of objective value plays no role here. The effect is purely motivational and directly affects our behavior towards others. It is only in special existential moods that we ask questions about what could ground concern for anything or anyone (including ourselves, for that matter). And from what I have said here, we have reason to doubt that these moods reflect a healthy mental state.

But there is a further question that we need to answer in order to develop a mechanistic view; and the previous observations about self-regard and empathy


predispose us to take a particular stance towards it. Is moral authority a sui generis drive, originally unconnected to the rest of our psychology, or does it arise from the interaction of pre-existing capacities? It seems more likely that self-regard and empathy play some role in the emergence of moral authority. Besides, a problem with the sui generis account is that it makes it difficult to connect the force of moral authority to a specific content, while the evolutionary account connects morality to prosocial behavior and cooperation. It would be preferable if moral authority arose from a mechanism that makes the connection to cooperation intelligible. I shall then tell a short story about the interaction between three psychological capacities that could illuminate the emergence of moral authority connected to cooperation: a healthy self-regarding drive, a concern for others, and the ability to share mental states with others and have common knowledge about what is thus shared.

It seems uncontroversial that a drive towards self-preservation has a biological source. It need not be represented in abstract form, but it will be implicit in most of the adaptive instincts and impulses in any organism. When we consider organisms with a complex psychology enabling them to learn and flexibly respond to contingent challenges, self-interest can be represented as a rule from which these psychologically complex organisms, like humans, reason to the appropriate means of achieving whatever promotes their best interests. But if they were purely self-interested, they would never come to establish Humean conventions supporting justice, like property rights and compliance with contracts and promises. Although some authors read Hume as if self-interest were a sufficient ground for such conventions (Harman, 1977, 103f), this view seems overly optimistic.
It forgets the role that coercion and deception play in the social interactions of purely self-interested minds, and it forgets what Hume himself said of the incarnation of such a mind in the “sensible knave” (Hume, 1902, Sect. 9, §232–233). Self-interest will guard an agent against exploitation by others, but it will not guard these others against his exploitation, at least not always. And therefore, if the agreement to bind oneself to a mutually beneficial pattern of behavior is held in place by self-interest alone, it will only last as long as purely self-interested individuals are unable to coerce or deceive others in order to exploit them as resources, as argued by Glaucon in Plato’s Republic (Plato, 1991, 358e–359b). Purely selfish minds have no disposition to respect an agreement for its own sake. They can only be held in check by the prospect of punishment. But this prospect is contingent upon being caught red-handed breaking the agreement, and no society can effectively monitor the behavior of every member at all times. Hence the sensible knave’s insight: “That honesty is the best policy, may be a good general rule, but is liable to many exceptions: and he… conducts himself with most wisdom, who observes the general rule, and takes advantage of all the exceptions.” (Hume, 1902, Sect. 9, §232).

Establishing conventions of justice requires something more than self-interest. Hume called it “sympathy”, which comes very close to what social psychologists today call “empathy”. Empathy is also a basic and biologically shaped capacity, having its evolutionary origin in parental care in mammals and spreading in some species to non-related members of one’s group (Preston & De Waal, 2002). In virtue of empathy, one can escape the prison of an exclusive, selfish concern with one’s


own well-being and harbor a concern for the well-being of other individuals. This capacity can antagonize and counterbalance selfishness; but can it generate a sense of moral obligation? It does not seem so. A being with both self-regarding and empathic capacities could still conceivably lack a moral point of view. Its actions could eventually override selfishness, but that would not be the outcome of thinking in terms of what is right, what is wrong or what ought to be done. It would be the outcome of the relatively stronger pull of empathy in some particular cases. The mere opposition of both inclinations does not seem suitable to generate norms that would have authority over whichever inclination is stronger.

Can purely selfish individuals generate norms? As mentioned above, if they can flexibly adjust to changing contexts, they must generate an abstract rule of self-interest, for they need to reason from it in particular cases. The same would apply to flexibly empathic individuals. Adjustment of either selfish or altruistic dispositions to particular cases requires the ability to reason using a rule. But these are rules of individual behavior, directed to achieve whatever the strongest individual inclination mandates in a particular situation. Norms, as in “moral norms”, are different: they must simultaneously instantiate at least two characteristics: (1) they are public rules of universal applicability to all individuals in similar situations; and (2) they should provide, at least in appearance, fair satisfaction to the interests of all involved. Insofar as they are public, norms require the “we”, the collective subject of cooperation; for cooperation consists in collectively coordinated action to achieve shared goals. Goals are shared when there is common knowledge of their being shared and of the fact that everyone’s interest is equally considered (Rosas & Bermúdez, 2018).
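The stabilizing role that group punishment plays in the account developed in the next paragraphs can be illustrated with a toy public goods game in the spirit of the experimental literature cited earlier (Fehr & Fischbacher, 2004). All numbers (endowment, multiplier, cost and size of the fine) and the rule that contributors fine free-riders are illustrative assumptions, not results reported in this chapter:

```python
# Toy public goods game with peer punishment. All parameter values are
# illustrative assumptions: 4 players, endowment 10, contributions doubled
# and shared equally; punishing costs 1 and imposes a fine of 3.
ENDOWMENT, MULTIPLIER, N = 10, 2.0, 4
PUNISH_COST, PUNISH_FINE = 1, 3

def payoffs(contributions, punish=False):
    """Return each player's payoff; optionally contributors fine free-riders."""
    share = sum(contributions) * MULTIPLIER / N
    pay = [ENDOWMENT - c + share for c in contributions]
    if punish:
        contributors = [i for i, c in enumerate(contributions) if c == ENDOWMENT]
        free_riders = [i for i, c in enumerate(contributions) if c < ENDOWMENT]
        for i in free_riders:      # each contributor fines each free-rider
            pay[i] -= PUNISH_FINE * len(contributors)
        for i in contributors:     # punishing is costly for the punisher
            pay[i] -= PUNISH_COST * len(free_riders)
    return pay

print(payoffs([10, 10, 10, 0]))               # [15.0, 15.0, 15.0, 25.0]
print(payoffs([10, 10, 10, 0], punish=True))  # [14.0, 14.0, 14.0, 16.0]
print(payoffs([10, 10, 10, 10]))              # [20.0, 20.0, 20.0, 20.0]
```

Without punishment the lone free-rider out-earns everyone (25 vs. 15); once contributors can fine him, defection (16) pays less than the 20 each player earns under full cooperation, so in this toy model the public norm becomes self-enforcing.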
Here is where the “we” of collective intentionality proves of fundamental importance in the emergence of the authority of moral norms. The “we” creates collective expectations known to all, and known to all that they are known to all, etc. (common knowledge). Moreover, these expectations are a serious matter in the group, because compliance with them is required to guarantee a fair distribution of the costs and benefits of cooperation. The violation of these expectations by one or some members of the group would considerably affect the interests of other members (and implicitly their reproductive success, which suggests an evolutionary pressure to curtail such violations). In particular, since the negative impact of norm violations is known to all (common knowledge), the group creates punishment to deter transgressors. Group punishment, and the internalization of this punishment in feelings of guilt, are essential to create the distinctive sense of authority flowing from moral norms. Establishing a moral norm includes the thought that transgressors deserve public punishment. Moreover, saying that something is morally wrong is equivalent to publicly committing oneself not to do it, on penalty of punishment.

These connections, or similar ones, have been made by several moral philosophers. Strawson’s influential treatment of the interpersonal and the moral (generalized in society) reactive attitudes points to natural psychological phenomena that contain the experience of obligation (Strawson, 2008, p. 16). In his discussion, he was not concerned with the objectivity of moral predicates, nor was he trying to show that the experience of obligation would stand in the absence of objectivity. I


am claiming here that it would stand, because the obligations correlative to the demands present in the moral reactive attitudes come from our everyday relations to others in a public sphere of moral participation, where a belief in objective value need not even make a ghostly appearance. The same is true for recent elaborations of moral obligation from a psychological perspective (Tomasello, 2020).

It is striking to realize that these partly psychological and partly conceptual points do not fall outside the scope of awareness of evolutionary debunkers of morality, period (see Joyce, 2006, 69, 122). They are aware of these resources, but not of the fact that they could – and should – be used to explain obligation without the need for postulating illusory objectivity. I have tried to explain this striking oversight by calling attention to a philosophical tradition that seems to have left a deep scar on the philosophical profession. It is one thing to see this tradition active in contemporary moral realists, but it is quite another, and in my view quite a surprising irony, to see it active in evolutionary ethicists who explicitly reject moral realism. My explanation could be wrong. But whatever the correct explanation, I hope this last section has given plausibility to the view that moral obligation need not be grounded in moral objectivity, illusory or not, and consequently, that natural selection should not be burdened with designing minds that fall for such an illusion. In sum, if we believe that evolution shaped our minds, we should stop promoting the idea that evolution contains a threat to morality. Exactly the contrary is the case.

References

Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211, 1390–1396.
Axelrod, R. (1984). The evolution of cooperation. New York: Basic Books.
Beebe, J. R., & Sackris, D. (2016). Moral objectivism across the lifespan. Philosophical Psychology, 29(6), 912–929. https://doi.org/10.1080/09515089.2016.1174843
Blackburn, S. (1998). Ruling passions. Oxford, UK: Oxford University Press.
Blair, R. J. R. (1995). A cognitive developmental approach to morality: Investigating the psychopath. Cognition, 57, 1–29.
Boyd, R., & Richerson, P. J. (1992). Punishment allows the evolution of cooperation (or anything else) in sizable groups. Ethology and Sociobiology, 13, 171–195.
Darwin, Ch. (1871/1981). The descent of man, and selection in relation to sex. Princeton, NJ: Princeton University Press.
Fehr, E., & Fischbacher, U. (2004). Social norms and human cooperation. Trends in Cognitive Sciences, 8(4), 185–190.
Frank, R. (1988). Passions within reason. New York: W. W. Norton.
Gauthier, D. (1986). Morals by agreement. Oxford, UK: Oxford University Press.
Goodwin, P. G., & Darley, J. M. (2008). The psychology of meta-ethics: Exploring objectivism. Cognition, 106(3), 1339–1366.
Hampton, J. (1986). Hobbes and the social contract tradition. Cambridge, UK: Cambridge University Press.
Harman, G. (1977). The nature of morality: An introduction to ethics. New York: Oxford University Press.
Hume, D. (1902 [1777]). Enquiries concerning the principles of morals. In Selby-Bigge (Ed.), Essays and treatises on several subjects. Oxford, UK: Clarendon Press.
Hume, D. (1978 [1739]). A treatise of human nature (Selby-Bigge, Ed.). Oxford, UK: Clarendon Press.
Joyce, R. (2006). The evolution of morality. Cambridge, MA: MIT Press.
Kahane, G. (2011). Evolutionary debunking arguments. Noûs, 45(1), 103–125. https://doi.org/10.1111/j.1468-0068.2010.00770.x
Kant, I. (2002 [1788]). Critique of practical reason. Indianapolis, IN: Hackett Publishing.
Kavka, G. (1986). Hobbesian moral and political theory. Princeton, NJ: Princeton University Press.
Lillehammer, H. (2003). Debunking morality: Evolutionary naturalism and moral error theory. Biology and Philosophy, 18, 567–581.
Mackie, J. L. (1977). Ethics: Inventing right and wrong. London, UK: Penguin.
Nichols, S., & Folds-Bennett, T. (2003). Are children moral objectivists? Children’s judgments about moral and response-dependent properties. Cognition, 90(2), B23–B32.
Norenzayan, A., & Shariff, A. (2008). The origin and evolution of religious prosociality. Science, 322, 58–62.
Plato. (1991). The Republic (Allan Bloom, Trans.). New York: Basic Books.
Pölzler, T., & Wright, J. C. (2020). Anti-realist pluralism: A new approach to folk metaethics. Review of Philosophy and Psychology, 11, 53–82.
Pölzler, T. (2018). How to measure moral realism. Review of Philosophy and Psychology, 9(3), 647–670.
Preston, S., & De Waal, F. (2002). Empathy: Its ultimate and proximate bases. Behavioral and Brain Sciences, 25, 1–72.
Rosas, A. (2010). Evolutionary game theory meets social science: Is there a unifying rule for human cooperation? Journal of Theoretical Biology, 246(2), 450–456.
Rosas, A. (2011). Disentangling social preferences from group selection. Biological Theory, 6(2), 169–175. https://doi.org/10.1007/s13752-012-0013-y
Rosas, A., & Bermúdez, J. P. (2018). Viewing others as equals: The non-cognitive roots of shared intentionality. Review of Philosophy and Psychology, 9(3), 485–502.
Ruse, M. (1986). Taking Darwin seriously. Oxford, UK: Basil Blackwell.
Ruse, M., & Wilson, E. O. (1986). Moral philosophy as applied science. Philosophy, 61(236), 173–192. https://doi.org/10.1017/S0031819100021057
Strawson, P. F. (2008 [1974]). Freedom and resentment and other essays. New York: Routledge.
Street, S. (2006). A Darwinian dilemma for realist theories of value. Philosophical Studies, 127, 109–166.
Tomasello, M. (2020). The moral psychology of obligation. Behavioral and Brain Sciences, 43, E56. https://doi.org/10.1017/S0140525X19001742
Trivers, R. (1971). The evolution of reciprocal altruism. The Quarterly Review of Biology, 46, 35–57.

Part III

The Cultural Evolution of Morality

Chapter 8

The Cultural Evolution of Extended Benevolence

Andrés Luco

Abstract In The Descent of Man (1879), Charles Darwin proposed a speculative evolutionary explanation of extended benevolence—a human sympathetic capacity that extends to all nations, races, and even to all sentient beings. This essay draws on twenty-first century social science to show that Darwin’s explanation is correct in its broad outlines. Extended benevolence is manifested in institutions such as legal human rights and democracy, in behaviors such as social movements for human rights and the protection of nonhuman animals, and in normative attitudes such as emancipative values and a commitment to promote the rights or welfare of animals. These phenomena can be substantially explained by cultural evolutionary forces that trace back to three components of what Darwin called the human “moral sense”: (1) sympathy, (2) our disposition to follow community rules or norms, and (3) our capacity to make normative judgments. Extended benevolence likely emerged with “workarounds,” including political ideologies, that established an inclusive sympathetic concern for sentient life. It likely became as widespread as it is now due to recently arisen socio-economic conditions that have created more opportunities for people to have contact with and take the perspective of a broader cross-section of humanity, as well as other species.

Keywords  Contact · Cultural evolution · Cultural variants · Democracy · Emancipative values · Extended benevolence · Human rights · Moral sense · Norms · Norm psychology · “Objective” morality · Perspective-taking · Second-personal morality · Sympathy · Transmission bias · Workarounds

A. Luco (*) School of Humanities, Nanyang Technological University, Singapore, Singapore e-mail: [email protected] © Springer Nature Switzerland AG 2021 J. De Smedt, H. De Cruz (eds.), Empirically Engaged Evolutionary Ethics, Synthese Library 437, https://doi.org/10.1007/978-3-030-68802-8_8


8.1  Extended Benevolence in Darwin’s Descent of Man

In The Descent of Man (1879),1 Charles Darwin theorizes the history of “the moral sense” and anticipates its future. He writes:

As man advances in civilization, and small tribes are united into larger communities, the simplest reason would tell each individual that he ought to extend his social instincts and sympathies to all the members of the same nation, though personally unknown to him. This point being once reached, there is only an artificial barrier to prevent his sympathies extending to the men of all nations and races. (Darwin, 1879: 147)

Darwin ups the ante a few lines down the page. He suggests that human sympathies can and will extend beyond our own species:

Sympathy beyond the confines of man, that is, humanity to the lower animals seems to be one of the latest moral acquisitions…This virtue, one of the noblest with which man is endowed, seems to arise incidentally from our sympathies becoming more tender and more widely diffused, until they are extended to all sentient beings. (Ibid.)

Darwin’s comments were mostly speculative. Yet as I shall argue, his account of the evolution of the “moral sense” has turned out to be remarkably prescient. My effort here will be to update Darwin’s outline of the emergence of a human sympathetic capacity that extends to all nations, races, and even to all sentient beings. I shall call this form of sympathy extended benevolence. In the following, I cite cultural evolutionary mechanisms to explain the emergence and spread of extended benevolence. I will discuss ways that extended benevolence could arise through forces of cultural evolution known as adapted transmission biases.

8.2  Darwin on the "Moral Sense"

In trying to answer the question of how extended benevolence might have evolved, Darwin offers a worthwhile starting point. He famously suggests in the Descent that the human "moral sense," as he called it, evolved via natural selection, emotionality, habit, community rules of conduct, instruction, and reason. Let us review the details of Darwin's evolutionary account of the moral sense. In so doing, we will see how Darwin grasped many crucial insights that allow a cultural evolutionary theory to meet the challenge of explaining extended benevolence.

Darwin describes the "moral sense" in the opening paragraph of Chap. 4 of the Descent:

…the moral sense or conscience…is summed up in that short but imperious word ought, so full of high significance. It is the most noble of all the attributes of man, leading him without a moment's hesitation to risk his life for that of a fellow-creature; or after due deliberation, impelled simply by the deep feeling of right or duty, to sacrifice it in some great cause. (Darwin, 1879: 120)

1  Full title: The Descent of Man, and Selection in Relation to Sex.


8  The Cultural Evolution of Extended Benevolence


In this passage, Darwin uses the term "moral sense" interchangeably with "conscience." He links the moral sense to normative attitudes expressed (in English) through the word "ought," and he cites the moral sense as a motivation for altruistic behavior.

Darwin outlines four stages in the evolution of the moral sense, with natural selection most prominently driving the first stage. In the first stage, an animal acquires "social instincts" that cause it "to take pleasure in the society of its fellows, to feel a certain amount of sympathy with them, and to perform various services for them" (Darwin, 1879: 121). In Darwin's view, sympathy is a chief motivation behind the altruistic "services" that animals perform for others. Various animals, including birds, dogs, monkeys, and humans, feel love and sympathy for others. In particular, they feel sympathetic pain when in the presence of the pain of another individual. For animals in the first stage, sympathy does not extend to all members of the same species, but only to others in the same "association" (Ibid.).

Darwin tries to explain the evolution of the social instincts through an early appeal to group selection. Sympathy, he suggests, likely proliferated due to natural selection between different "communities" of the same species, since "those communities, which included the greatest number of the most sympathetic members, would flourish best, and rear the greatest number of offspring" (Darwin, 1879: 130).

In the second stage of the evolution of the moral sense, some animals gain the ability to remember their past actions (Darwin, 1879: 121). With this ability, animals come to remember past moments in which they experienced a conflict between their social instincts and their "instincts of self-preservation," such as instincts to pursue food and sex (Darwin, 1879: 136).
Darwin mentions that human beings are unique in feeling regret and shame brought on by nagging memories of past instances when one acted against one's social instincts (Darwin, 1879: 135–136, 138). Shame is a painful feeling prompted by the experience and memory of others' disapproval of one's own behavior. Such disapproval tends to be elicited by behavior that serves one's own interests at the expense of others (Darwin, 1879: 136, 138). As a result, human beings have some inclination not to repeat past actions in which they satisfied their self-preserving instincts rather than their social instincts. This inclination, Darwin adds, is conscience: "for conscience looks backwards, and serves as a guide for the future" (Darwin, 1879: 138).

Darwin explains the second stage with reference to at least two mechanisms—habit and natural selection. He maintains that conscience can be strengthened by habit into a capacity for "self-command" (Darwin, 1879: 139–140). An individual possessing self-command would be accustomed to acting in accordance with his or her social instincts "instantly," and "without struggle" (Darwin, 1879: 139). Apart from being acquired through habit, Darwin emphasizes that self-command may also be inherited (Darwin, 1879: 140). Darwin's rationale for this claim appears to be that conscience depends on shame, and shame in turn depends on sympathy. Sympathy, we saw, is theorized by Darwin to be a product of natural selection on groups (Darwin, 1879: 136, 138).

The third stage in Darwin's account of the evolution of the moral sense follows the advent of language. It occurs when "the common opinion of how each member ought to act for the public good, would naturally become in a paramount degree the guide to action" (Darwin, 1879: 122). Darwin observes that the "imperious word 'ought'" implies an awareness of a rule of conduct, the violation of which will be met with social disapproval (Darwin, 1879: 140). To avoid the shame elicited by this disapproval, humans will tend to comply with rules of conduct formulated and enforced by common opinion. The common opinion is expressed through language—at first in speech, later in writing (Darwin, 1879: 146). Accordingly, people can learn about rules of conduct through instruction: they can listen to or read the words of other people who explicitly articulate the rules. Further, rules of conduct can be learned by example: people can observe which specific behaviors performed by someone elicit approval and disapproval among others in the community (Darwin, 1879: 146, 149, 157).

In the fourth and last stage of Darwin's evolutionary history of the moral sense, "reason" brings about extended benevolence (Darwin, 1879: 141–143). Darwin proposes that as "small tribes are united into larger communities, the simplest reason would tell each individual that he ought to extend his instincts and sympathies" to unfamiliar strangers, and when this point is reached, there is only "an artificial barrier" to prevent human sympathies from reaching "beyond the confines of man…until they are extended to all sentient beings" (Darwin, 1879: 147, emphasis added). Darwin calls "[s]ympathy beyond the confines of man" a virtue; indeed, he says it is "one of the noblest with which man is endowed" (Ibid., cf. Darwin, 1879: 151). Additionally, as soon as extended benevolence "is honoured and practiced by some few men, it spreads through instruction and example to the young, and eventually becomes incorporated in public opinion" (Darwin, 1879: 147). Darwin's account of the evolution of the moral sense ends with the emergence of extended benevolence.
As we advance in "intellectual power," as we become more adept at tracing the remote consequences of our actions, as we sympathize more with others, and as we learn more "from habit, following on beneficial experience, instruction and example," our sympathies ultimately become "more tender and widely diffused, extending to men of all races, to the imbecile, maimed, and other useless members of society, and finally to the lower animals" (Darwin, 1879: 149).

In this speculative history, Darwin characterizes the moral sense as an assemblage of components. It consists of (1) a capacity to make normative judgments (i.e., ought judgments); (2) a set of "social instincts," particularly sympathy, which can motivate altruistic behavior; (3) a disposition to obey rules prescribed by community opinion; (4) a rational capacity to anticipate how the consequences of actions and practices affect the welfare of one's social group; and (5) a rational capacity to extend one's sympathies to unfamiliar others in spite of "artificial" or arbitrary differences one has with them. As I will discuss in Sects. 8.4 and 8.5 below, Darwin's analysis of the components of the moral sense has fared remarkably well in the light of contemporary research.


8.3  Extended Benevolence: Behaviors, Institutions, and Attitudes

Before we consider how extended benevolence evolved, we could do with more clarity on what it is. Extendedly benevolent behaviors and institutions treat the good of all human beings, or even all sentient beings, as having some degree of moral significance. This section highlights several human behaviors and institutions which may be described as extendedly benevolent.

Political regimes that practice equal respect for the legal human rights of individuals are one type of extendedly benevolent institution. Legal human rights are legal rights protected for all human beings within a jurisdiction. Legal human rights are equally respected when a governmental body protects them to the same degree for all human beings in the relevant jurisdiction. Equal respect for legal human rights is an instance of extended benevolence, since a state that practices such equal respect treats all human rights-bearers as having equal moral standing.

Particularly since the twentieth century, there has been substantial progress in protections of human rights. Political scientist Christopher Fariss has shown that since the early 1980s, human rights to physical integrity have been increasingly respected by governments throughout the world (Fariss, 2014). Physical integrity rights include human rights not to be subjected to political kidnapping, arbitrary imprisonment, battery, torture, execution, politicide, and genocide.

Another extendedly benevolent institution is democracy. Democracies exhibit extended benevolence to the degree that every adult citizen is able to influence political outcomes. Democracies distribute political power more equally among adult citizens than other systems of government do.
More than other political regimes, democracies ensure free and fair elections, the freedom to organize political movements, the freedom to express political opinions, an independent and impartial judiciary, and most of all, the power to vote. In Freedom in the World, an annual report published by Freedom House, countries are ranked according to these characteristics and other measures of political equality. The report shows that democracies consistently outperform other political systems on these measures (see Freedom House, 2020).2

At the turn of the twentieth century, there were more autocracies than democracies in the world. By the turn of the twenty-first century, democracies outnumbered autocracies (Roser, 2020). Political scientist Daniel Treisman examined a composite of four authoritative measures3 used to classify a country's political system for every year between 1800 and 2016 (Treisman, 2018). The proportion of the world's democracies underwent volatile growth through this period, with both steep rises and precipitous falls in the twentieth century. Despite those ups and downs, Treisman observed a clear overall pattern: a rising tide of democracy in which the global proportion of democracies reached "at or near an all-time high" of around 59% by 2016.

Scholars are debating whether democracy is on the verge of decline.4 Some data are indeed troubling. The 2020 report of the Varieties of Democracy (V-Dem) Project found that, for the first time since 2001, democracies no longer made up a majority of countries (Lührmann et al., 2020). In 2019, 48% of countries in the world were democracies, and democracies were home to only 46% of the world's population. I am not in a position to speak to whether this is only a short-lived dip or a sustained backsliding of democracy.5 Instead, my concern will be to explain, from a cultural evolutionary perspective, how the form of extended benevolence manifested by democracy came to be as widespread as it is now.

Extended benevolence can also be observed in the treatment of non-human animals. Since the nineteenth century, there has been a steady rise of laws prohibiting the exploitation of animals in dozens of countries (Waldau, 2011: 106–108). For instance, in 2005 Australia banned any experiment on nonhuman apes that is not in the interest of the animal itself. In 2000, the High Court of Kerala, India, ruled that under Article 21 of the Indian Constitution, circus animals were "beings entitled to a dignified existence" (Waldau, 2011: 108). In 2015, an Argentinian court declared an orangutan named Sandra a "nonhuman being" entitled to basic rights to life, freedom, and protection from harm (Giménez, 2015). Even though these events don't quite amount to treating animals and humans equally, they nonetheless display a form of extended benevolence that treats animals as beings worthy of protection and concern.

2  To be sure, existing democracies do not institute perfect political equality. All too often, the wealthy have disproportionate power to influence politicians' decisions. Many democracies disenfranchise adult citizens convicted of a criminal offense. And, democracies typically prohibit minors and non-citizen adults from voting. The point is only that political equality among adult citizens is achieved to a greater extent in democracies than in nondemocracies.

3  Polity, Freedom House, the Boix-Miller-Rosato code, and V-DEM.
Many people possess normative attitudes that may be described as extendedly benevolent. There is, moreover, compelling evidence that these attitudes play a causal role in bringing about extendedly benevolent behaviors and institutions. Data from the United States suggest that people in the animal rights movement were driven by normative commitments to achieve legal protections for animals against suffering, death, and exploitation at human hands (Waldau, 2011). The sociologist James M. Jasper found that "moral shocks" play a key role in recruiting people to join animal rights protests (Jasper, 1997). Moral shocks are events which raise "such a sense of outrage in people that they become inclined toward political action" (Jasper, 1997: 106). Jasper and his team collected questionnaires from over 300 protestors who attended two animal rights demonstrations in 1988. When asked to rate the importance of a list of factors that drew them into the animal rights movement, 72% of the respondents rated "Things you have read" as very important (Jasper, 1997: 175–176). Jasper observes that "[p]eople were recruited by an animal rights literature filled with powerful images designed to shock," such as cats with electrodes planted in their heads and white rabbits with pus-filled eyes from cosmetics testing (Ibid.). For instance, Jasper's team interviewed an animal rights activist who testified to being deeply affected by the texts and images documenting experiments done on animals. "[T]hat's gotta stop," he vowed (Jasper, 1997: 176).

Other studies suggest that vegetarianism and veganism can be substantially attributed to people's normative attitudes of concern for the rights or welfare of animals. In a 2002 telephone survey of 400 vegetarians in the U.S., 10% cited animal rights as their reason for being vegetarian. According to a 2012 survey of 145 vegetarians (aged 18–25) in the U.S., 67% of the respondents cited ethics as their reason (Cooney, 2014: loc. 1233).

4  For a discussion of the debate surrounding the "new pessimism" about democracy, see Welzel, Inglehart, Bernhagen, and Haerpfer (2019).

5  The 2020 V-Dem Report also notes that the recent decline in democracy has mobilized resistance: pro-democracy protests reached an all-time high in 2019.

In addition, extendedly benevolent normative attitudes have been a powerful contributing cause of the institutionalization of democracy and human rights. This much has been shown by sociologists Christian Welzel, Ronald Inglehart, and their collaborators (see Inglehart, 2018; Inglehart & Norris, 2003; Inglehart & Welzel, 2005; Welzel, 2013). Welzel, in particular, found strong correlations between a cluster of normative attitudes that he calls emancipative values, on the one hand, and human rights and democracy, on the other. Generally, a person who accepts emancipative values will tend to emphasize the importance of freedom of choice and equality of opportunity for all persons (Welzel, 2013: loc. 4818–4831). To measure the acceptance of emancipative values in a given country's population, Welzel relies on the World Values Survey (WVS).6 Welzel uses the following items on the WVS as indicators of whether the respondents hold emancipative values (Welzel, 2013: loc. 1989):

• WVS respondents are taken to value freedom of choice if they agree that independence and imagination, but not obedience, are desirable qualities in children, or if they express tolerance of abortion, divorce, and homosexuality.
• Respondents are taken to value equality of opportunity if they express disagreement with the idea that education is more important for a boy than for a girl; or they disagree that men should have priority over women to get a job when jobs are scarce; or they disagree that men make better political leaders than women.

• WVS respondents' normative attitudes are viewed as valuing equality of opportunity if they assign a high priority to protecting freedom of speech, or to giving people more say in important government decisions, or to giving people more say about how things are done at their jobs and in their communities.

Welzel argues that changes in the popular acceptance of emancipative values are powerful causes of legal human rights and democracy. To support this thesis, Welzel cites strong and statistically significant correlations between his measure of emancipative values, on the one hand, and measures of institutional protections of human rights and democracy, on the other. To measure human rights and democracy, Welzel relies on a citizen rights index and a women's rights index (Welzel, 2013: Appendix 8, 9). He examines approximately 50 countries which were surveyed at least twice by the World Values Survey over a period of at least 10 years. Ultimately, Welzel discovers a strong, positive, and significant association between (1) changes in the proportion of people in a country who accept emancipative values over a time period of at least a decade, and (2) the country's scores on citizen rights and women's rights measured at the end of that decade-long period (Welzel, 2013: loc. 7197–7395).

Welzel's work reveals that extendedly benevolent institutions such as legal human rights and democracy owe their existence in large part to certain normative attitudes—namely, emancipative values. Emancipative values themselves are properly regarded as extendedly benevolent attitudes, given their emphasis on equality of opportunity and freedom of choice for all persons (Welzel, 2013: loc. 5020, 5227).

Summing up, behaviors and institutions that can be described as extendedly benevolent are widespread. The cases in point were legal human rights, democracy, and the protection of animal rights and welfare. These behaviors and institutions can be substantially explained by extendedly benevolent normative attitudes, such as emancipative values or a belief in the moral standing of animals. Altogether, these phenomena are emblematic of the capacity for human social instincts and sympathies to extend, as Darwin had predicted, to "all nations and races," and even "beyond the confines of man."7

6  The goal of the WVS is to collect data on the beliefs and attitudes of people around the world. Since it was launched in 1981, the WVS has polled 150,000 people in 100 countries containing 90 percent of the world's population (Welzel, 2013: 58; Inglehart, 2018: 5). It collects statistically representative samples of all residents living in every country surveyed.
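The aggregation idea behind an index like Welzel's can be sketched in a few lines of code. This is a toy illustration of my own: the function names, the grouping of items, and all response values are invented, and Welzel's actual scoring procedure is more involved than a simple average.

```python
# Simplified sketch of turning survey items into an "emancipative
# values" style index. Illustrative only: Welzel's real procedure
# aggregates several multi-item sub-indices and differs in detail.

def subindex(item_scores):
    """Average a group of item scores, each normalized to [0, 1]."""
    return sum(item_scores) / len(item_scores)

def emancipative_index(choice, equality, voice):
    """Average the sub-indices into a single score in [0, 1]."""
    return (subindex(choice) + subindex(equality) + subindex(voice)) / 3

# Hypothetical respondent (all values invented for illustration):
score = emancipative_index(
    choice=[1.0, 1.0, 0.5],    # e.g., tolerance-of-choice items
    equality=[1.0, 1.0, 1.0],  # disagrees with gender-inequality items
    voice=[0.5, 0.0, 1.0],     # priority given to free speech / say
)
print(round(score, 3))  # → 0.778
```

The substantive point is simply that heterogeneous survey items, once normalized, can be combined into a single respondent-level (and then country-level) score whose change over time can be correlated with rights and democracy measures.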

8.4  Extended Benevolence Evolving

At this point I begin to advance a cultural evolutionary explanation for extended benevolence. In their book Not by Genes Alone (2005), evolutionary anthropologists Robert Boyd and Peter J. Richerson set out the nuts and bolts of their influential theory of cultural evolution. Culture, as they define it, is "information capable of affecting individuals' behavior that they acquire from other members of their species through teaching, imitation, and other forms of social transmission" (Richerson & Boyd, 2005: 5). Cultural variants are elements of cultural information; they include ideas, knowledge, beliefs, values, skills, and attitudes (Richerson & Boyd, 2005: 5–6, 63). Different populations of people exhibit differences in language, custom, moral belief systems, technologies, and art because they adopt different cultural variants (Richerson & Boyd, 2005: 6).

Cultural evolution, as Boyd and Richerson define it, is change in the relative frequencies of different cultural variants within a population over time (Richerson & Boyd, 2005: 59–60). Boyd and Richerson identify several causes, or forces, of cultural evolution (Richerson & Boyd, 2005: 68–69). Among those cultural evolutionary forces are transmission biases, which are features of human psychology that make people more likely to adopt some cultural variants than others (Richerson & Boyd, 2005: 68). Boyd and Richerson distinguish between three transmission biases (Richerson & Boyd, 2005: 69). First, there is content-based bias, which operates when individuals are more likely to learn or remember some cultural variants than others due to their content (Ibid.). Boyd and Richerson add that "[c]ontent-based bias can result from calculation of costs and benefits associated with alternative variants" (Richerson & Boyd, 2005: 69). Second, there is frequency-based bias, in which individuals choose to adopt a cultural variant based on how frequent it is in the surrounding community (Ibid.). And third, there is model-based bias, in which individuals choose to adopt a cultural variant as a result of observing the attributes of other people who have adopted the variant. A model-based bias known as prestige bias may motivate an individual to adopt a cultural variant merely because the most prestigious, high-status individuals in the relevant society have adopted it. Alternatively, a model-based bias called success bias may guide an individual to adopt a cultural variant for the reason that others who've adopted it are relatively successful in some way—i.e., are more wealthy, healthy, happy, etc.

Boyd and Richerson's framework can be used to explain the cultural evolution of extendedly benevolent institutions. Human rights institutions and animal welfare protections, in particular, can be regarded as assemblages of cultural variants that have been increasingly adopted in many societies. It is uncontroversial that these social phenomena have spread through social transmission. Moreover, there are quantitative measures of both human rights and animal protections.

7  I do not claim that extended benevolence will remain as widespread as it is forever. The recent rise of nationalist-populism in the West might augur the demise of extended benevolence. It is too soon to tell. My aim is merely to establish that the existence of extended benevolence can be explained from a Darwinian—i.e., cultural evolutionary—perspective.
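A frequency-based bias can be made concrete with a toy model. The sketch below is my own illustration rather than Boyd and Richerson's: it iterates a standard conformist-transmission recursion, p′ = p + D·p(1 − p)(2p − 1), where p is the frequency of a cultural variant in the population and D is the strength of the conformist bias.

```python
# Toy model of frequency-based (conformist) transmission.
# p is the frequency of cultural variant A in the population;
# D (0 < D <= 1) is the strength of the conformist bias.

def conformist_step(p: float, D: float = 0.1) -> float:
    """One 'generation' of conformist transmission:
    p' = p + D * p * (1 - p) * (2p - 1)."""
    return p + D * p * (1 - p) * (2 * p - 1)

def run(p0: float, generations: int = 200, D: float = 0.1) -> float:
    """Iterate the recursion and return the final frequency."""
    p = p0
    for _ in range(generations):
        p = conformist_step(p, D)
    return p

# A variant that starts in the majority is driven toward fixation,
# while a minority variant is driven out.
print(run(0.6))   # climbs toward 1.0
print(run(0.4))   # falls toward 0.0
```

The toy model also hints at why frequency-based bias alone cannot explain how an initially rare variant like extended benevolence spread: conformism works against minority variants, so content-based or model-based biases (or changed social conditions) are needed to push a novel variant past the halfway threshold.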
Philosopher Jonathan Birch stresses that cultural variants need to be measured quantitatively for researchers to do the essential work of making mathematical models of cultural evolution (Birch, 2017: 196). Fariss developed a Human Rights Protection Score on the basis of several other indices (Fariss, 2014). Also, the Animal Protection Index, published by the organization World Animal Protection, scores countries according to their demonstrated commitment to promote animal welfare through policy and legislation.

8.4.1  Transmission Biases and Human Rights

Social scientists have pinpointed content-based and frequency-based transmission biases that are causing human rights to proliferate worldwide. Consider, for instance, the work of political scientist Brian Greenhill (2015). Greenhill finds that "over time, states adopt similar human rights practices to those of the other states with whom they share IGO memberships" (Greenhill, 2015: 14). IGOs are inter-governmental organizations whose members are representatives of sovereign countries (Greenhill, 2015: 5–6, 60). Well-known IGOs include the United Nations and the European Union, while others include the Gulf Cooperation Council (GCC), the West African Health Organization, and the International Coral Reef Initiative.


Greenhill relies on the Correlates of War 2 International Governmental Organizations Data Set, which provides data on 495 IGOs between the years 1815 and 2005 (Greenhill, 2015: 60). Greenhill's analysis demonstrates that among IGOs whose cultures strongly expect their member states to respect human rights norms, the human rights records of member states tend to improve within the first few years of joining the IGO (Greenhill, 2015: ch. 3). This occurs because diplomats or policymakers from member countries who operate in the IGOs are influenced by their exposure to the human rights cultures of the organizations. They then go on to influence policymaking in their home countries (Greenhill, 2015: 46–51).

By Greenhill's account, acculturation is one mechanism through which co-members of IGOs become more similar in their human rights adherence. In an acculturation process, "an actor changes his or her beliefs and behaviors in order to conform to the norms of a new social environment" (Greenhill, 2015: 44–45). Acculturation is different from material inducement, in which an agent changes behavior to comply with someone else's demands so as to reap material rewards or avoid material sanctions. It's also distinct from persuasion, in which an agent undergoes a change in beliefs after thoughtfully deliberating over information conveyed by others (Greenhill, 2015: 39, 43). Acculturation is driven by two of the transmission biases emphasized by Boyd and Richerson: frequency-based bias and model-based bias. Meanwhile, persuasion would qualify as a content-based bias.

Greenhill shows that a frequency-based bias is at work when countries adopt the same human rights practices as their IGO partners. He measures the human rights performance of countries by means of the Physical Integrity Rights (PIR) index.
The PIR gives states an annual score which represents the frequency of human rights violations—namely, torture, political imprisonment, extrajudicial killing, and disappearances—that take place within each state in a given year (Greenhill, 2015: 62). Greenhill then tests for an association between states’ PIR scores and their IGO context. IGO context is another measure of Greenhill’s design which is roughly a weighted average of the PIR scores of all the IGO partners of a focal state (cf. Greenhill, 2015: 64–70). For the 154 countries he surveys (covering the period from 1982 to 2006), Greenhill finds a positive, statistically significant, and relatively robust correlation between states’ PIR scores and their IGO context from the previous year. When controlling for other factors such as GDP per capita, democracy, and trade dependence, if a focal state’s IGO partners had lower PIR scores in a given year, the state tended to have lower PIR scores the following year, and if a focal state’s IGO partners had higher PIR scores in a given year, the state tended to have higher PIR scores the following year (Greenhill, 2015: 72–76). Greenhill’s results are indicative of a frequency-based bias which causes a country to reduce the number of its human rights violations after participating in an IGO network composed largely of partner countries that have low numbers of violations. In the work of another political scientist, we see how content-based transmission biases help to explain why countries comply with international human rights treaties. In her book Mobilizing for Human Rights (2009), Beth A. Simmons argues that a country’s ratification of a human rights treaty increases the expected utility of mobilizing for human rights—i.e., of joining a mass movement to demand the

8  The Cultural Evolution of Extended Benevolence

163

fulfillment of the rights promised in a human rights treaty (Simmons, 2009: 138–153).8 Simmons argues, first, that a country’s ratification of human rights treaties increases the probability that mobilizing for human rights will successfully strengthen a country’s protections of human rights. Ratification of human rights treaties increases the probability of successful mobilization by attracting more allies to the country’s human rights movement, by enhancing the perceived legitimacy of the human rights movement, and by expanding the legal and political strategies that the human rights movement can employ to achieve broader human rights protections (Simmons, 2009: 144–147). Second, Simmons argues that a country’s ratification of human rights treaties increases the utility, or value, of human rights protections for the people within the country. Legal frameworks, including treaties, perform an “educative role” by changing individuals’ perceptions of their own identities and interests (Simmons, 2009: 140). Simmons cites the work of social anthropologist Sally Engle Merry, whose research describes how individuals can incorporate transnational human rights into their already-held values and perspectives (Merry, 2006). When people understand and reflect on the content of a human rights treaty, they may come to think of themselves as being entitled to the rights codified in the treaty (Simmons, 2009: 141–143). As a result, exposure to the content of a human rights treaty may increase the utility of human rights protections for people who come to perceive themselves as rights-bearers. The expected utility of human rights mobilization is the product of the utility of human rights protections and the probability of successfully realizing human rights protections. Simmons’s theory predicts that actual compliance with ratified human rights treaties will be greater in countries where the expected utility of human rights mobilization is higher. 
When this happens, there will be more mobilization, and thus more political pressure placed by citizens on governments to comply with the human rights treaties they have ratified. Simmons posits that the expected utility of human rights mobilization is highest in countries that are transitioning from an autocratic political system to a partially democratic one (Simmons, 2009: 150–153). Many of these partially democratic transitional regimes (PDTRs) are just beginning to emerge from a condition where there had been extensive political repression. Hence there is more demand—i.e., high utility—for human rights protections within these regimes. But because PDTRs are newly and partially democratic, they also have institutional mechanisms—such as the ballot, a free press, and an independent judiciary—that incentivize

8  International human rights treaties are international legal agreements in which the governments ratifying the agreement commit to respecting the human rights of their people. They include the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), the International Covenant on Civil and Political Rights (ICCPR), the International Covenant on Economic, Social and Cultural Rights (ICESCR), the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (CAT), and the Convention on the Rights of the Child (CRC).

164

A. Luco

governments to be responsive to citizens' demands. Hence there is a reasonably high probability that mobilizing for human rights will lead to real improvements in the human rights performance of PDTRs. The predictions of Simmons's expected utility hypothesis are borne out by the data: in PDTRs where the expected utility of human rights mobilization is hypothesized to be highest, ratification of human rights treaties is most strongly associated with improved human rights protections. Simmons compares the human rights performance of stable autocracies, stable democracies, and PDTRs. She finds that PDTRs that ratified the Convention Against Torture (CAT) are much more likely to reduce their incidence of torture than PDTRs that did not ratify it. As Simmons observes, "[r]atification of the CAT is associated with almost a 40 percent increase in the likelihood that a country will improve by one category on the torture scale" (Simmons, 2009: 276). Also, among PDTRs, ratification of the International Covenant on Civil and Political Rights (ICCPR) is associated with an 11 percent improvement in a country's average religious freedom score (Simmons, 2009: 176). Furthermore, ratification of the ICCPR by PDTRs is associated with fairer domestic trials for up to 5 years (Simmons, 2009: 185). The mechanism posited by Simmons's expected utility hypothesis is a content-based bias, since it involves agents selecting novel cultural variants as a result of a cost-benefit calculation. Citizens in partially democratic transitional regimes value the rights codified in human rights treaties, and when assessing whether to mobilize collectively for stronger human rights protections, they deem themselves to have a good enough chance of success under the political circumstances to make mobilizing a better prospect than not mobilizing. Here, stronger human rights protections and the status quo can be considered alternative cultural variants.
Citizens in the relevant regimes—the PDTRs—assess the relative costs and benefits of pursuing novel cultural variants versus staying with the status quo, and they opt in favor of the former.
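Simmons's expected-utility claim can be made concrete with a toy calculation. The sketch below is illustrative only: the utility and probability values are hypothetical numbers chosen to encode the qualitative comparisons in the text (high utility but low probability of success in stable autocracies, the reverse in stable democracies, fairly high on both dimensions in PDTRs), not estimates drawn from Simmons's data.

```python
# Hypothetical numbers illustrating Simmons's expected-utility hypothesis.
# Utility = how much people value stronger rights protections;
# probability = chance that mobilization succeeds.

def expected_utility(utility, probability):
    """Expected utility of mobilizing = utility x probability of success."""
    return utility * probability

regimes = {
    "stable autocracy": (0.9, 0.05),
    "stable democracy": (0.2, 0.80),
    "PDTR":             (0.8, 0.60),
}

for name, (u, p) in regimes.items():
    print(f"{name:16s} EU = {expected_utility(u, p):.3f}")

# The PDTR comes out highest (0.480 vs. 0.045 and 0.160), mirroring the
# prediction that mobilization, and hence pressure for treaty compliance,
# peaks in transitional regimes.
```

On these stipulated numbers, neither the high demand of the autocracy nor the high success probability of the stable democracy suffices on its own; it is the product of the two factors that singles out the PDTR.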

8.4.2  Transmission Biases and Animal Welfare

We can look to other social science research for insight into how transmission biases explain the cultural evolution of extendedly benevolent institutions that protect nonhuman animals. Plausibly, norms that prohibit cruelty to animals were favored in cultural evolution by certain content-based transmission biases. Some of these content biases were likely rooted in people's capacities for sympathy and perspective-taking. In Sentimental Rules (2004), philosopher Shaun Nichols traces a process of growing public opposition to animal cruelty in Western European societies during the nineteenth century. By the late nineteenth century, animal protection laws prohibiting animal blood sports and other abuses became commonplace throughout the United Kingdom and Europe (Nichols, 2004: loc. 1879–1883). But why did laws against animal cruelty become so popular in that particular moment? Strikingly, Nichols notes that anti-cruelty laws were championed by pet owners "who seem to

8  The Cultural Evolution of Extended Benevolence

165

have developed heightened sensitivity to the plight of animals” (Nichols, 2004: loc. 1887). This “heightened sensitivity” should not be surprising, considering that pet owners are well-practiced in taking the perspective of, and sympathizing with, their pets. Indeed, a questionnaire study of Scottish primary school children found that an emotional attachment to pets predicts a concern for the welfare of all animals—not just pet animals, but also farm animals and wild animals (Hawkins et al., 2017). Of course, since pet ownership predates the nineteenth century, it cannot be the whole explanation for extensions in benevolence toward animals that began in that century. James M. Jasper and Dorothy Nelkin discuss other factors in their book The Animal Rights Crusade (Jasper & Nelkin, 1992) (see also Jasper, 1997: 162–165). Jasper and Nelkin suggest that European and American attitudes toward nonhuman animals have changed gradually since the sixteenth century. In this period, a bourgeoisie inhabiting industrialized towns and cities grew, while the share of people practicing agriculture shrank. An agricultural way of life fostered the perception that animals were mere resources to be exploited. Sure enough, people in agricultural societies did have pets and formed emotional attachments to them, but these affectionate bonds coexisted with the economic use and consumption of livestock. With urbanization and industrialization, a declining fraction of the population directly exploited animals as a resource. As Jasper notes, people “hunted less, had fewer fields to plow, and raised fewer animals to slaughter” (Jasper, 1997: 163). Instead, more and more people incorporated pets into their tight emotional circles, cherishing them as beloved companions. Additionally, advances in science in the 18th and 19th centuries, such as Darwin’s theory of common descent, made the similarities between animals and humans more salient in people’s minds (Ibid.). 
These sociological developments allowed for feelings of sympathy and affection to gradually displace a callous, exploitative orientation toward animals. Nichols cites the spread of norms against animal cruelty as evidence in favor of his affective resonance theory of cultural evolution. According to this theory, “norms prohibiting actions that are likely to elicit negative affect, ‘affect-backed norms,’ will have an advantage in cultural evolution” (Nichols, 2004: loc. 2020). In other words, people have a defeasible preference to adopt and follow norms that do not elicit negative affect. Nichols suggests that norms protecting animal welfare are “affect-backed” in the sense that they spare people from experiencing aversive emotions caused by an awareness of the suffering of animals. If so, we should expect to see these norms become more widespread as people increasingly sympathize with and take the perspective of animals. This is indeed what took place in Europe between the sixteenth and nineteenth centuries.
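Nichols's affective resonance hypothesis describes a transmission advantage, and such an advantage can be sketched as a toy frequency dynamic. The update rule and resonance weights below are hypothetical illustrations, not Nichols's own formalism: they simply show how even a modest adoption bias toward an affect-backed norm compounds over repeated episodes of cultural transmission.

```python
# A toy sketch (not Nichols's own model) of the affective resonance
# hypothesis: learners adopt the norm variant of a sampled cultural model
# with probability proportional to the variant's "resonance" weight.
# The weights are hypothetical; the affect-backed norm gets only a
# 10% transmission edge over the affect-neutral alternative.

def next_frequency(p, w_backed, w_neutral):
    """One generation of biased cultural transmission."""
    backed = p * w_backed
    neutral = (1 - p) * w_neutral
    return backed / (backed + neutral)

p = 0.05  # the anti-cruelty norm starts out rare
for _ in range(60):
    p = next_frequency(p, w_backed=1.1, w_neutral=1.0)

# Even a small affective advantage compounds: after 60 generations the
# once-rare norm is near fixation (p is roughly 0.94).
print(f"final frequency: {p:.2f}")
```

The design point is that the bias need not be strong in any single transmission event; a slight, systematic preference for norms that spare us aversive feelings is enough to explain the historical spread Nichols documents.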


8.5  The Moral Sense as an Assemblage of Adapted Transmission Biases

The research reviewed thus far illustrates Boyd and Richerson's cultural evolutionary framework. That framework offers a powerful explanation for the diffusion of extendedly benevolent institutions and behaviors due to frequency-based and content-based transmission biases. In this section, we'll see that the transmission biases driving the cultural evolution of extended benevolence can themselves be explained in the light of cultural evolutionary theory. As the philosopher Tim Lewens (2015) observes, Boyd and Richerson theorize that some transmission biases are adapted in the sense that they evolved because they enhanced the reproductive success of our hominin ancestors (Lewens, 2015: 17; Richerson & Boyd, 2005: 7–8, 71, 196–197). Following Boyd and Richerson, I maintain that the moral sense is an assemblage of such adapted transmission biases. As we saw in Sect. 8.2, Darwin thought of the human moral sense as a complex cognitive-motivational system made up of five components. The psychologist Michael Tomasello (2016, 2018a, 2018b, 2020) has an account of how the "sense of moral obligation" might have evolved. This sense of obligation is a foundation for at least the first three of the components that Darwin ascribes to the moral sense: namely, sympathy, our disposition to follow community rules, and our capacity to express normative "ought" judgments about people's behavior. Hereafter, community rules will be denoted as "norms." Tomasello theorizes that the sense of obligation is a motivation that evolved in Homo sapiens psychology because it facilitated large-scale cooperation among individuals who were not genetically related to one another. If Tomasello's account is correct, the sense of obligation is an adapted transmission bias that inclines people to carry out cooperative behaviors and preserve cooperative arrangements.
In Tomasello’s theory, the sense of moral obligation evolved in two major transitions. First, a little less than 2 million years ago, a global cooling and drying period caused land-dwelling monkeys to migrate into the habitats of early hominins of the Homo genus. The resulting competition over food forced some early Homo to scavenge carcasses killed by other animals. But eventually, early Homo populations— perhaps Homo heidelbergensis some 400,000 years ago—began to forage for food cooperatively in face-to-face, dyadic interactions (Tomasello, 2018a: 662, 664; Tomasello, 2018b). Cooperation was so essential to survival that natural and social selection pressures favored individuals who possessed psychological dispositions to cooperate. Not having collaborators was a sure way to die. On the other hand, individuals who could prove themselves to be reliable cooperation partners were selected as collaborators and mates, and this brought significant fitness advantages. The moral psychology that emerged from this first transition was a “second-­ personal morality,” which Tomasello defines as “the tendency to relate to others with a sense of respect and fairness based on a genuine assessment of both self and others as equally deserving partners in a collaborative enterprise” (Tomasello, 2018b; see also Tomasello, 2018a: 665). Second-personal morality includes a


capacity to feel sympathy for someone who is or prospectively could be a partner in cooperation. But Tomasello emphasizes that second-personal morality also consists of a sense of fairness (Tomasello, 2018a: 664–665; Tomasello, 2020: 5–6; Tomasello, 2016: loc. 808–823). The sense of fairness is an attitude of impartiality where partners in dyadic cooperation recognize each other “as equally deserving individuals, equally worthy of respect” (Tomasello, 2020: 6). This sense of fairness is based on a recognition of self-other equivalence, which arises when cooperating partners both understand that they each have a role they should perform as a means of achieving a jointly intended goal, and that there are impersonal criteria for the proper performance of every role (Tomasello, 2018a: 665). Second-personal morality also includes a capacity for joint commitment—a communicative act in which cooperating partners both pledge to fulfill their respective roles and adhere to a fair division of the gains. Furthermore, joint commitment includes an implicit or explicit avowal that whoever reneges on their pledge to cooperate deserves to be sanctioned. In addition, joint commitment involves a capacity for deviants to feel guilt as a result of violating the terms of the cooperative partnership (Ibid.).9 The second transition in Tomasello’s account led to the sense of moral obligation that we modern humans, Homo sapiens, possess. Tomasello dubs it “‘objective’ morality” (see Tomasello, 2018a: 666–667; Tomasello, 2016: loc. 163–179, 1712). According to Tomasello, two great demographic shifts gave rise to Homo sapiens about 150,000  years ago (Tomasello, 2016: loc. 154; Tomasello, 2018b).10 First, intense competition between groups forced ancestral hominins to seek protection from marauders by coalescing into more tightly knit social groups. The groups created divisions of labor on which all group members depended for their survival in foraging and defense. 
Second, population growth led to tribal organization. Small foraging bands composed of a few families numbering in the dozens united into much larger tribes composed of thousands of individuals. Members of the same tribe cooperated among themselves, while they competed with other tribes. Fellow tribespeople included unfamiliar non-kin—individuals who neither had any genetic relation nor any history of face-to-face interaction with one another. However, it was essential for the early humans to differentiate unfamiliar members of their own tribe from outsiders. For people in the same tribe were far more reliable sources of cooperation and protection. Consequently, the tribes formed distinct cultures which served as markers of shared group allegiance, values, and skills. Those who shared the same tribal membership exhibited the same manner of speech, dress, food preparation, and the like (Tomasello, 2018a: 666). So, the ancestral humans who

9  Tomasello stresses that chimpanzees and bonobos, our closest evolutionary cousins, do not have a second-personal morality because they do not form joint commitments. They do collaborate with others to acquire food, mates, and social dominance. And they do exhibit helping behavior which suggests that they feel sympathy for others in need. However, their sympathy is limited to those with whom they have collaborated in the past. And, crucially, they do not exhibit resentment elicited by a perception of unfair treatment (Tomasello, 2016: loc. 431–692).
10  Our species may well be older. Recent excavations of fossils from Jebel Irhoud in Morocco have been dated to 315,000 years ago (Boyd & Silk, 2018: 325).


survived and reproduced most successfully were the ones whose psychologies enabled them to learn the ways of their culture, conform to their culture's practices, teach their cultural practices to others, maintain a strong sense of cultural identity and allegiance, and generally care for the welfare of the cultural group (Tomasello, 2018b; Tomasello, 2020: 7–8). With these demographic changes, the sympathies of individuals scaled up to a concern for all members of the cultural group, including unfamiliar non-kin in that group. The impartial sense of fairness also scaled up. It was then understood that a complex division of labor, consisting of many interdependent roles performed by many individuals, had to be sustained in order to achieve collectively intended goals. Joint commitments gave way to social norms. Each member of the cultural group expected all members to comply with the group's norms; each was disposed to sanction norm-violators; and each felt accountable to social norms in such a way that one's own failure to comply would induce guilt and a troubling sense of identity-loss. Social norms were also internalized psychologically as an objective "view from nowhere." They were accepted by all group members as normative standards that everyone was obliged to live up to. At the same time, it appeared to group members that the social norms did not issue from any single individual (Tomasello, 2016: loc. 2944–2961). This internalization of social norms extended impartial attitudes, so that all groupmates were thought to be equally deserving of others' compliance with the prevailing norms (Tomasello, 2016: loc. 168, 2969).11 Tomasello's account explains how three facets of the moral sense may have been adaptations for early human cooperation—namely, (1) the capacity to make "ought" judgments, along with (2) sympathy and (3) the disposition to abide by norms.
Hereafter, I will explain how these three facets operate as transmission biases favoring the cultural evolution of extended benevolence.

8.6  How Extended Benevolence Emerged

There is an explanatory challenge that makes it difficult to see how extendedly benevolent behaviors and institutions could originally emerge. The challenge can be expressed as a question: why wouldn't the moral sense evolve to motivate parochial and xenophobic behaviors that exclusively serve the interests of a cultural in-group? Tomasello himself entertains the idea that what we modern humans consider to be our cultural in-group could potentially be extended to include all of humanity (see Tomasello, 2016: loc. 182; Tomasello, 2018b; Tomasello, 2020: 7). But if, as Tomasello explicitly argues, our ancestors survived by making distinctions between insiders and outsiders, then wouldn't selection pressures eliminate any

11  The impartial perspective generated by social norms does not guarantee equal status in society. Of course, social norms can allow for gross inequalities in power, prestige, privilege, and wealth. Instead, the impartial attitude that arises from social norms is the attitude that everyone ought to comply with prevailing norms.


psychological tendency to perceive one's in-group as the whole human population? Moreover, even if we grant that evolutionary forces permitted a moral psychology that sees the entire human species as an all-inclusive in-group, the details of how this orientation would arise by cultural evolutionary processes are not clear. For this reason, Allen Buchanan and Russell Powell (2018) voice skepticism about the prospects for a Boyd and Richerson-style explanation of the cultural evolution of "inclusivist morality," which is Buchanan and Powell's term for extended benevolence (Buchanan & Powell, 2018: 175). Buchanan and Powell even insist that cultural evolutionary transmission biases "cannot explain why inclusivist norms rose to sufficiently high frequencies…or [were] found to be persuasive by large segments of the population" (Ibid., emphasis added). My response to this explanatory challenge calls attention to an adaptive problem our ancestors faced. Indeed, they would have needed to distinguish unfamiliar non-kin of the same tribe, who were usually more reliable as cooperation partners, from outsiders who were usually less reliable. Because unfamiliar members of the same tribe needed to identify one another as trustworthy collaborators, symbols and rituals were used as markers of group identity.12 Symbols are things to which meaning is ascribed by a social convention (Wurz, 2012). A ritual is a pattern of behavior practiced by a social group. Rituals are often symbolic in that they carry meaning for the people who practice them. Anthropologist Joseph Henrich classifies rituals as a type of social norm (Henrich, 2016: 36). A team of anthropologists led by Kim Hill studied the social ties that bind collections of hunter-gatherer bands into a tribe (Henrich, 2016: 162–164; Hill, Wood, Baggio, Hurtado, & Boyd, 2014).
They found that ritual relationships were more important than genetic and affinal relationships in facilitating crucial patterns of cooperation such as the sharing of meat and information, as well as receiving help when one is sick or injured (Henrich, 2016: 163). Ritual relationships, such as participating in multi-band sparring clubs, were found to be strong predictors of inter-band interactions in two mobile hunter-gatherer groups—the Aché and Hadza (Hill et al., 2014: 7). Boyd and Richerson also highlight that symbolic markers of group identity include shared language, dialect, styles of dress, and common adherence to rituals. Rituals include "gift exchanges, ceremonial activities, and rules of exogamy," and they are among the symbolic markers that can provide human groups with a kind of insurance against misfortune (Richerson & Boyd, 2005: 221). For instance, the North American Blackfeet once hunted bison as their core subsistence activity. Since failed hunts were common, the Blackfeet developed a tribal-scale

12  There is accumulating evidence that early Homo sapiens were engaging in symbolic and ritual behavior by around 70,000 years ago (Boyd & Silk, 2018: 327–330). Perforated shell beads were excavated from the Grotte de Pigeons in Morocco. This site is dated to 82,000 years ago. Some of the shells were painted with red ocher, and may have been worn on a cord or attached to clothing. Today, African peoples commonly use red ocher for symbolic purposes (Boyd & Silk, 2018: 329). At Diepkloof Rock Shelter in South Africa, 60,000-year-old ostrich shell fragments were found. The shells were decorated with geometric patterns, and are believed to signify group identity in the same way that pottery decorations do for modern foragers today (Ibid.).


network of relationships among smaller bands. This allowed bands that had been unsuccessful in their hunts to seek the assistance of more successful bands within the same tribe (Richerson & Boyd, 2005: 227). Additionally, Boyd and Richerson have argued that symbolic markers were used by our ancestors to reap the benefits of cooperation among tribal societies (Richerson & Boyd, 2004). Boyd and Richerson explain that the late Pleistocene hunter-gatherer ancestors who left Africa some 50,000 years ago maintained complex toolkits. These toolkits would have required a huge social network of people far larger than a tribe to correct accumulations of errors in reproducing the tools (Richerson & Boyd, 2004: 69). In addition, there were other benefits that Homo sapiens attained through inter-tribal cooperation, including military alliances, long-distance trade, and intermarriage (Ibid.). For such inter-tribal cooperation and tool refinement to be possible, modern humans needed to find some means of signaling their reliability as collaborators to the people of other tribes. The solutions they came up with were of the same kind as strategies used to bring people together at the tribal level. They constructed symbols and rituals that turned out to have the power to unify countless masses under a single mega-group identity. Boyd and Richerson reserve the term "workaround" for symbolic markers that can be used to establish mega-group identities (Richerson & Boyd, 2004: 69–71). Like the symbolic markers of intra-tribal membership, workarounds include symbols and rituals. Unlike the intra-tribal markers, workarounds could designate membership in nations comprising over a billion inhabitants who participate in a vast division of labor. Architectural monuments, for instance, provide symbols of national identity, and they can serve as sites of mass ritual performances.
Religions and political ideologies also perform the function of workarounds; indeed, they can bind people into mega-groups even larger than the nation. Boyd and Richerson attribute to "humanistic," "universalistic," and "liberal" ideologies the potential to establish an inclusive "global village" identity. A "global village" identity may form the basis for extendedly benevolent concern to all human beings, all sentient creatures, and even all the denizens of the biosphere (Richerson & Boyd, 2004: 71, 73). The claim that ideologies can generate an ultra-inclusive group identity finds support in the research reviewed in Sect. 8.3. That work suggested that normative attitudes favoring extended benevolence, including beliefs in animal rights and emancipative values, have an impact in bringing about extendedly benevolent institutions such as animal welfare protections, human rights, and democracy. From Tomasello's research, we've seen evidence that the disposition to learn, follow, and enforce norms is likely to be an adapted element of the human moral sense. Following Joseph Henrich, let us call this disposition norm psychology (Henrich, 2016: 188–189). Crucially, normative attitudes expressing a commitment to extended benevolence could spread through norms. In their comprehensive account of norms, Geoffrey Brennan, Lina Eriksson, Robert Goodin, and Nicholas Southwood (2013) characterize norms as clusters of normative attitudes. On this view, a normative principle P is a norm within a group G if and only if (i) a significant proportion of the members of G accept P and (ii) a significant proportion of the members of G believe that (i) is true (Brennan et al., 2013: 29). Under these


conditions, people will be motivated to act as principle P prescribes. The fact that (i) and (ii) are conditions for the presence of a norm strongly suggests that a frequency-based bias is a key enabler of the emergence of norms. Indeed, one way in which new norms may emerge is through a "normative cascade" (Brennan et al., 2013: 98–99). Typically, there is variation among individuals in their respective population thresholds for what proportion of other people in their social group need to accept a normative principle P, before they are willing to accept P themselves. Some group members may have low population thresholds, in the sense that they accept a normative principle and are willing to follow it even when a very low proportion of others in their community share their attitude. Other group members may have somewhat higher thresholds, so that they become willing to accept and follow a normative principle P only if they observe that a higher proportion of the community already accepts and follows P. Indeed, there may be a diffuse distribution of such thresholds in the relevant community. If there is, then a normative cascade may unfold: a few innovators who accept a novel normative principle P may convince a few others with low population thresholds to accept P, and then this larger mass of individuals convinces still more people with slightly higher thresholds to accept P, and so on until virtually the entire community accepts and follows P. Normative cascades have been cited to explain the end of footbinding in China and the abandonment of female genital mutilation in hundreds of villages across Northwest Africa (Brennan et al., 2013: 99; Mackie & LeJeune, 2009). Furthermore, Brian Greenhill (see Sect. 8.4) found limited empirical support for the operation of a normative cascade in the establishment of human rights cultures within IGOs (Greenhill, 2015: 98–101).
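The cascade dynamic just described can be sketched as a simple threshold model in the spirit of Granovetter's classic analysis of collective behavior. The populations and threshold values below are hypothetical; the point is only that a diffuse threshold distribution lets acceptance snowball, while a gap in the distribution stalls the cascade.

```python
# A sketch of a normative cascade as a threshold model. Thresholds are
# hypothetical: agent i accepts principle P once the share of the group
# already accepting P reaches thresholds[i]. Innovators have threshold 0.

def run_cascade(thresholds):
    """Iterate to a fixed point and return the final share of accepters."""
    n = len(thresholds)
    accepted = 0
    while True:
        share = accepted / n
        new_total = sum(1 for t in thresholds if t <= share)
        if new_total == accepted:
            return share
        accepted = new_total

# A diffuse threshold distribution: one innovator, then agents at every
# percentile. Each new accepter tips the next agent, so the cascade runs
# to completion.
diffuse = [i / 100 for i in range(100)]
print(run_cascade(diffuse))    # → 1.0 (everyone accepts P)

# A gap in the distribution halts the cascade: the lone innovator cannot
# tip anyone whose threshold exceeds 1/5 of the group.
gapped = [0.0, 0.45, 0.5, 0.6, 0.7]
print(run_cascade(gapped))     # → 0.2 (the cascade stalls)
```

This toy model echoes the text's emphasis on the distribution of thresholds: whether a few innovators suffice to carry principle P to the whole community depends not on the average threshold but on whether the distribution is diffuse enough for each wave of accepters to tip the next.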

8.7  The Proliferation of Extended Benevolence

I've been arguing that normative attitudes favorable to extended benevolence can emerge through symbolic markers and norms. Now I'll make a case for a final point: sympathy—mediated by contact and perspective-taking—is an adapted transmission bias that can explain how extended benevolence came to be as widespread as it is. The definitions of "sympathy" and closely associated terms, like "empathy," have long been debated (for discussion, see Zaki, 2019: 178–182). In the Descent of Man, Darwin uses "sympathy" to designate an emotion that motivates an individual to help others (see Sect. 8.2 above; Darwin, 1879: 121). Tomasello uses the term in the same way (e.g., see Tomasello, 2016: loc. 80, 496–520). Other authors use different words—words such as "compassion" and "empathic concern"—to refer to this same emotion that motivates helping (see Zaki, 2019: 180). I shall follow Darwin and Tomasello's usage of "sympathy." I argue that sympathy works as a content-based, adapted transmission bias driving the adoption of the behaviors, institutions, and attitudes associated with extended


benevolence.13 To see how this happens, we need to appreciate how sympathy is mediated by two factors: contact and perspective-taking. The more people make contact with and take the perspective of others, the more they sympathize with others. Extended benevolence involves sympathy for all nations, races, and perhaps even all sentient beings. Such expansive sympathy is the product of social environments in which there is an abundance of opportunities for people to make contact with each other and take one another’s perspectives. In social psychology, contact has consistently been found to reduce intergroup hostility, especially in the context of cooperative pursuits of common goals (Paluck, Green, & Green, 2019). Granted, it is possible for contact to intensify intergroup antagonisms, because the groups are sometimes unable to reconcile their differences. This is known as negative intergroup contact. Nevertheless, there is evidence that positive intergroup contact, which results in diminished antagonisms, is more frequent than negative contact. Thus, the cumulative effect of many intergroup contacts can be reduced hostility and increased goodwill overall (Graf, Paolini, & Rubin, 2014; Pettigrew, 2008). Contact is also an enabling condition for sympathy. When people are in contact with others, it presents them with the opportunity to take their perspective—to imagine what it would feel like to be in their situation. Another consistent finding from social psychology is that taking another person’s perspective can generate sympathy, which in turn acts as a motivation to help the other (Stich, Doris, & Roedder, 2010: 172–174). For instance, in a study by Dovidio, Allen, and Schroeder (1990), subjects who were instructed to take the perspective of a young woman in distress were more likely to help the woman. 
Vaish, Carpenter, and Tomasello (2009) found that 18-month-old children would look with concern at and subsequently help a person suffering from an injury, even when the victim did not display any overt emotions. This suggests that, despite the lack of overt emotional cues, the children could take the perspective of the victim, understand that they need help, and then actually provide help. If sympathy produces extended benevolence, then the enabling conditions of sympathy—namely, contact and perspective-taking—should predict the presence of extended benevolence. And this relationship does indeed hold. I suggested earlier (in Sect. 8.3) that extended benevolence manifests in the acceptance of emancipative values. It turns out that the extent to which people hold emancipative values is associated with a form of contact called "connective opportunities." A core

13  There is considerable evidence that sympathy is an adaptation. Chimpanzees and humans may share a common ancestor that possessed a capacity for sympathy. In experimental settings, chimpanzees have been observed helping conspecifics who they observe to be in need. For instance, chimpanzees help conspecifics trying to get food and tools (Tomasello, 2016: loc. 596–618). In addition, human beings seem to be born with a capacity to sympathize. As Tomasello notes, infants as young as fourteen months help unfamiliar adults to fetch out-of-reach objects, and they comfort others who show signs of distress (Tomasello, 2016: loc. 929–952). To explain why a sympathetic capacity might have enhanced the reproductive success of our ancestors, evolutionary theorists have cited the mechanisms of kin selection, mutualism, direct reciprocity, social selection, and cultural group selection (Tomasello, 2016: loc. 225–414).


proposition of Christian Welzel’s research is that the popular acceptance of emancipative values can be predicted by three socioeconomic factors, which Welzel calls action resources: (1) material resources such as food, shelter and income; (2) intellectual resources such as information, skills, and education; and (3) connective opportunities such as modern transportation and mass communications (Welzel, 2013: loc. 2979–3097). In one analysis, Welzel uses a country’s per capita GDP as a measure of material resources, the average number of schooling years in a country as a measure of intellectual skills, and internet access per 1000 persons as a measure of connective opportunities. He again relies on the World Values Surveys to measure the acceptance of emancipative values within a society. He then runs regressions of emancipative values against these three socio-economic measures for samples of 60 to 80 societies, and finds that 57% of the variation in emancipative values is explained by GDP per capita, 64% is explained by schooling years, and 67% is explained by internet access (Welzel, 2013: loc. 2979–2990). Other statistical models Welzel constructs with different measures and time-lagged data indicate the same strong dependency of emancipative values on the three action resources (Welzel, 2013: chapter 4). Welzel’s finding that emancipative values depend on connective opportunities is unsurprising in light of the relationship between contact and sympathy. Access to transportation and communication technologies raises the likelihood that different people—including people from very different walks of life—will come into contact. Through such enhanced contact, people have more opportunities to take the perspectives of others. 
Taking more perspectives could broaden people's sympathies for others, and as a result, people may be more inclined to adopt emancipative values—values that uphold equality of opportunity and freedom of choice for everyone.14 Furthermore, perspective-taking and sympathy may also explain the spread of extended benevolence toward nonhuman animals. Perspective-taking can be facilitated in many ways. One way is through texts and images that document the plights of others. Jasper's work (discussed in Sect. 8.3) traced the way that texts and images recording the suffering of animals drove people to join the animal rights movement. Additionally, Brian Lowe and Caryn Ginsberg (2002) conducted a survey of 100 animal rights activists from North America, Europe, and South Korea, and found that a strong majority of respondents rated pamphlets (75%) and books (76%) as a "somewhat important" or "very important" influence in prompting them to get involved in the animal rights movement (Lowe & Ginsberg, 2002: 207–208). In addition, pamphlets (87%) and books (83%) were overwhelmingly rated by the

14 Welzel identifies internet connectivity as a form of connective opportunity. It may be suspected that internet connectivity does not promote extended benevolence, but rather antipathy between different ideological groups who segregate themselves in digital "bubbles." While this question certainly deserves further study, a recent analysis by Jha and Kodila-Tedika (2020) found a strong positive correlation between the use of Facebook and democracy ratings in 125 countries. Evidently, there is no tension between social media and democracy, which is itself one form of extendedly benevolent institution.

A. Luco

respondents as either "somewhat" or "very" important in their work to influence others. This finding lends credibility to the idea that texts and images provide opportunities to take the perspectives of others (see also Tamir, Bricker, Dodell-Feder, & Mitchell, 2016). Successful perspective-taking can prime sympathy for humans and nonhumans alike.

To summarize, the explanatory challenge was the task of explaining how the moral sense could evolve in such a way that it fosters extended benevolence beyond one's cultural in-group and even beyond one's species. My response to this challenge has been that ideological workarounds, norm psychology, contact, and perspective-taking can extend the range of beings with whom one sympathizes to include cultural outsiders and animals.

Here it may be objected that Tomasello's model is inconsistent with the above theory of the cultural evolution of extended benevolence. According to this objection, Tomasello's account predicts that there would be strong constraints on the scope of human benevolence. The reason is that there would have been no fitness advantage for our hominin ancestors to sympathize with out-groups and animals. Instead, only cooperation with members of symbolically marked in-groups would have been fitness-enhancing, since on Tomasello's account, other members of one's symbolically marked in-group would have been the most reliable and trustworthy partners in cooperation. An ancestral individual who was inclined to cooperate with outsiders would often be exploited by them; an ancestor who helped animals would get virtually no fitness benefit from such helpful acts, since most animals cannot cooperate in the ways that are essential for human survival.

My account of extended benevolence has claimed that some of the components of the moral sense, namely norm psychology and sympathy, are adapted transmission biases.
Adapted biases are adaptations: they exist because they helped our hominin ancestors to survive and reproduce in their environments. Although I've argued that extended benevolence is a product of the moral sense, I need not commit myself to the dubious idea that extended benevolence itself ever enhanced ancestral reproductive success. For some products of adaptations are not adaptations themselves. While adapted transmission biases are adaptations, the cultural variants they select or generate may not be.

As a case in point, Boyd and Richerson cite the trend of declining birth rates in economically developed countries known as the demographic transition (Richerson & Boyd, 2005: 169–174). Developed countries typically contain a large proportion of educated professionals, such as doctors, lawyers, managers, and politicians, who tend to achieve high salaries and social status. Attaining that status normally requires investing considerable time in an education and career, which often limits the time people can dedicate to raising children. The result is lower fertility rates in countries with highly professionalized workforces.

Looking at the demographic transition from a cultural evolutionary perspective, prestige bias and success bias could explain why people prioritize education and careers over childrearing. If the high-status, successful people are well-educated professionals who have just a few children, their life choices will be imitated by others. So, although prestige and success biases are plausibly adaptations, some of the behaviors (the cultural variants) they motivate may be downright detrimental to our reproductive success. Similarly, it's possible that the moral sense is an assemblage of adaptations that enhanced our ancestors' reproductive success by facilitating cooperation, while its component capacities for sympathy and norm psychology are capable of producing attitudes and behaviors that do not advance reproductive success. Thus, even if extended benevolence is not itself an adaptation, this is consistent with the claim that the moral sense is an adaptation which gave rise to extended benevolence.15

8.8 Conclusion: An Evolutionary Foundation for Extended Benevolence

I conclude that the emergence and proliferation of extended benevolence can be explained to a significant extent by cultural evolutionary forces. The explanatory strategy of cultural evolutionary theory is recognizably Darwinian in style, since it characterizes some cultural evolutionary forces, namely the adapted transmission biases, as adaptations. Moreover, the account defended above suggests that three of the five components of the moral sense identified by Darwin are sufficient to explain the emergence of extended benevolence: namely, the capacity to make normative judgments, the disposition to comply with community rules (norm psychology), and sympathy. Some commentators, including Buchanan and Powell, have doubted that evolutionary mechanisms could account for extended benevolence. However, I've argued that these observers underestimate the explanatory resources of cultural evolutionary theory. When we look to our deep past, we do find ample indication that our ancestors were parochial and xenophobic. But we can also find, in the historical process of our becoming cultural creatures, the better angels of our nature.

Acknowledgements This research is supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 1 (RG70/16).

15 It is an open question whether extended benevolence will ever be outmoded by alternative cultural variants that do promote the reproductive success of individuals or groups. While this is possible, it is not inevitable. Whether or not it actually comes to pass depends on the relative strength of natural selection against extended benevolence compared to the cultural evolutionary forces that favor extended benevolence.

References

Birch, J. (2017). The philosophy of social evolution. Oxford, UK: Oxford University Press.
Boyd, R., & Silk, J. B. (2018). How humans evolved (8th ed.). New York: W.W. Norton & Company.
Brennan, G., Eriksson, L., Goodin, R. E., & Southwood, N. (2013). Explaining norms. Oxford, UK: Oxford University Press.


Buchanan, A., & Powell, R. (2018). The evolution of moral progress: A biocultural theory. Oxford, UK: Oxford University Press.
Cooney, N. (2014). Veganomics: The surprising science on vegetarians, from the breakfast table to the bedroom (Kindle ed.). CreateSpace Independent Publishing Platform.
Darwin, C. (1879/2004). The descent of man, and selection in relation to sex. London: Penguin Random House UK.
Dovidio, J. F., Allen, J., & Schroeder, D. A. (1990). Specificity of empathy-induced helping: Evidence for altruistic motivation. Journal of Personality and Social Psychology, 59(2), 249–260.
Fariss, C. J. (2014). Respect for human rights has improved over time: Modeling the changing standard of accountability. The American Political Science Review, 108(2), 297–318.
Freedom House. (2020). Countries and territories. From Freedom in the World 2020. URL: https://freedomhouse.org/countries/freedom-world/scores
Giménez, E. (2015). Argentine orangutan granted unprecedented legal rights. CNN Español, 4 January 2015. URL: https://edition.cnn.com/2014/12/23/world/americas/feat-orangutan-rights-ruling/index.html
Graf, S., Paolini, S., & Rubin, M. (2014). Negative intergroup contact is more influential, but positive intergroup contact is more common: Assessing contact prominence and contact prevalence in five Central European countries. European Journal of Social Psychology, 44, 536–547.
Greenhill, B. (2015). Transmitting rights: International organizations and the diffusion of human rights practices. Oxford, UK: Oxford University Press.
Hawkins, R. D., Williams, J. M., & the Scottish Society for the Prevention of Cruelty to Animals. (2017). Childhood attachment to pets: Associations between pet attachment, attitudes to animals, compassion, and humane behaviour. International Journal of Environmental Research and Public Health, 14(5), 490.
Henrich, J. (2016). The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton, NJ: Princeton University Press.
Hill, K. R., Wood, B. M., Baggio, J., Hurtado, A. M., & Boyd, R. T. (2014). Hunter-gatherer inter-band interaction rates: Implications for cumulative culture. PLoS One, 9(7), e102806.
Inglehart, R. F. (2018). Cultural evolution: People's motivations are changing, and reshaping the world. Cambridge, UK: Cambridge University Press.
Inglehart, R., & Norris, P. (2003). Rising tide: Gender equality and cultural change around the world. Cambridge, UK: Cambridge University Press.
Inglehart, R., & Welzel, C. (2005). Modernization, cultural change, and democracy: The human development sequence. Cambridge, UK: Cambridge University Press.
Jasper, J. M. (1997). The art of moral protest: Culture, biography, and creativity in social movements. Chicago: University of Chicago Press.
Jasper, J. M., & Nelkin, D. (1992). The animal rights crusade: The growth of a moral protest. New York: Free Press.
Jha, C. K., & Kodila-Tedika, O. (2020). Does social media promote democracy? Some empirical evidence. Journal of Policy Modeling, 42, 271–290.
Lewens, T. (2015). Cultural evolution. Oxford, UK: Oxford University Press.
Lowe, B. M., & Ginsberg, C. F. (2002). Animal rights as a post-citizenship movement. Society & Animals, 10(2), 203–215.
Lührmann, A., Maerz, S. F., Grahn, S., Alizada, N., Gastaldi, L., Hellmeier, S., Hindle, G., & Lindberg, S. I. (2020). Autocratization surges – resistance grows: Democracy report 2020. Varieties of Democracy Institute (V-Dem).
Mackie, G., & LeJeune, J. (2009). Social dynamics of abandonment of harmful practices: A new look at the theory. United Nations Children's Fund (UNICEF) Innocenti Research Centre.
Merry, S. E. (2006). Human rights and gender violence: Translating international law into local justice. Chicago: University of Chicago Press.
Nichols, S. (2004). Sentimental rules: On the natural foundations of moral judgment (Kindle ed.). Oxford, UK: Oxford University Press.
Paluck, E. L., Green, S. A., & Green, D. P. (2019). The contact hypothesis re-evaluated. Behavioural Public Policy, 3(2), 129–158.
Pettigrew, T. F. (2008). Future directions for intergroup contact theory and research. International Journal of Intercultural Relations, 32, 187–199.
Richerson, P. J., & Boyd, R. (2004). Darwinian evolutionary ethics: Between patriotism and sympathy. In P. Clayton & J. Schloss (Eds.), Evolution and ethics: Human morality in biological and religious perspective (pp. 50–77). Cambridge, UK: Wm. B. Eerdmans Publishing.
Richerson, P. J., & Boyd, R. (2005). Not by genes alone: How culture transformed human evolution. Chicago: University of Chicago Press.
Roser, M. (2020). Democracy. Published online at OurWorldInData.org. URL: https://ourworldindata.org/democracy
Simmons, B. A. (2009). Mobilizing for human rights: International law in domestic politics. Cambridge, UK: Cambridge University Press.
Stich, S., Doris, J. M., & Roedder, E. (2010). Altruism. In J. M. Doris and the Moral Psychology Research Group (Eds.), The moral psychology handbook (pp. 147–205). Oxford, UK: Oxford University Press.
Tamir, D. I., Bricker, A. B., Dodell-Feder, D., & Mitchell, J. P. (2016). Reading fiction and reading minds: The role of simulation in the default network. Social Cognitive and Affective Neuroscience, 11(2), 215–224.
Tomasello, M. (2016). A natural history of human morality (Kindle ed.). Cambridge, MA: Harvard University Press.
Tomasello, M. (2018a). Précis of A natural history of human morality. Philosophical Psychology, 31(5), 661–668.
Tomasello, M. (2018b). How we learned to put our faith in one another's hands: The origins of morality. Scientific American, 319(3), 70–75.
Tomasello, M. (2020). The moral psychology of obligation. Behavioral and Brain Sciences, 43(e56), 1–58.
Treisman, D. (2018). Is democracy really in danger? The picture is not as dire as you think. The Washington Post, 19 June 2018.
Vaish, A., Carpenter, M., & Tomasello, M. (2009). Sympathy through affective perspective taking and its relation to prosocial behavior in toddlers. Developmental Psychology, 45(2), 534–543.
Waldau, P. (2011). Animal rights: What everyone needs to know. Oxford, UK: Oxford University Press.
Welzel, C. (2013). Freedom rising: Human empowerment and the quest for emancipation (Kindle ed.). Cambridge, UK: Cambridge University Press.
Welzel, C., Inglehart, R., Bernhagen, P., & Haerpfer, C. W. (2019). Introduction. In C. Haerpfer, P. Bernhagen, C. Welzel, & R. F. Inglehart (Eds.), Democratization (2nd ed., pp. 1–18). Oxford, UK: Oxford University Press.
Wurz, S. (2012). The transition to modern behavior. Nature Education Knowledge, 3(10), 15.
Zaki, J. (2019). The war for kindness: Building empathy in a fractured world. New York: Broadway Books.

Chapter 9

The Contingency of the Cultural Evolution of Morality, Debunking, and Theism vs. Naturalism

Matthew Braddock

Abstract Is the cultural evolution of morality fairly contingent? Could cultural evolution have easily led humans to moral norms and judgments that are mostly false by our present lights? If so, does it matter philosophically? Yes, or so we argue. We empirically motivate the contingency of cultural evolution and show that it makes two major philosophical contributions. First, it shows that moral objectivists cannot explain the reliability of our moral judgments and thus strengthens moral debunking arguments. Second, it shows that the reliability of our moral judgments is evidence for theism over metaphysical naturalism.

Keywords Historical contingency · Cultural evolution · Evolution of morality · Evolutionary debunking arguments · Theism · Naturalism · Moral epistemology · Moral objectivism · Moral realism · Cultural group selection · Innate biases · Moral norms

M. Braddock (*)
Department of History and Philosophy, University of Tennessee at Martin, Martin, TN, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
J. De Smedt, H. De Cruz (eds.), Empirically Engaged Evolutionary Ethics, Synthese Library 437, https://doi.org/10.1007/978-3-030-68802-8_9

9.1 Introduction

Is the cultural evolution of morality fairly contingent? Could cultural evolution have easily led humans to moral norms and judgments that are mostly false by our present lights? If so, does it matter philosophically? Yes, or so we argue. We empirically motivate the contingency of cultural evolution and show that it makes two major philosophical contributions. First, it shows that moral objectivists cannot explain the reliability of our moral judgments and thus strengthens moral debunking arguments. Second, it shows that the reliability of our moral judgments is evidence for theism over metaphysical naturalism.

Let us outline the paper. In Sect. 9.2, we clarify our contingency question. In Sect. 9.3, we clarify our answer, i.e. our contingency thesis regarding the cultural evolution of morality. In Sect. 9.4, we show that our contingency thesis contributes to moral debunking arguments and the debate between theism and metaphysical naturalism. In Sect. 9.5, we empirically motivate our contingency thesis. Then we conclude by looking at the bigger picture of contingency and its philosophical implications.

9.2 The Contingency Question

Our contingency question: is the cultural evolution of morality fairly contingent? Could cultural evolution have easily led humans to moral norms and judgments that are mostly false by our present lights? We must clarify our contingency question by introducing the empirical background. When discussing the evolution of morality, we must distinguish between three different referents of the term "morality" and thus three different empirical questions:

Moral Capacity Question: What explains our moral capacity (i.e. our capacity to think in moral terms and make moral judgments)? Did this capacity evolve as an evolutionary adaptation, byproduct, or exaptation? How did it evolve?

Moral Psychology Question: What explains human moral psychology? Which components (e.g. biases, emotions, concepts) have evolved, and how did they evolve?

Moral Content Question: What explains the content of human moral norms and judgments (e.g. regarding when and whom it is permissible to kill)? How have evolutionary and cultural processes affected this content?

The contingency of moral content depends on which processes have affected that content. For example, it could be that certain evolutionary processes robustly constrain the content of our moral norms, whereas certain cultural processes can easily lead humans in all sorts of moral directions. So we must address the Moral Content Question to answer our contingency question.

Fortunately, the contingency of our moral norms and judgments does not depend on how exactly we acquired our moral capacity and moral psychology. Even if our moral capacity and moral psychological biases are innate, the content of our moral norms and judgments may be fairly contingent. For instance, Richard Joyce (2007) argues that our moral capacity and certain moral concepts are innate but maintains that the content of human moral norms is culturally contingent. Thus, we don't need to address the Moral Capacity Question and Moral Psychology Question. Nor do we need to take stands on the empirical debates surrounding them (Joyce, 2017; Sripada, 2008).

We must address the Moral Content Question. Let us clarify it. Moral norms centrally refer to social rules or principles that specify actions that are morally permissible, morally required, or morally wrong (Mesoudi & Danielson, 2008; Sripada & Stich, 2006). Moral norms shape our moral judgments. Moral norms and judgments come with varying degrees of determinacy. Let us distinguish between their


domain content and determinate content: virtually all groups have norms about the domains of harm, fairness, sexual behavior, and so on, but the specific norms and judgments within these domains vary. We are interested in explaining determinate content, not domain content. For explaining why humans have acquired some norm or other about the domain of harm would not tell us whether humans could have acquired harm norms crucially incompatible with our own.

With the scientific majority, we presume that biological evolution and our innate biases by themselves cannot explain the determinate content of the vast majority of human moral norms and judgments. Natural selection may play major roles in the evolution of morality (e.g. in explaining our moral capacity and our moral psychology) and may explain the domain content of moral norms or even the determinate content of some moral norms (e.g. incest norms), but cultural processes play the main difference-making role with respect to determinate content. We culturally inherit our moral norms and judgments. This empirical presumption is the dominant view in the scientific and philosophical literature on the evolution of morality and moral psychology (Haidt, 2012; Kitcher, 2011; Levy & Levy, 2020; Mesoudi & Danielson, 2008; Sripada, 2008; Sterelny, 2012).

Consider two standard reasons for it. First, we know that determinate moral norms (e.g. regarding infanticide and slavery) can change relatively quickly, even within decades, which points to culture as the main engine, because we know that cultural processes and cultural evolution tend to proliferate new traits at a much faster rate than biological evolution. Second, the best explanation of the cross-cultural diversity of determinate moral norms is that varying cultural dynamics are determining the content of these norms rather than innate biological factors. The empirical presumption is not without some apparent dissenters.
In the moral psychology literature, we find disagreement about the nature of innate biases, particularly regarding how strongly or weakly they constrain the content of the moral norms that they favor. But for the standard reasons mentioned (e.g. cross-cultural moral diversity), the vast majority of content nativist models, including "moral foundations theory" (Haidt, 2012) and even the programmatic "moral grammar" models (Mikhail, 2011), allow that cultural processes play the main difference-making role when it comes to explaining the parameters of moral norms and thus their determinate content (Sripada, 2008). Accordingly, we presume that cultural processes explain the determinate content of the vast majority of human moral norms and judgments.

But our understanding of cultural processes in conventional history and the social sciences (e.g. social psychology, anthropology) strongly suggests that overall they are fairly contingent with respect to moral content. Mundane cultural processes include various forms of social learning and teaching. For example, why have Jews accepted the determinate moral norms of the Torah or Pentateuch? A mundane cultural-historical explanation would be that they were taught these norms, for example through explicit moral instruction, narratives, exemplars, and so on (Sterelny, 2012). But what moral norms humans are culturally taught and socially learn depends on a complex array of background enabling conditions. Such background conditions include informational conditions (e.g. non-moral background beliefs),


religious conditions, political conditions (e.g. who has power over the institutions of cultural transmission), legal conditions, economic conditions, technological conditions, and the individual and collective actions and decisions of institutions and moral agents. Histories of moral, legal, and political change often emphasize the contingency of these background conditions: they could have easily been different (Prinz, 2018). Their contingency motivates the contingency of the norms that they shape. For instance, consider how socio-political conditions can easily alter a cultural group's laws or non-moral background beliefs and thereby alter their norms. Techniques of belief manipulation can be used to reinforce the status quo or dehumanize an inconvenient minority (Buchanan & Powell, 2018, Chapter 7). Or consider the rise of major world religions (e.g. Judaism, Christianity, and Islam) and their substantial and worldwide influence on moral norms and judgments. Such considerations strongly suggest that cultural moral pathways are fairly contingent.

Since cultural processes are fairly contingent and since biological evolution and innate biases cannot by themselves explain determinate moral content, do we have an answer already to our contingency question? Not yet. For we must also consider cultural evolution. Even if mundane cultural processes are fairly contingent, it could be that cultural evolutionary processes robustly constrain the content of human moral norms toward norms mostly like our own. After all, cultural evolution can explain certain population-level cultural trends and patterns in human life. Thus, any assessment of the contingency of human moral norms and judgments must address the contingency of cultural evolution.

What is cultural evolution? It has broad structural similarities to the process of biological evolution but involves cultural inheritance rather than genetic inheritance, and its trajectory is more culturally specific.
The process of cultural evolution happens when there is variation in the cultural traits (e.g. norms) of a population, when some cultural variants are selected or favored in accordance with some cultural selection pressure, and when, as a result, the favored variants are inherited ("vertically" from parents, "horizontally" from peers, or "obliquely" from the older generation) by means of various cultural transmission mechanisms such as social learning and imitation. Understanding the sources of cultural variation, the cultural selection pressures, and the cultural transmission mechanisms can enable us to understand and explain cultural trends and patterns (Henrich, 2017; Mesoudi, 2011; Mesoudi & Danielson, 2008).

Cultural evolution and biological evolution are distinguished by their inheritance and selection mechanisms. In biological evolution, biological traits are genetically inherited and selected because they enable their possessors "to have more babies" (i.e. to better transmit their genetic traits to the next generation by surviving and reproducing themselves or by promoting the survival and reproduction of genetically related individuals). In cultural evolution, cultural traits are culturally inherited and selected because they enable their possessors "to have more students" (i.e. to better culturally transmit their cultural traits to others). Cultural evolutionary success is measured not by reproductive success or how many biological offspring are


left but rather by how many cultural offspring are left, though the two types of success can surely correlate. Could cultural evolution have easily led humans to moral norms and judgments that are mostly false by our present lights? Having clarified our contingency question, next we clarify our answer.
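The variation, selection, and transmission loop described above can be made concrete with a toy simulation (purely illustrative; the variants "A" and "B" and the bias strength are invented for this sketch, not drawn from the chapter). A content bias that makes one variant slightly easier to adopt is enough for that variant to spread through oblique transmission:

```python
import random

random.seed(1)

def cultural_generation(population, conversion=0.1):
    """One round of transmission: each learner copies a randomly chosen
    cultural model, but a content bias converts variant 'B' to the more
    'attractive' variant 'A' with a small probability."""
    new_population = []
    for _ in range(len(population)):
        variant = random.choice(population)  # pick a cultural model to learn from
        if variant == "B" and random.random() < conversion:
            variant = "A"  # biased transmission favors A
        new_population.append(variant)
    return new_population

population = ["A"] * 10 + ["B"] * 90  # variant A starts rare
for _ in range(50):
    population = cultural_generation(population)
print(population.count("A"))  # A has spread to most of the population
```

Note that "success" here is measured entirely in cultural offspring: nothing in the simulation tracks biological reproduction, which mirrors the distinction drawn in the text.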

9.3 The Contingency Thesis: The Contingency of Cultural Evolution

Our contingency thesis:

Contingency of Cultural Evolution: Cultural evolution (i.e. unguided cultural evolution) could have easily led humans to moral norms and judgments that are mostly false by our current lights.

We must make three clarifications, which preempt misunderstandings and objections.

First, our contingency thesis is not that humans could have easily been led to moral systems completely contrary to our own with no substantial overlap. Rather, humans could have easily arrived at systems mostly contrary to our own and mostly false by our present lights. Thus, our thesis is perfectly compatible with the evolutionary robustness of some determinate moral norms (e.g. incest norms) and some basic moral values (e.g. pain is bad, cooperation is good). For instance, Clarke-Doane (2016) suggests that biological evolution would entrench the "core" moral value judgment that "killing our offspring is bad." This claim is perfectly compatible with our contingency thesis. However, the cultural history of infanticide suggests that even this core judgment is culturally alterable.

Which norms and judgments are contrary to our own and false by our present lights? Consider just the domain of harm and the nasty norms and judgments that have permitted (or encouraged) out-group homicide, infanticide, brutal forms of slavery, human torture, cruel punishment, honor killings, blood feuding, barbaric treatment of enemies and prisoners, harmful rituals (e.g. female genital mutilation), sexual abuse, family abuse (e.g. norms permitting a man to beat his wife and children), animal cruelty, and so on. Such appalling norms and judgments are crucially incompatible with our own and crucially false by our present lights rather than "approximately true." But cultural evolution could have easily led humans to such norms and judgments.

Second, the referent of "humans" in our thesis is the human population rather than us. That is, our thesis is not that actual human individuals (e.g. you and I) but rather that relevantly similar humans with the same cognitive faculties and same evolutionary history could have easily been led to moral norms and judgments mostly contrary to our own.
Thus, our thesis is perfectly compatible with certain views about the metaphysics of personal identity (e.g. genetic essentialism) that


would imply a different cultural evolution would have resulted in humans who are not numerically identical to us.

Third, our thesis is conditional on an important metaphysical assumption: if cultural evolution is "unguided" in the naturalistic or atheistic sense that there is no morally interested God guiding or constraining its direction, then it could have easily led humans to moral norms and judgments mostly contrary to our own. This naturalistic assumption is necessary for the cogency of our thesis because if a morally interested God is involved with cultural evolution, then its direction may be constrained. But if God is not involved at all, then it is fairly contingent.

Contingent in what sense? Stephen Jay Gould (1989) famously argued that macroevolution is historically contingent: if we could somehow rewind the tape of evolutionary history to critical junctures in the deep past and replay it, while varying the initial conditions somewhat, the macroevolutionary outcomes would probably have been very different. For instance, Gould argues that if the initial conditions at the end of the Cambrian era were just a little bit different, animal body plans and animal evolution would have been very different: there would be no vertebrates, no mammals, no humans, nor anything remotely like them. Since then, philosophers and scientists have pursued the following questions: Is evolution fairly contingent or robust, for example with respect to the evolution of intelligent human-like creatures? If so, in what conceptual sense? How can we tell? What would be the philosophical implications?

Contingency is relative to outcome: some outcomes are fairly contingent while others are robust. We are interested in the contingency of cultural evolution with respect to the outcome of determinate human moral norms and judgments mostly like our own, so we can set aside contingency questions regarding other outcomes.
But we should address the conceptual question: contingent in what sense? The historical contingency in mind is known in the philosophy of science literature as "sensitivity to initial conditions": the sensitivity of a historical outcome to counterfactual variation in the conditions or processes leading to it (Ben-Menahem, 2018; Turner, 2011). This sort of contingency is arguably what is at issue in the debate between Gould (1989) and Conway Morris (2003) regarding whether evolutionary history is contingent. It comes in degrees: an outcome of a process is more contingent the more sensitive it is to variation in the process. At one end of the spectrum, an outcome of a process is not contingent at all but rather maximally robust or stable if no amount of variation in the process would have made a difference to the outcome. At the other end of the spectrum, an outcome of a process is maximally contingent if the slightest variation or perturbation in the upstream conditions of the process would have made a difference to the downstream historical outcome.

For example, Bill Bryson suggests that you, dear reader, are astoundingly contingent:

Not only have you been lucky enough to be attached since time immemorial to a favored evolutionary line, but you have also been extremely—make that miraculously—fortunate in your personal ancestry. …[E]very one of your forebears on both sides has been attractive enough to find a mate, healthy enough to reproduce, and sufficiently blessed by fate and

9  The Contingency of the Cultural Evolution of Morality, Debunking, and Theism vs…

circumstances to live long enough to do so. Not one of your pertinent ancestors was squashed, devoured, drowned, starved, stranded, stuck fast, untimely wounded, or otherwise deflected from its life’s quest of delivering a tiny charge of genetic material to the right partner at the right moment in order to perpetuate the only possible sequence of hereditary combinations that could result—eventually, astoundingly, and all too briefly—in you. (2004, pp. 5–6)

This concept of contingency as sensitivity to initial conditions differs from other concepts. In metaphysics, contingency is traditionally contrasted with necessity, where a contingent outcome is a non-necessary outcome that might not have happened. But a contingent outcome in our sensitivity sense is one that might not easily have happened. Moreover, we must not confuse contingency with metaphysical indeterminism: the contingency of an outcome does not mean that the same initial conditions would produce a very different outcome but rather that counterfactual variation would do so. For instance, even if determinism is true the outcome of a coin toss is highly contingent: if the preceding physical states had been somewhat different, the outcome would have been different. Since “highly contingent processes can be perfectly deterministic” (Ben-Menahem, 2018, p.  48), our contingency thesis does not rest on a controversial indeterminism. We could also analyze contingency in modal terms or probabilistic terms, if so desired. Let us understand contingency in modal terms, though alternative and more technical analyses could be substituted. To assess whether an outcome of a process is fairly contingent, we must focus on a fairly “nearby” range of scenarios or possible worlds where there are relatively small variations in the process but the worlds are otherwise like our own. Would some possible worlds or scenarios in this nearby range feature a different outcome? If so, then the outcome of the process is fairly contingent. Accordingly, when assessing the Contingency of Cultural Evolution we must focus on a reasonably nearby range of worlds where there are slight variations in the cultural evolutionary processes that have affected the content of human moral norms and judgments. For instance, suppose that the circumstances of cultural evolution (e.g. 
the cultural or environmental conditions) had been somewhat different and thus the cultural selection pressures had been somewhat different (see Sect. 9.5 of this paper for the empirical details). However, we must hold fixed the physical laws of nature and the existence of relevantly similar humans who have the same cognitive faculties and evolutionary history that we have, at least until the point when the relevant cultural evolutionary processes are somewhat varied, which could be late in human history since cultural evolution is still going on. Thus, we should not have in mind distant worlds featuring the evolution of nonhuman aliens who develop radically different moral systems nor worlds where human evolution takes some radically different turn (e.g. where humans are endowed with different innate cognitive faculties). Rather, we must focus on a fairly nearby range of possible worlds like our own. Would some possible worlds or scenarios in this nearby range feature humans like us who are led by cultural evolution to different moral systems, indeed moral norms and judgments mostly contrary to our own? Is cultural evolution fairly contingent in this sense? We supply empirical motivations for thinking so in Sect. 9.5. But first, we should address the prior question: why does it matter?
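The claim that a process can be perfectly deterministic and yet highly contingent (sensitive to initial conditions) can be illustrated computationally. The sketch below uses the logistic map, a textbook chaos-theory example rather than one drawn from this chapter: the update rule is fully deterministic, yet a perturbation in the sixth decimal place of the initial condition produces widely divergent downstream outcomes.

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4: a standard,
# perfectly deterministic process that is nevertheless highly sensitive to
# initial conditions. (A textbook chaos example, not one from the chapter.)

def logistic_trajectory(x0, steps, r=4.0):
    """Return the deterministic orbit of the logistic map starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000, 50)
b = logistic_trajectory(0.400001, 50)  # a tiny perturbation of the initial condition
print(max(abs(x - y) for x, y in zip(a, b)))  # the orbits diverge widely
```

Rerunning either trajectory from the same initial condition reproduces it exactly; the divergence comes entirely from the counterfactual variation in the starting point, which is the sensitivity notion of contingency at issue here.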

M. Braddock

9.4 The Philosophical Implications: Debunking Arguments and Theism vs. Naturalism

First, consider contingency’s contribution to debunking arguments. Debunking arguments target secular “moral objectivists” or “moral realists” who accept the existence of objective moral truths, the basic reliability of our moral judgments, and atheism or agnosticism regarding the existence of God. Such arguments routinely take the following form (Street, 2006; Enoch, 2011, Chapter 7; Braddock, 2016). The reliability of our moral judgments can be understood as a strong correlation between our moral judgments and the moral truths. When we believe that something is wrong, it often is wrong. When we believe that something is right, it often is right. Why is that so? If the moral truths are objective, then this correlation is a striking or astonishing outcome that “cries out for explanation,” like the correlation between two student papers that are strongly similar in their content. Even if we suppose that the moral truths are necessary and could not have been different, our moral judgments could have easily been different. Evolutionary and cultural processes could have easily led humans to different moral judgments that do not accurately represent the necessary moral truths. The contingency of cultural evolution (and the mundane cultural processes of conventional history and the social sciences) motivates exactly this point and thus the need for an explanation. Out of all the false moral judgments we humans could have inherited from cultural processes, why did we inherit mostly true judgments? The objectivist could take our reliability to be a mere coincidence or accident or improbable outcome, but it is hard to believe that we are so lucky. Of course, coincidences do happen. But if a theoretical view is committed to postulating an astonishing coincidence, we should doubt the view, at least initially and especially if competing theoretical views carry no such implication.
And moral objectivism seems committed to viewing our moral reliability as an astonishing coincidence, whereas competing metaethical views carry no such implication. As Roger White puts the problem: “it appears ‘too good to be true’ that a random process should conveniently provide us with the remarkable ability to make accurate moral assessments. It is astonishing that this should come about. And astonishing events are always some grounds for suspicion” (White, 2010, p. 593). What the moral objectivist needs then is a high probability explanation of moral reliability that shows it not to be a mere coincidence (Baras, 2017). A high probability explanation is an explanation where the conditions doing the explaining (e.g. evolutionary and cultural processes) imply that the outcome (moral reliability) was probable or something we should expect to obtain. Can moral objectivists supply a good high probability explanation? The debunking argument says they cannot, and different versions of the argument give different reasons for thinking so, some more convincing than others. For instance, consider the main reason that Sharon Street (2006) gives in her “Darwinian Dilemma” for moral realism. First, Street claims that “the forces of natural selection have had a tremendous influence on the content of human
evaluative judgments”—not directly but indirectly by virtue of shaping our innate evaluative tendencies which have influenced the content of our judgments (2006, p. 113). Second, Street claims that moral objectivists cannot explain why our influential innate tendencies would “track” the moral truths, like a hound dog tracks its prey. According to Street, the only explanation available to them is the “tracking account” which says that our evaluative tendencies evolved because they tracked the moral truths. But Street argues that moral truth-tracking contributes nothing to the explanation of our evaluative tendencies but needless complexity and confusion. Since the tracking account is a bad explanation and yet the only one available, objectivists cannot explain moral reliability. There are two main problems with Street’s argument. First, her empirical claim is dubious. As we observed earlier, the dominant view in the empirical literature is that cultural processes play the main difference-making role when it comes to explaining the determinate content of the vast majority of human moral norms and judgments (Levy & Levy, 2020). Thus, even if our innate tendencies are off track, certain influential cultural processes could mediate and correct for their distorting influence. Second, Street is mistaken in taking the “tracking account” to be the only explanation available to moral objectivists. Even if our innate tendencies did not evolve because they favored true moral judgments, they may still in fact favor them, and objectivists might be able to explain why this is no coincidence. For instance, our innate evaluative tendencies together with cultural evolution might explain why we should expect humans to arrive at mostly true moral judgments (Copp, 2008). A more convincing reason for thinking objectivists cannot explain reliability is the garbage-in, garbage-out point, which is typically granted by objectivists (Braddock, 2016; FitzPatrick, 2015; Street, 2006). 
Debunkers observe that moral reasoning and deliberation (e.g. making our moral judgments more consistent and coherent, weeding out what appear to be morally irrelevant distinctions) cannot explain moral reliability, unless objectivists can explain the reliability of the moral judgments that are fed into our reasoning. For if our moral premises are mostly false, then we should doubt that reasoning from them will likely yield true moral conclusions. Garbage in, garbage out. So moral objectivists must explain why evolutionary and cultural processes would lead humans to mostly true moral starting points. Some moral objectivists float the suggestion that we know enough of these starting points through some capacity for “rational insight” (Shafer-Landau, 2012) or we “grasp” them through a complicated process of cultural training and critical reflection on our experience (FitzPatrick, 2015). But such objectivist “explanations” remain undeveloped and obscure. For instance, what is the capacity for rational insight and why would we expect unguided evolutionary and cultural processes to endow us with it? No good explanation seems available. Objectivists could insist that an explanation is likely to be forthcoming: just give them more time! However, the more we learn about the origins of human moral judgments, the more difficult it becomes to explain their reliability. If moral objectivists cannot explain the reliability of our moral judgments, what debunking conclusion follows? Consider two distinct conclusions.

First, the more brazen debunking arguments (Street, 2006) take moral skepticism to follow from the objectivity of the moral truths. The fact that we cannot explain our reliability should lead us to doubt our reliability. And if we should doubt our reliability, then we are no longer justified in holding our moral judgments. What follows from that? Well, if moral objectivism is committed to moral skepticism, then it becomes unacceptable: the view that the moral truths are objective but we have no awareness of them has no plausibility. In the evolutionary debunking literature, such skeptical conclusions drawn by debunkers are controversial. Second, the more modest debunking arguments (Enoch, 2011, Chapter 7) take the fact that objectivists cannot explain moral reliability to be one major theoretical cost for moral objectivism and thus one major (defeasible) reason to reject it. Whether this cost makes objectivism unacceptable depends on the overall debate between objectivism and its metaethical competitors (e.g. theistic moral objectivism, moral non-objectivism, moral nihilism). Moral objectivists respond to these debunking arguments in different ways,1 but the flagship response has been to try to come up with a good explanation of moral reliability that coheres with our best empirical sciences and explains why we should expect evolutionary and cultural processes to lead us to a reliable moral system. For instance, David Copp (2008) argues that biological and cultural evolutionary processes would tend to lead humans toward commonsense prosocial moral norms and thus true moral norms and judgments, given his “society-centered” metaethical theory. What contribution would the contingency of cultural evolution make to the debunking discussion? It would strengthen debunking arguments by showing more effectively that moral objectivists cannot explain moral reliability. 
The fact that cultural evolution could have easily led humans to moral norms and judgments that are mostly false by our current lights should lead moral objectivists to not expect moral reliability from cultural evolution. And if they should not expect reliability from cultural evolution, then they cannot invoke cultural evolution to explain reliability—that is, they cannot provide the high probability explanation needed.2 The mundane cultural processes that feature in conventional history and the social sciences cannot do the trick either, for they appear even more contingent than cultural evolution, as we suggested in Sect. 9.2. The success of debunking arguments could in turn contribute to the debate between theism and metaphysical naturalism. After all, debunking arguments target secular moral objectivism. If there is a morally interested God involved with the development of human morality, then it seems we could explain our moral reliability and avoid the debunking conclusion. Thus, debunking arguments could serve as arguments for theistic moral objectivism over its secular counterpart.

1 Schechter (2018) reviews and critiques different responses that moral objectivists have given to debunking arguments and suggests the most plausible response is to try to explain moral reliability.
2 See Braddock (2016) for this contingency-based critique of the explanations of moral reliability given by Copp (2008) and Enoch (2010). Barkhausen (2016) develops a similar point in a different way.

But even if
debunking arguments are unsuccessful, the contingency of cultural evolution would still make an independent contribution to the debate between theism and naturalism. Next, we show how. Second, consider contingency’s contribution to the debate between theism and naturalism. A deductive argument for theism intends to show that some observation entails theism, whereas an evidential argument for theism intends to show that some observation evidentially supports theism. How do we assess evidential support? One standard method—call it the method of comparative confirmation—for assessing whether some observation (or outcome) O serves as evidence for some hypothesis H1 over another hypothesis H2 is to assess whether we should expect O more given H1 than H2. The prior probabilities or intrinsic plausibility of H1 and H2 (their likelihood of being true prior to considering the observation) also matter to the evidential assessment and their relevance raises questions about how to assess them. But if the hypotheses are comparable in terms of their prior probabilities, then the fact that we should expect O more given H1 than H2 implies that O is evidence for H1 over H2. The stronger the difference in expectation, the stronger the evidential support O would provide for H1 over H2. This method is widely used in the sciences, humanities, law, and in everyday life when we are adjudicating between competing explanations. Philosophical debates frequently employ the method too. In the philosophy of religion, the method underwrites atheistic evidential arguments from evil, which say that suffering of various kinds is evidence for metaphysical naturalism over theism. It is also frequently used in design arguments for theism, which say that various features of our universe constitute evidence for theism over naturalism. We should observe that in these debates the prior probabilities of theism and naturalism are controversial. 
Invoking theoretical virtues such as simplicity, some philosophers rank naturalism higher, others theism, and others judge it a stalemate (Miller, 2018; Swinburne, 2004). The dialectical context should thus be understood as follows: if the prior probabilities of theism and naturalism are comparable (or unascertainable), then the fact that we should expect some data or observation O more given theism than naturalism would imply that O is evidence for theism over naturalism, and vice versa. In this context, the contingency of human morality’s content could serve as the main ingredient of an evidential argument for theism from the reliability of our moral judgments. The core of the argument is an evidential probability claim:

Pr(moral reliability | naturalism) is much lower than Pr(moral reliability | theism)
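The method of comparative confirmation can be made concrete with Bayes’ rule in odds form: posterior odds equal prior odds times the likelihood ratio. The sketch below is purely illustrative; the chapter makes no quantitative probability claims, and the numerical likelihoods are placeholder assumptions chosen only to show how the method works.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# The numbers below are illustrative placeholders; the chapter itself makes
# no quantitative probability claims.

def posterior_odds(prior_odds, pr_o_given_h1, pr_o_given_h2):
    """Update the odds of H1 over H2 after observing O."""
    bayes_factor = pr_o_given_h1 / pr_o_given_h2
    return prior_odds * bayes_factor

# Comparable priors (odds 1:1), as the chapter's dialectical context assumes,
# with O = moral reliability, H1 = theism, H2 = naturalism:
odds = posterior_odds(1.0, 0.9, 0.1)
print(odds)  # the observation shifts the odds toward H1 by the likelihood ratio
```

On these placeholder numbers the posterior odds favor H1 by a factor of nine; the strength of support tracks exactly how much more expected O is on one hypothesis than the other, which is the point the text makes informally.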

Let us outline the argument.

The Evidential Argument for Theism from Moral Reliability

1. Moral Objectivism: There are objective moral truths and we have a basic awareness of them.
2. Moral Reliability: Our moral judgments are basically reliable (i.e. mostly true).
3. Contingency of Cultural Evolution: Cultural evolution (i.e. unguided cultural evolution) could have easily led humans to moral norms and judgments that are mostly false by our current lights.

4. Thus, if naturalism is true, Moral Reliability is very surprising. [From 1 to 3]
5. If theism is true, however, Moral Reliability is not surprising.
6. Thus, Moral Reliability serves as evidence for theism over naturalism. [From 4 and 5]

Moral Objectivism is an eminently reasonable starting point. Moral truths are objective in the standard sense that they are not determined by what human cultures or individuals believe, accept, feel, and so on. For instance, just because some cultural majorities or individuals accept genocide as morally permissible—consider the Rwandan genocide of 1994—that does not make it so. There are good and widely discussed reasons for accepting the existence of objective moral truths, for example the absurdity and moral repugnance of the alternatives (e.g. moral relativism, moral nihilism). The claim that objective moral truths exist but we have no awareness of them has no plausibility. Moral objectivists invariably accept that we (ordinarily) have at least a basic awareness of the moral truths: we know or at least justifiably believe that some things are right and some things are wrong. For instance, it would be right to rescue a child drowning in a shallow pond, and it would be wrong to drown the child. Of course, this does not mean that we will do what is right, for there is a big difference between knowing what is right and doing what is right. To be clear: this basic moral awareness refers to the positive epistemic status of most of our moral judgments, not all or the vast majority of them. For we know that some of our moral judgments are distorted by cultural influences, biases, false background beliefs, and our own vices, even if we don’t know which ones. It is also worth observing that our moral awareness need not be consciously entertained nor verbally expressed, for implicit (or dispositional) moral awareness is common. We can also suppress our moral awareness in various ways.
For instance, moral self-deception seems common: even when we have evidence that something is wrong, our desires and biases can lead us to believe that it is permissible. Moral Reliability is also eminently reasonable. Moral objectivists invariably accept it, for example in their responses to debunking arguments. Why? Because otherwise they would be driven to moral skepticism or at least a deeply impoverished moral epistemology. In other words, if we accept that we have a basic moral awareness, then we should accept our basic moral reliability. For we would have little to no moral awareness if we doubted (i.e. denied or suspended judgment about) our basic moral reliability. Consider two reasons for thinking so. First, consider moral reasoning and the garbage-in, garbage-out point. If we doubt that our moral judgments are mostly true, then we should doubt that reasoning from them will likely yield true moral conclusions: reasoning from garbage starting points could easily just lead to more coherent packages of garbage. But if we should not trust moral reasoning to lead us to truth, then we are not justified in accepting its conclusions. If we cannot rely on moral reasoning to acquire moral awareness, then our prospects look bleak. Second, consider non-inferential justification. Moral objectivists invariably accept that many of our moral judgments are justified apart from moral reasoning. That is, we are initially or prima facie justified in holding many of our moral judgments on the basis of moral intuitions, moral seemings, or default entitlement. However, an
essential condition for this initial justification is that we have no defeaters, i.e. no evidence that removes the judgment’s initial justification. But doubt about the reliability of our moral judgments would constitute exactly such a defeater. If we doubt (i.e. deny or suspend judgment about) the reliability of our moral judgments, even though they seem to be true, then we are no longer justified in holding them, absent independent evidence for them. For motivation, consider a perceptual case. Suppose that we are drunk and come to doubt the reliability of our perceptual judgments. We should conclude then that even though our perceptual judgments seem to be true, we are no longer justified in holding them, absent independent evidence for them. We empirically motivate the Contingency of Cultural Evolution in the next section. In Sect. 9.2, we motivated the contingency of mundane cultural processes that feature in conventional history and the social sciences. Premise (4) of our argument reasonably follows from premises (1) to (3). Since cultural evolution could have easily led relevantly similar humans to moral judgments that are mostly false by our current lights, it is very surprising that we wound up with mostly true moral judgments. At least, it is surprising given naturalism, since naturalism claims that there is no God guiding cultural evolution or constraining its moral direction. Given theism, however, we should not be surprised by our basic moral reliability. We should expect it. C. Stephen Evans makes this suggestion: “if we suppose that the evolutionary process has been guided by God, who has as one of his goals the creation of morally significant human creatures capable of enjoying a relation with God, then it would not seem at all accidental or even unlikely that God would ensure that humans have value beliefs that are largely correct.” (Evans, 2018). Consider three additional reasons in support of this expectation. 
First, if God is morally good and loving and has created and sustained everything, including humans with their cognitive and moral faculties, then we should not be surprised if we find ourselves with a basic moral awareness. For such awareness is necessary for great moral and spiritual goods, such as the acquisition of virtue and loving friendship with God and other people. Given theism such goods are central ingredients of human flourishing. And a morally good and loving God would desire such goods for us, just as a father desires such goods for his children. Consider virtue. A morally good God would desire that humans become morally good themselves, which involves them not merely doing what they believe to be good (e.g. just) but doing what they know to be good. Consider the cognitive requirements of the virtues. The virtue of forgiveness requires moral awareness that someone has wronged you, the virtue of humility requires awareness of our own moral shortcomings, and so on. Consider love. A loving God would desire that we love him and others in a free and morally significant sense, which requires some awareness of God’s goodness, the moral worth of other people, and how we should respond to them. Since virtue and love are great goods, we should not be surprised if God endowed us with the basic moral awareness necessary to acquire these goods. Second, if God created humans in his own image and God has moral awareness, then we would also expect humans to have moral awareness too, albeit to a lesser degree.

Third, there are also tradition-specific reasons for expecting basic moral reliability given theism. For instance, in the Christian tradition the apostle Paul suggests in the New Testament that God has endowed humans (Jews and Gentiles) with a basic moral awareness or moral law “written on their hearts”:

When Gentiles who have not the law [i.e. the commandments given to the Jews] do by nature what the law requires, they are a law to themselves…They show that what the law requires is written on their hearts, while their conscience also bears witness and their conflicting thoughts accuse or perhaps excuse them on that day when, according to my gospel, God judges the secrets of men by Christ Jesus. (Romans 2:14–16, RSV)

In the Christian tradition, this basic moral awareness enables us to recognize our moral predicament—that is, our moral wrongdoing, our moral guilt, and the character gap between who we are and who we should be. This recognition enables us to repent and seek God’s forgiveness and the moral and spiritual transformation that we need to flourish. Given Christian theism (or theism), should we expect humans to have perfect moral awareness? It seems not. After all, theistic traditions explicitly recognize our moral epistemological limitations and shortcomings. Moreover, a good and loving God can be expected to endow humans with morally significant freedom, which would enable us to form our characters and make significant choices with consequences for ourselves and others. Thus, we should not be surprised to find humans making false moral judgments as a result of motivational bias, self-deception, willful moral ignorance, moral blindness due to bad character, the “noetic effects of sin”, and other moral epistemological defects. Of course, moral ignorance can also be non-culpable (e.g. the result of cultural indoctrination), and a good God would presumably not hold humans accountable for such ignorance. For such reasons, it would not be surprising given theism to find humans equipped with a basic moral awareness or an imperfect but reliable enough conscience. In contrast, naturalism provides no comparable reasons for expecting Moral Reliability. Thus, Moral Reliability serves as evidence for theism over naturalism.

We have shown that the contingency of the cultural evolution of morality would make major contributions to debunking arguments and the debate between theism and naturalism. It matters and is thus worth assessing. We empirically motivate it in the next section.

9.5 Empirical Support for the Contingency of Cultural Evolution

As a reminder, our contingency thesis is this:

Contingency of Cultural Evolution: Cultural evolution (i.e. unguided cultural evolution) could have easily led humans to moral norms and judgments that are mostly false by our current lights.

We have shown that much hangs on the thesis. So is it true? In this section, we supply three empirically-based reasons for thinking so.

Reason #1: The History of Nasty Moral Norms and Judgments

The history of nasty moral norms and judgments is evidence for the Contingency of Cultural Evolution. Given the framing assumption that cultural evolution has influenced determinate moral content, one reason to think that it could have easily led humans badly astray is that it has consistently led humans to nasty norms and judgments throughout the past. Have nasty moral norms and judgments been common? Unfortunately, yes. Historically we have witnessed markedly unjust norms that have not protected basic human interests and rights and have unjustifiably harmed vast multitudes of innocent persons. Many such norms and judgments still exist today. For instance, consider the domain of harm and all the cultures with norms and judgments permitting out-group homicide, infanticide, brutal forms of slavery, human torture, cruel punishment, honor killings, blood feuding, barbaric treatment of enemies and prisoners, sexual abuse, harmful rituals (e.g. female genital mutilation), family abuse (e.g. norms permitting a man to beat his wife and children), animal cruelty, and so on.

Reason #2: The Cultural Adaptiveness of Nasty Norms

Another reason for accepting the Contingency of Cultural Evolution is the fact that nasty norms can easily be culturally adaptive. A trait is culturally adaptive for a group if it helps the group successfully transmit their cultural traits to others. Accordingly, a system of moral norms is culturally adaptive for a group if it helps the group successfully transmit their norms to others. Such adaptive moral systems can evolve by various forms of cultural selection, such as cultural group selection (which we discuss later). Can nasty norms easily be culturally adaptive? Consider two reasons for thinking so.
First, the cultural adaptiveness of a moral norm depends on various cultural conditions and environmental conditions (e.g. preferences and non-moral background beliefs of the population), which often could have easily been different. For example, whether a particular prosocial norm is transmitted to others depends upon their preferences. But we know that human preferences are fairly malleable by institutions and ruling elites. As Henrich puts it: “People’s preferences and motivations are not fixed, and a well-designed program or policy can change what people find desirable, automatic, or intuitive” (2017, p. 330). We also know that the preferences of a population and their moral norms are often dependent upon their non-moral background beliefs, which may include beliefs about the will of the gods, external threats, or the humanity and capacities of other people. Such non-moral background beliefs often could have easily been different, which in turn would have made a difference to the content of the norms favored by cultural evolution. Second, the logic of cultural selection strongly suggests that cultural evolution could have easily favored nasty moral norms. For the logic is that cultural selection would favor whatever norms enhance the group’s ability to transmit its norms. And nasty norms could easily enhance that ability.

Consider norms regulating the treatment of other groups—for example, inter-group harm norms. We do not accept the norm that it is permissible (or required) for in-group members to cooperate and coordinate to kill neighboring out-group members and take their resources, but such a norm could have easily been culturally group adaptive. After all, far from threatening internal social stability, the acceptance of this norm could have easily enabled a group to increase its food supply, expand its territory, eliminate its competition, multiply laborers and sex partners, and thereby more successfully transmit and spread its norms. To motivate the same point, we could discuss nasty moral norms regarding other domains such as punishment, fairness, sexual behavior, and so on. Nasty norms in such domains could easily be culturally adaptive (Braddock, 2016). Buchanan and Powell support a similar point in their biocultural account of moral progress. Their account focuses on explaining one major aspect of moral progress, namely moral inclusiveness—the expansion of the scope of beings whom we think require moral consideration. Their thesis is that even though our contemporary inclusivist morality is compatible with our evolved moral psychology, it is still an “anomaly” from a selectionist perspective: “contemporary morality…is strikingly more inclusive than one would expect if selectionist explanations were the whole story, or even most of it” (2018, p. 153). That is, we should not expect inclusivist moral systems (mostly like our own) to evolve culturally. Rather, moral inclusiveness is best explained as a “luxury good” that is “only likely to be widespread and stable in highly favorable conditions,” namely certain complex economic, political, and social-epistemic conditions that have only recently obtained in human history and which can easily be eroded (2018, p. 188, Chapters 6 and 7).
Reason #3: Empirical Models of Cultural Evolution
The final reason for accepting the Contingency of Cultural Evolution is that it is supported by our most prominent empirical models. We describe three models of the cultural evolution of moral norms and motivate the contingency of the processes that feature in them. It is worth observing that these models are mutually compatible, could be integrated into a more complicated hybrid model, and could explain different target phenomena (e.g. different moral norms).

Cultural Group Selection
The most prominent selectionist explanation of moral norms is a cultural group selection account. We’ll sketch the broad explanation, allowing of course that it could be fleshed out in various ways. It is worth observing that some of its evolutionary claims about the distant past are highly speculative3 but also non-essential to the intuitive core of the explanation, which is that certain cultural traits are favored or selected because they help groups with those traits outperform other groups in intergroup competition.

3  E.g. see Clarke’s (2019) critique of the cultural group selection account sketched by Buchanan and Powell (2018), which they call the “received view” among evolutionary theorists who think that morality can be explained in selectionist terms.

9  The Contingency of the Cultural Evolution of Morality, Debunking, and Theism vs…

195

The cultural group selection explanation of moral norms runs as follows (Bowles & Gintis, 2011; Buchanan & Powell, 2018; Henrich, 2017; Richerson & Boyd, 2005; Tomasello, 2016). Rewind human evolutionary history to the late Pleistocene era (a few tens of thousands of years before the agricultural revolution circa 10,000 BCE). Ecological conditions during this time seem to have been rough: human social groups faced scarce resources, selfish tendencies, the absence of institutions regulating prosocial behavior, and competition with other groups (e.g. conflict over scarce resources).4 In this ecological context, effective social cooperation and coordination within groups would have been highly adaptive forms of behavior. Such in-group directed prosocial behavior would have enabled our human ancestors to collaboratively hunt large game, to coordinate in warfare and collective defense, to outcompete less prosocial groups in intergroup competition, and thus to more successfully culturally transmit their norms to others. Accordingly, biological and cultural selection pressures would have favored biological traits (e.g. innate biases) and cultural traits (e.g. cultural norms and institutions) that facilitated such in-group directed prosocial behavior. The selection processes would involve “gene-culture coevolution” or a feedback mechanism where culturally selected traits would favor the natural selection of certain biological traits, which in turn would affect cultural selection, and so on. In this speculative evolutionary context and given the evolution of various biological and cultural prerequisites (e.g. our moral capacity), certain moral norms emerged. Some norms would have been arbitrary but other norms would have been culturally group adaptive and thus would have been favored by cultural group selection.
Over time cultural group selection, in concert with other factors, would assemble complicated systems of moral norms, with the process accelerating later with the emergence of hierarchical societies, codified laws, organized religion, and diverse political, social, and religious institutions. This process of cultural group selection has continued throughout human history and is still going on today.

How did intergroup competition spread certain moral norms? Cultural group selection processes can take different forms because there are different forms of cultural competition between groups. Consider the following five processes of intergroup competition through which moral norms can be culturally group selected, which are summarized and motivated by the cultural evolutionary theorist Joseph Henrich:

War and raiding. The first and most straightforward way that intergroup competition influences cultural evolution is through violent conflicts in which some social groups—due to institutions that foster great cooperation or generate other technological, military, or economic advantages—drive out, eliminate, or assimilate other groups with different social norms.

4  The presumption of intergroup conflict and violence among ancestral humans is notably controversial (Fry, 2005). But the cultural group selection account does not essentially hinge on this presumption because there are various types of intergroup competition that do not involve conflict, as we discuss later.

Differential group survival without conflict. In sufficiently harsh environments, only groups with institutions that promote cooperation, sharing, and internal harmony can survive at all and spread. Groups without these norms go extinct or flee back into more amicable environments….

Differential migration. Since social norms can create groups with greater internal harmony, cooperation, and economic production, many individuals will be inclined to migrate into more successful groups from less successful ones….

Differential reproduction. Under some conditions, social norms can influence the rate at which individuals within a group produce children. Since children tend to share the norms of their group, over time the social norms of groups who produce children at faster rates will tend to spread at the expense of other social norms….

Prestige-biased group transmission. Because of our cultural learning abilities, individuals will be inclined to preferentially attend to and learn from individuals in more successful groups, including those with social norms that lead to greater economic success or better health… Since individuals cannot easily distinguish what makes a group more successful, there is a substantial amount of cultural flow that has nothing to do with success… (2017, pp. 167–169).

Now consider the contingency of cultural group selection: could such processes have easily led us humans to moral norms mostly contrary to our own? It appears so. For example, nasty inter-group harm norms could have easily been favored by cultural group selection processes, for by acting upon such norms groups could gain the competitive edge and outperform other groups in various kinds of group competition, such as war and raiding and prestige-biased group transmission.5 Also consider the many nasty norms of human history and the present, which we briefly reviewed earlier: many such norms could easily be favored by cultural group selection in virtue of their ability to enhance in-group social stability, coordination, and cooperation. Such considerations suggest that no stable bridge exists between cultural group selection models and moral systems mostly like our own.

The empirically-informed testimony of cultural evolutionary theorists confirms the easily-nasty contingency of cultural group selection. For example, after reviewing the various forms of intergroup competition, Joseph Henrich observes:

Over time, combinations of these intergroup processes will aggregate and recombine different social norms to create increasingly prosocial institutions. To be clear, by ‘prosocial institutions’ I mean institutions that lead to success in competition with other groups. While such institutions include those that increase group cooperation and foster internal harmony, I do NOT mean ‘good’ or ‘better’ in a moral sense. To underline this point, realize that intergroup competition often favors norms and beliefs that can easily result in the tribe or nation in the next valley getting labeled as ‘animals,’ ‘nonhumans,’ or ‘witches’ and motivate efforts to exterminate them. (2017, p. 169)

5  Also see Kitcher (2011, pp. 107–110) for a description of inter-group cultural competition that strongly suggests its contingency.
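The indifference of these competitive processes to moral content can be made vivid with a deliberately minimal toy simulation (our own illustrative sketch, not a model drawn from Henrich or the cultural evolution literature; the norm labels and payoff numbers are invented for illustration). Norms spread simply because the groups holding them win intergroup contests; what the norms prescribe plays no role in the dynamics.

```python
import random

def compete(groups, fitness, gens=200, seed=0):
    """Toy cultural group selection: each generation two randomly chosen
    groups compete; the group whose norm confers higher competitive
    fitness wins and converts the loser to its norm. Only the payoff in
    intergroup competition matters, not the norm's moral content."""
    random.seed(seed)
    groups = list(groups)
    for _ in range(gens):
        i, j = random.sample(range(len(groups)), 2)
        winner = groups[i] if fitness[groups[i]] >= fitness[groups[j]] else groups[j]
        groups[i] = groups[j] = winner
    return groups

# Hypothetical payoffs: an aggressive expansionist norm happens to confer
# an edge in raiding, so it spreads despite being nasty.
fitness = {"peaceful-sharing": 1.0, "expansionist-raiding": 1.2}
start = ["peaceful-sharing"] * 5 + ["expansionist-raiding"] * 5
end = compete(start, fitness)
print(end.count("expansionist-raiding"))
```

Swapping the two payoff numbers reverses the outcome: the peaceful norm then spreads for exactly the same structural reason, which is the sense in which the process is contingent on payoffs rather than moral content.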

Moreover, theorists of the model readily concede the existence of other influences on the cultural evolution of moral norms that can easily shift norms in nasty directions (e.g. special interests, class bias).

Content-Biased Cultural Evolution
Many evolutionary theorists and moral psychologists adopt a bias-based approach to help explain the content of certain moral norms (Nichols, 2004; Sperber, 1996; Sripada, 2008; Sterelny, 2012). Content-biased cultural evolution happens when a cultural variant (e.g. a moral norm) is more likely to be acquired and transmitted because of the intrinsic attractiveness of its informational content. Its attractiveness is in turn explained by our innate biases. Let innate biases6 refer to innate dispositions to preferentially acquire some cultural traits (e.g. some norms) rather than others. Evolutionary theorists standardly distinguish between two kinds of innate biases: content biases and context biases. Innate content biases are innate tendencies to culturally acquire certain norms rather than others because some aspect of their content makes them attractive to us. For example, our incest bias, involving an emotional aversion to incestuous sex, makes appealing to us a cultural moral norm that prohibits incest and thus makes it more likely that we will acquire and transmit such a norm to others (Lieberman, 2008). Innate content biases that may favor the cultural selection of certain moral norms include our incest bias, sympathy/empathy bias, disgust bias, prosocial bias, reciprocity bias, punishment bias, retributive bias, fairness bias, egalitarian bias, in-group/out-group biases, ethnocentric biases, partiality bias, and so on. Innate context biases (“copying biases”) are innate tendencies to culturally acquire certain norms rather than others not because of their content but because of who is transmitting the norms to us.
For example, our conformity bias is our tendency to acquire high-frequency norms, whatever norms are favored by the group majority (Richerson & Boyd, 2005).

Could content-biased cultural evolution have easily led humans toward moral systems contrary to our own? It strongly seems so. We could explore the contingency of each of these biases in detailed case studies. For instance, we have argued elsewhere that our sympathy/empathy biases and fairness bias could have easily led humans to harm norms and fairness norms contrary to our own (Braddock, 2016). In general, there is a Grand Canyon size gap between the claim that we have innate biases (e.g. prosocial biases) and the conclusion that we should expect such biases (e.g. through content-biased cultural evolution) to robustly lead humans to norms mostly like our own.

Other general considerations motivate the contingency of innate biases. First, consider the cultural contingency of human moral emotions. We know that many of

6  Disagreement exists in moral psychology about the nature of innate content biases. The process discussed here posits weaker biases than the controversial strong biases—namely, innate schematic moral principles—posited by so-called “moral grammar” models (Mikhail, 2011). For empirical reasons why we should think of innate biases as weak biases rather than strong biases, see Sripada (2008). For standard reasons for thinking innate biases cannot adequately explain the determinate content of most human moral norms and judgments, see section 2 of the present paper.

our innate biases consist of emotional dispositions and that our emotions can easily be modulated by cultural contexts (Prinz, 2016). For instance, culture can easily affect the triggers or elicitors of our emotions—for example, which sorts of behavior trigger disgust and which people are marked as in-group/out-group and thus which people trigger our empathy/sympathy or fail to do so. Second, the contingency of our innate biases is also motivated by the cross-cultural and historical diversity of moral norms.

The contingency of innate biases is standardly observed in the relevant empirical literature on innate biases. For instance, Chandra Sripada sums up this literature as follows:

Overall, innate biases…seldom determine in any precise or detailed way the content of the moral norms that they serve to favor. As a result, there will inevitably be a significant role for a number of other features, including the existing cultural context and, indeed, sheer happenstance, in determining the specific content of moral norms. (2008, pp. 337–338)

Context-Biased Cultural Selection
Context-biased cultural selection happens when a cultural variant (e.g. a norm) is more likely to be acquired or learned because of the present cultural context, namely because of who is transmitting the cultural trait to us. The most prominent context biases in the cultural evolution literature include our conformity bias and prestige bias. Our conformity bias makes us more likely to acquire the norms that are common or popular in the larger group. Our prestige bias makes us more likely to acquire the norms of prestigious individuals whom we hold in high regard. These biases do not favor norms with specific content but rather norms with whatever content happens to be favored by the existing cultural context at the time. Theoretical and empirical support exists for the reality of these biases and their influence on the cultural evolution of moral norms (Richerson & Boyd, 2005).

Could context-biased cultural selection processes have easily led humans toward moral systems contrary to our own? It appears so. Conformity bias and prestige bias will push us in the direction of internalizing nasty norms if these norms are endorsed by the larger group majority or prestigious individuals and groups. Since individuals and groups can easily gain and lose prestige (i.e., the deference and attention of others) for all sorts of highly contingent reasons, it is not difficult to see how prestige-biased transmission can easily lead humans to very different moral norms. And since norms can easily emerge and gain footholds in groups for all sorts of reasons (e.g. intergroup competition, self-interested elites taking power), conformity-biased cultural selection will be no more trustworthy: it will favor the prevalent norms whatever their content and tend to generate and sustain lasting norm homogeneity within groups.
The testimony of evolutionary theorists confirms the point because they regularly observe that context-biased selection can easily produce nasty norms (Richerson & Boyd, 2005).
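The conformity dynamic can be sketched in a few lines of code (a hypothetical illustration of frequency-dependent copying, not a model taken from Richerson and Boyd; the bias strength and population size are invented for illustration): whichever norm happens to start in the majority is driven to fixation, whatever its content.

```python
import random

def conformist_step(pop, strength=2.0):
    """One generation of conformist transmission: each learner adopts
    norm "A" with probability proportional to A's frequency raised to
    `strength`, so the majority norm is disproportionately copied."""
    n = len(pop)
    freq_a = pop.count("A") / n
    w_a = freq_a ** strength
    w_b = (1 - freq_a) ** strength
    p_a = w_a / (w_a + w_b)
    return ["A" if random.random() < p_a else "B" for _ in range(n)]

def final_freq(initial_freq_a, n=1000, gens=100, seed=0):
    """Run `gens` generations and return the final frequency of norm A."""
    random.seed(seed)
    n_a = int(n * initial_freq_a)
    pop = ["A"] * n_a + ["B"] * (n - n_a)
    for _ in range(gens):
        pop = conformist_step(pop)
    return pop.count("A") / n

# The same dynamic fixes whichever norm starts in the majority:
print(final_freq(0.6))  # norm A starts at 60% and spreads
print(final_freq(0.4))  # norm A starts at 40% and disappears
```

Nothing in the dynamic cares what “A” and “B” prescribe; the initial frequency alone settles the outcome, which is precisely the contingency at issue.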

9.6  Conclusion: Contingency and Its Philosophical Implications

Is the cultural evolution of morality fairly contingent? Could cultural evolution (i.e. unguided cultural evolution) have easily led humans to moral norms and judgments that are mostly false by our current lights? We supplied empirically-based reasons for thinking so. The support for the contingency of cultural evolution could be expanded by further review of the empirical literature. The more comprehensive case for the contingency of human moral norms and judgments could also be expanded by attention to the cultural processes that feature in conventional history and the social sciences. Even if cultural evolutionary processes have not influenced our moral norms and judgments, other cultural processes have done so and appear even more contingent. For instance, consider the rise of major world religions (e.g. Judaism, Christianity, Islam) and their substantial influence on human moral norms and judgments.

Does this contingency matter philosophically? We argued that it strengthens debunking arguments and shows that the reliability of our moral judgments is evidence for theism over metaphysical naturalism. Looking at the bigger picture, it appears that the historical contingency of other striking evolutionary, cultural, and cognitive outcomes could also have important philosophical implications. Even if it turns out that the emergence of life and intelligent human-like creatures are robust evolutionary outcomes (Conway Morris, 2003) and even if we should expect evolution to endow humans with reliable perceptual capacities, other cultural and cognitive outcomes may be fairly contingent and surprising. For example, how contingent is the content of our religious and philosophical judgments (Braddock, 2018)? How contingent is the content of our scientific theories (Kinzel, 2015)?
How contingent are our higher level philosophical, scientific, artistic, mathematical, and logical capacities (Crisp, 2016; Schechter, 2010)? Could evolutionary and cultural processes have easily led humans to different judgments, theories, and capacities? What would the philosophical implications of this contingency be? These matters deserve more attention.

Acknowledgements  Kind thanks for useful comments on this chapter go to Helen De Cruz, Johan De Smedt, Michael Rota, an anonymous reviewer, and participants of the conference “Evolutionary Ethics: The Nuts and Bolts Approach” at Oxford Brookes University (Oxford, UK, July 2018).

References

Baras, D. (2017). Our reliability is in principle explainable. Episteme, 14, 197–211.
Barkhausen, M. (2016). Reductionist moral realism and the contingency of moral evolution. Ethics, 126, 662–689.
Ben-Menahem, Y. (2018). Causation in science. Princeton, NJ: Princeton University Press.

Bowles, S., & Gintis, H. (2011). A cooperative species: Human reciprocity and its evolution. Princeton, NJ: Princeton University Press.
Braddock, M. (2016). Evolutionary debunking: Can moral realists explain the reliability of our moral judgments? Philosophical Psychology, 29, 844–857.
Braddock, M. (2018). An evidential argument for theism from the cognitive science of religion. In H. Van Eyghen, R. Peels, & G. van den Brink (Eds.), New developments in the cognitive science of religion: The rationality of religious belief (pp. 171–198). Dordrecht, the Netherlands: Springer.
Bryson, B. (2004). A short history of nearly everything. New York: Broadway Books.
Buchanan, A., & Powell, R. (2018). The evolution of moral progress: A biocultural theory. New York: Oxford University Press.
Clarke, E. (2019). The space between. Analyse & Kritik, 41, 239–258.
Clarke-Doane, J. (2016). What is the Benacerraf problem? In F. Pataut (Ed.), Truth, objects, infinity: New perspectives on the philosophy of Paul Benacerraf (pp. 17–43). Cham, Switzerland: Springer.
Conway Morris, S. (2003). Life’s solution: Inevitable humans in a lonely universe. Cambridge, UK: Cambridge University Press.
Copp, D. (2008). Darwinian skepticism about moral realism. Philosophical Issues, 18, 186–206.
Crisp, T. (2016). On naturalistic metaphysics. In K. J. Clark (Ed.), The Blackwell companion to naturalism (pp. 61–74). Hoboken, NJ: John Wiley & Sons, Inc.
Enoch, D. (2010). The epistemological challenge to metanormative realism. Philosophical Studies, 148, 413–438.
Enoch, D. (2011). Taking morality seriously: A defense of robust realism. Oxford, UK: Oxford University Press.
Evans, C. S. (2018). Moral arguments for the existence of God. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2018 Edition). https://plato.stanford.edu/archives/fall2018/entries/moral-arguments-god/
FitzPatrick, W. J. (2015). Debunking evolutionary debunking of ethical realism. Philosophical Studies, 172, 883–904.
Fry, D. P. (2005). The human potential for peace: An anthropological challenge to assumptions about war and violence. New York: Oxford University Press.
Gould, S. J. (1989). Wonderful life: The Burgess Shale and the nature of history. New York: W. W. Norton.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York: Vintage.
Henrich, J. (2017). The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton, NJ: Princeton University Press.
Joyce, R. (2007). The evolution of morality. Cambridge, MA: MIT Press.
Joyce, R. (2017). Human morality: From an empirical puzzle to a metaethical puzzle. In M. Ruse & R. Richards (Eds.), Cambridge handbook of evolutionary ethics (pp. 101–113). Cambridge, UK: Cambridge University Press.
Kinzel, K. (2015). State of the field: Are the results of science contingent or inevitable? Studies in History and Philosophy of Science, 52, 55–66.
Kitcher, P. (2011). The ethical project. Cambridge, MA: Harvard University Press.
Levy, A., & Levy, Y. (2020). Evolutionary debunking arguments meet evolutionary science. Philosophy and Phenomenological Research, 100, 491–509.
Lieberman, D. (2008). Moral sentiments relating to incest: Discerning adaptations from by-products. In W. Sinnott-Armstrong (Ed.), Moral psychology, volume 1: The evolution of morality (pp. 165–190). Cambridge, MA: MIT Press.
Mesoudi, A. (2011). Cultural evolution: How Darwinian theory can explain human culture and synthesize the social sciences. Chicago, IL: University of Chicago Press.
Mesoudi, A., & Danielson, P. (2008). Ethics, evolution and culture. Theory in Biosciences, 127, 229–240.

Mikhail, J. (2011). Elements of moral cognition: Rawls’ linguistic analogy and the cognitive science of moral and legal judgment. New York: Cambridge University Press.
Miller, C. (2018). The intrinsic probability of theism. Philosophy Compass, 13, e12523.
Nichols, S. (2004). Sentimental rules: On the natural foundations of moral judgment. Oxford, UK: Oxford University Press.
Prinz, J. (2016). Culture and cognitive science. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2016 Edition). https://plato.stanford.edu/archives/fall2016/entries/culture-cogsci/
Prinz, J. (2018). The history of moral norms. In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 266–278). New York: The Guilford Press.
Richerson, P. J., & Boyd, R. (2005). Not by genes alone: How culture transformed human evolution. Chicago, IL: University of Chicago Press.
Schechter, J. (2010). The reliability challenge and the epistemology of logic. Philosophical Perspectives, 24, 437–464.
Schechter, J. (2018). Explanatory challenges in metaethics. In T. McPherson & D. Plunkett (Eds.), The Routledge handbook of metaethics (pp. 443–458). New York: Routledge.
Shafer-Landau, R. (2012). Evolutionary debunking, moral realism and moral knowledge. Journal of Ethics and Social Philosophy, 7, 1–37.
Sperber, D. (1996). Explaining culture: A naturalistic approach. Cambridge, MA: Blackwell.
Sripada, C. (2008). Nativism and moral psychology: Three models of the innate structure that shapes the contents of moral norms. In W. Sinnott-Armstrong (Ed.), Moral psychology, volume 1: The evolution of morality (pp. 319–343). Cambridge, MA: MIT Press.
Sripada, C., & Stich, S. (2006). A framework for the psychology of norms. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind, volume 2: Culture and cognition (pp. 280–301). New York: Oxford University Press.
Street, S. (2006). A Darwinian dilemma for realist theories of value. Philosophical Studies, 127, 109–166.
Swinburne, R. (2004). The existence of God. Oxford, UK: Oxford University Press.
Tomasello, M. (2016). A natural history of human morality. Cambridge, MA: Harvard University Press.
Turner, D. (2011). Paleontology: A philosophical introduction. Cambridge, UK: Cambridge University Press.
White, R. (2010). You believe that just because…. Philosophical Perspectives, 24, 573–615.

Chapter 10

Morality as Cognitive Scaffolding in the Nucleus of the Mesoamerican Cosmovision

J. Alfredo Robles-Zamora

Abstract  The aim of this chapter is to investigate changes and continuities of human moral systems from an evolutionary and socio-historical perspective. Specifically, I will argue that these systems can be approached through the lens of non-genetic inheritance systems and niche construction. I will examine morality from a comparative socio-historical perspective. From this, I suggest that these inheritance systems are cognitively scaffolded, and that these scaffolds are part of the nucleus of the historical Mesoamerican cosmovision.

Keywords  Cosmovision · Niche construction theory · Scaffolding cognition · Sociohistorical approach · Non-genetic inheritance systems · Mesoamerican studies · Cultural change · Non-western morality · Historical processes · Integrative anthropology · Extended evolutionary synthesis

J. A. Robles-Zamora (*)
National Autonomous University of Mexico (UNAM), Mexico City, Mexico
Interdisciplinary Professional Unit in Energy and Mobility (UPIEM), National Polytechnic Institute (IPN), Mexico City, Mexico
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
J. De Smedt, H. De Cruz (eds.), Empirically Engaged Evolutionary Ethics, Synthese Library 437, https://doi.org/10.1007/978-3-030-68802-8_10

10.1  Introduction

The aim of this chapter is to investigate changes and continuities of human moral systems from an evolutionary and socio-historical perspective. Specifically, I will argue that these systems can be approached through the lens of non-genetic inheritance systems and niche construction. I here want to draw attention to the possibility of approaching morality not only as a socio-historical process, but also as a cognitive process that plays an important role in non-genetic inheritance systems that recognize the extended evolutionary synthesis (Fuentes, 2016b; Laland, Matthews, & Feldman, 2016). From this, I propose that these inheritance systems are

cognitively scaffolded, and that these scaffolds are part of the nucleus of the historical Mesoamerican cosmovision.

Since an important part of my argument is based on the concept of cosmovision of historical Mesoamerica, it is necessary to specify the scope of this work. It is not a historical description of beliefs about notions of good or evil in a non-Western society. Nor is it an anthropological interpretation of the myths and ritual practices of the peoples who inhabited this cultural area. What I aim to do here is to set out the theories relevant to my case study, which cohere with a research framework that makes it possible to study morality from an evolutionary point of view. I will use historical data and theoretical arguments to explore an empirical research program in the field of evolutionary ethics for Mesoamerican studies.

In order to study morality from an evolutionary perspective, then, two requirements are indispensable. The first is to show that morality can be understood as part of an evolutionary process; the second is to clarify in what sense morality can be approached from an empirical or naturalized perspective in socio-historical disciplines. Both requirements are synthesized in the following question: what kind of theoretical presuppositions should be assumed when we consider sociohistorical studies to understand the evolutionary processes that resulted in moral capacities? One way of approaching this question is to take a look at how moral systems developed in non-Western societies, in order to have a broader perspective on what sociohistorical processes are involved in the permanence and disruption of normative precepts in different human societies.1

The work has the following structure: in Sect. 10.2, I will examine morality as part of the cosmovision of human cultural groups in order to approach it as a cognitive process that is relational and practical, which we can integrate with an extended evolutionary perspective. The concept of cosmovision has a technical character that can be lost if one instead uses the more widely used term “worldview”. A central aspect of cosmovision is its relational and practical character, which allows it to be articulated with the cultural scaffolding approach (Portal, 1996); I will here use an ex professo notion developed for the case of Mesoamerica over more than a decade from ethnographic, historical and archaeological data (López Austin, 1996, 2001, 2004). Section 10.3 explains Mesoamerican cosmovision in the context of cognitive scaffolding and through the lens of non-genetic inheritance systems and niche construction. The final section will show the need for an integrative conceptual framework that allows for a transdisciplinary dialogue between sociohistorical disciplines and recent evolutionary approaches, to help address the problems that arise when we compare the Mesoamerican cosmovision with the standard evolutionary approach. The case of the Mesoamerican cosmovision is

1  I do not want to overlook the classic problems that evolutionary ethics has faced, i.e. the problem of the naturalistic fallacy and the open question argument; different authors have contributed approaches that allow these problems to be re-evaluated (Baeten, 2012; Martínez, 2003; Sober, 1998); from these, my intention here is to explore other ways that would enrich the empirical investigation of human morality from a transdisciplinary perspective.

interesting because it allows the articulation of a perspective that would not necessarily have to concentrate on standard evolutionary processes, such as selection and mechanisms of cultural inheritance modeled in analogy to genetic transmission.

10.2  Cosmovision, a Cognitive and Social Phenomenon

One of the many problems that academics face in seeking to study morality from a sociohistorical perspective is whether it is possible to treat morality and cultural systems as equivalent. A very influential way of understanding cultural systems is as a structure of symbols that allows the generation of social institutions; for example, art as an institution arises from what symbolizes the beautiful and the grotesque in one’s own experience, and religious institutions are sustained by a symbolic dichotomy between good and evil. Csordas (2013) shows that morality cannot be equated with cultural systems because, among other things, morality does not have its own moral institutions and therefore does not have systemic properties that are analyzable in terms of symbolic systems; rather, it has modalities of action in different cultural domains, where modalities of action are:

a flavor, a moment, a valence, an atmosphere, a dimension of human action that may be more or less pronounced, more or less vividly discernible, and more or less urgent across settings and situations but always present whenever humans are present (Csordas, 2013, p. 536).

Csordas proposes to understand morality not as a noun, but as an adjective that can be attributed to those modalities of action (among which the category of evil has a relevant role for an anthropology of morality). Csordas’ argument is relevant here because it allows us to understand morality as open processes that are not necessarily subordinated to symbolic systems or social institutions, but rather form part of human experience. This perspective allows us to break with the traditional dichotomous vision between cultural and biological phenomena, since it shows that both are part of human beings’ experiences. Two questions that I will try to answer are: how does this vision of morality fit in with contemporary evolutionary approaches? And how does a comparative socio-historical vision contribute to understanding the evolutionary role of morality?

A common way of approaching the morality of a cultural group is through its religious tradition, which has allowed scholars to recognize that religion in some way or another has an active role in evolutionary processes (Sloan, 2002; Teehan, 2006). Nevertheless, ethicists have argued (e.g. Rachels, 1986; Singer, 1993) that morality cannot be reduced to the study of religious traditions, as these often set their moral norms alongside other religion-specific elements, such as rituality or priesthood, which can lead to the problem of moral relativism. Flanagan and his collaborators (Flanagan, Sarkissian, & Wong, 2016) have argued that moral relativism is a problem that the naturalization of ethics must address; however, the extent and scope of this problem remain unclear, as does the appropriate response to it (i.e., is it to be dissolved, avoided or accepted?) (Miller, 2011).


J. A. Robles-Zamora

There are also intrinsic problems in analyzing morality from a sociohistorical perspective on religion. One of them is determining whether morality depends on religion and whether religious traditions are a direct product or a by-product of evolution (De Waal, 2013). In addition, an important characteristic of religion is adherence to a lineage of beliefs, which helps build the identity of the group that shares those beliefs and can also serve as a source of dissent from groups outside that lineage (Hervieu-Léger, 2005, p. 138). The problem is that it is possible to have moral ideas without adhering to a religious lineage: unlike religion, our moral faculties do not seem reducible to adherence to a lineage or religious tradition, but rather reflect a capacity that we suppose developed through different evolutionary processes; and, as Csordas (op. cit.) has pointed out, morality is a modality of action intertwined with human experience. Unlike the concept of religion, it seems to me that the notion of cosmovision can be understood as the condition of possibility that allows us to generate modalities of action for our moral capacities. For the purposes of this paper, I would like to point out that an important difference (though not the only one) between religion and cosmovision is that the latter does not depend on adherence to a lineage or belief system open to interpretation by a cultural group; rather, cosmovision is that which enables and conditions our experience and interpretation through practice. The emphasis on the practical and relational character of cosmovision is not meant to suggest that it contains only practices devoid of belief.
What is important about these practical characteristics is that, following Michael Polanyi’s (2009) notion, they involve forms of tacit knowledge that can be transmitted within and between generations and that influence extended evolutionary processes such as niche construction. In this sense, I will start from the assumption that cosmovision is rooted in cognitive processes, but without committing myself to a nativist vision, while emphasizing its experiential and practical conditions. To defend this point, I will draw on Ingold’s criticisms of the dichotomies under which the relationship between cognition and culture has been understood in sociohistorical disciplines. The anthropologist Tim Ingold has pointed out that some approaches in anthropology assume a distinction between the mental representations or conceptual scheme of a group and its cultural expressions. Instead, he considers that cognitive processes should be understood as processes involving the organism as a whole, without distinguishing between internal processes of the mind and external activities of the body. To show this, he uses a metaphor: the engine is involved in the locomotion of a car, yet locomotion cannot be reduced to the engine, as if the engine were a pilot; rather, locomotion involves all parts of the vehicle and even the surface over which it moves. In Ingold’s words:

Like locomotion, cognition is an accomplishment of the whole animal, it is not accomplished by a mechanism interior to the animal and for which it serves as a vehicle. There is therefore no such thing as an ‘intelligence’ apart from the animal itself, and no evolution of intelligence other than the evolution of animals with their own particular powers of perception and action (1993, p. 431).

10  Morality as Cognitive Scaffolding in the Nucleus of the Mesoamerican Cosmovision


For Ingold, thinking is doing, and therefore plans and mental representations must be understood as products of action, not as mediators or internal programs that guide action as behavioral output, an assumption that typically underpins the classical cognitivist approach (Ingold, 2014a). Before delving deeper into this point, it is important to note that a key aspect of Ingold’s work has been to argue that anthropology and evolutionary studies start from an erroneous assumption: that there is a separation between human beings and nature (Ingold, 1993, p. 442), from which it follows that there is a biological evolution on one side and a cultural history on the other (Ingold, 1991; most recently and in a similar vein Zeder, 2018). For Ingold (2000a, p. 168) such a separation is forced and involves treating nature as a depersonalized entity from which it is possible to distance oneself. Instead, he proposes that it is more appropriate to speak of an environment than of nature, since the two do not mean the same thing: while nature implies detachment, an environment is perceived directly, by interacting with its affordances and experimenting with the materials it provides. The concept of affordances is adapted from James Gibson’s work on visual perception: “The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill.” Gibson takes the English verb to afford and turns it into the noun affordance to point out “the complementarity of the animal and the environment” (Gibson, 1986, p. 127) in a way that cannot be expressed by any other English term.2 Ingold adopts the concept of affordances because it frames them as possibilities of action offered by the environment rather than as properties possessed by entities.
Building on this concept, he has argued that cognition is not limited to internal brain processes but is distributed among practices in accordance with the possibilities offered by the environment; therefore, the anthropological study of cognition cannot be reduced to the field of subjective mental representations that supposedly become objective in the field of culture. From this perspective, anthropology contributes to the study of cognition insofar as it recognizes that cognition takes place in direct participation with the environment, in a flow between practices and materials. From Ingold’s critique (2000a) of the dualist approaches to the relationship between cognition and culture, it is possible to recognize that the affordances that environments make possible are multiple; in this sense, receiving new sensory data does not imply an adjustment of a prior conceptual or representational scheme located in the minds of individuals, but rather an attunement to the environment and its affordances. One consequence of this view is that cognition is seen as a collective and historical process rather than from an individual viewpoint. From this perspective, cultural groups are practical communities in which perceiving and acting are part of the same process: dwelling, which is always an open and fluid process in which skills and modes of action are embodied.

2  An analysis of how Gibson refined the concept of affordance can be found in Jones (2003).


The idea that each culture has a way of perceiving, or attuning itself to, its environment has been endorsed by approaches from different disciplines, which recognize that the relationship between cognitive processes and the production of cultural traits need not be understood as the materialization of information previously processed in specific regions of the brain, but rather as a condition of possibility that is situated and embodied (Gabora, 2008; Martínez, 2014; Oyserman, 2015; Rietveld, Denys, & van Westen, 2018). By the term condition of possibility, I mean the ability of a cultural group to generate practical responses from the affordances offered in a niche. In other words, cognitive skills such as categorization, problem solving, memory or perception are conditioned by the cultural context, in which multiple developmental processes and interactions with the environment are involved. This is not a new idea, but it contributes to proposals that seek a perspective that integrates (and does not merely juxtapose) sociohistorical studies with cognitive ones (e.g. Bender & Beller, 2011, 2016; Kolodny, Feldman, & Creanza, 2018). In this sense, far from the classical definitions of cosmovision as literally a Weltanschauung relative to each cultural group, I want to emphasize its cognitive, relational and practical character, which makes possible the apprehension of affordances in an environment by human groups. The work of van Dijk and Rietveld (2017) emphasizes how affordances are apprehended in practical situations by social groups, in which coordination allows tuning in to the different affordances offered by the environment. An important concept in their approach is the form of life, understood as: “our actively maintained standing practices – our regular ways of doing things” (van Dijk & Rietveld, 2017, p. 5).
In the cosmovision of cultural groups, there are affordances that play an important role in the modes of action or practices that are relevant to understanding the evolutionary change of their moral systems. These affordances have a collective and historical character that must be considered. Although the notions of modalities of action (Csordas), dwelling (Ingold) and the form of life (taken up by van Dijk and Rietveld) are different, there are reasons to suppose that each points towards a more relational and practical vision of human experience, and that they can be articulated to delineate what is understood here by cosmovision. This implies recognizing that cosmovision is the condition of possibility that allows us to generate modalities of action for our moral capacities, one that is incorporated in a relational and dynamic cognitive process whose ontological nature, I believe, is still not clear to us (Chemero, 2009, chapter 7), but which we can suppose is the result of different evolutionary processes (Barrett, 2018). More importantly, the notion of cosmovision delineated by these notions allows us to break with the traditional dichotomies between mind and body or culture and nature. Therefore, cosmovision can be understood as a social phenomenon, but also as a cognitive phenomenon that is relational and practical (i.e., not merely mediated by input and output mechanisms for action); and in this sense, morality is an evolutionary phenomenon that is part of, and made possible by, the cosmovision.


10.3  Cosmovision and Scaffolding in Niche Construction

Research has suggested that the evolutionary development of modern cognitive skills cannot be reduced to adaptive genetic processes alone; other processes are also involved, including forms of transmission beyond genetics (Odling-Smee, Laland, & Feldman, 2003). It has been argued that there is a co-evolution between the environment and the organism that breaks with the traditional unidirectional vision, which allows taking into account cultural factors that may have been key to the development of current cognitive skills (Boyd, Richerson, & Henrich, 2011; Falk, 2016; Tomasello, 2011; Tomasello & Herrmann, 2010; Whiten, Ayala, Feldman, & Laland, 2017). One of the proposals discussed in evolutionary studies is Niche Construction Theory (NCT), developed by Odling-Smee, Laland and Feldman (Odling-Smee et al., 2003). Broadly speaking, NCT states that niche construction is a process by which organisms modify the environment they inhabit; these modifications alter the selection pressures acting on the organisms and, over time, affect the organisms that inhabit the changed environment (Laland et al., 2016). In this way organisms transmit an ecological inheritance to the next generation; in other words, modifications to the environment that shape the phenotypic characteristics by which an organism adapts3 to an ecological niche. For NCT, niche construction is a process that contributes reciprocally to variation and stability in the evolutionary changes of organisms and does not rest on a genetic basis alone. Recognizing niche construction as a relevant evolutionary process implies considering that some selective pressures are not independent of the activities of organisms; i.e., NCT holds that the niche construction activities of organisms modify or influence the process of natural selection (Baedke, Fábregas-Tejeda, & Vergara-Silva, 2020; Laland & O’Brien, 2010, p. 304; Zeder, 2018). It is increasingly recognized that natural selection in isolation is not enough to understand the complexity of evolutionary change, since it is articulated with other processes, including mutation, variation, and inheritance (Fuentes, 2016a; Thompson, 1999). Similarly, Jablonka and Lamb (2013), Tomasello (2011) and Fuentes (2017) have offered different kinds of arguments showing the importance of inheritance systems beyond genetics, which play an active role in the evolution of organisms. Among these inheritance systems, Jablonka and Lamb have presented examples and

3  Martínez (2003, p. 105) has explained that two notions of adaptation have been widely used in the philosophy of biology: one understood as functional responses of the organism to certain problems, and a second, historical notion, in which adaptation implies considering ontogenetic and ecological processes. The author also shows that the second meaning has traditionally been reduced to the first; in this sense, what the defenders of NCT understand by adaptation is closer to the second meaning, trying to avoid a kind of functional reductionism, since the notion of ecological inheritance implies taking into account that organisms modify niches and subsequent generations respond evolutionarily to that modification, without necessarily reducing the notion of adaptation to a functional sphere.


thought experiments that make behavioral and symbolic inheritance systems easier to understand. A relevant characteristic I would like to highlight is that both systems transmit information to the next generation without passing through genetics and, more importantly for the point I want to make, this allows us to postulate that understanding how these systems evolve requires a much broader perspective, to which sociohistorical disciplines can make an important contribution. For example, Zeder (2016, 2018) argues that the challenge facing defenders of the extended evolutionary perspective is to generate new empirical predictions that differ from those of the standard theory. In order to identify models that integrate extended processes and offer significant examples of evolutionary change, she presents the case of domestication, an evolutionary process with impacts at both the biological-genetic and the social-cultural level. Zeder points out that domestication is fundamentally a coevolutionary relationship in which domesticator and domesticated perform reciprocal niche-construction activities that benefit both. An important point Zeder makes is that a requirement for domestication is that both domesticator and domesticated can adjust their phenotype to the pressures of the environment without changing their genotype. This developmental plasticity must appear in both parties in order to meet the challenges of niche construction activities. In humans this plasticity is expressed through generating new behaviors, but also through adopting behaviors or adjusting old ones; consequently, in domestication we find not only genetic variability but also a cultural variability that has to be explained.
For the extended theory of evolution (including NCT), this variation could be explained as plastic responses to cultural behaviors that form part of inheritance systems and that, over time, were fixed in the genes by means of genetic adaptation. Variations then arise due to a kind of directionality in cultural processes, which are part of niche construction activities. Recognizing the evolutionary importance of inheritance systems other than genetics thus reinforces the idea that sociohistorical disciplines can (empirically) contribute to the study of morality. For the cultures that inhabited Mesoamerica, these inheritance systems are interesting because they allow us to propose that, despite the variety of cultures that occupied this area, the cosmovision contains a nucleus of practices and relations shared by these societies (López Austin, 2001; Taube, 2012), which remained more or less stable for at least two thousand years,4 and from which the origin of contemporary cultural characteristics can be traced among the societies that inhabit what was once that cultural zone.

4  Although the cultures that developed in the Mesoamerican area went through various environmental pressures, archaeological evidence indicates that the cultivation of maize (Zea mays) was relevant to the development of a religious and calendrical system around 900 B.C. (González Torres, 2007). The historical evidence of the colonial period, and the ethnographic evidence of the twentieth century, show great coherence with the archaeological evidence, which has made it possible to establish a continuum around which different cultural expressions are manifested (Ayala Falcón, 1995; López Austin, 2006).


The concept of cosmovision that I will deploy here has been developed in detail by the historian López Austin, who defines cosmovision as:

[a] historical fact that produces mental processes, which is immersed over a very long period of time, resulting in a systematic, relatively coherent set; it is made up of a collective network of mental acts with which a social entity in a given historical period aims to comprehend the universe in a holistic way (López Austin, 2015, p. 44; the emphasis is mine).

López Austin (2015) developed this concept in order to avoid the problems derived from applying universalist or particularistic criteria to explain the development of cultural practices in this area. One of these problems is the atomization of cultural diversity: cultural unity and diversity are reduced to absolute categories that do not allow us to consider the processes that produce cultural diversification. López Austin has also argued that a particularistic approach faces the problem of concentrating on excessively precise aspects of one culture, which would preclude comparison with other cultures for explanatory purposes. For the historian, this atomization oversimplifies the study of similarities and divergences between different cultures. In contrast, the notion of cosmovision proposed by López Austin makes it possible to understand unity/diversity as a non-universalist conceptual dyad that allows comparisons between cultures. The concept of cosmovision developed by López Austin has two characteristics that are of interest for the hypothesis presented here, namely that sociohistorical disciplines are relevant to understanding the evolution of morality insofar as cosmovision plays an important role in clarifying the relationship between morality and niche-construction activities. Firstly, as noted in the definition above, cosmovision starts from the recognition of the importance of mental processes with respect to the development of cultural phenomena. Specifically, López Austin (2015, p. 33) considers that cultural practices can be understood in terms of causal chains of mental acts encompassed in a collective network of mentalities. This is interesting because, as shown in the previous section, it emphasizes that cosmovision is a cognitive and cultural phenomenon that can be approached through its practical and relational characteristics from a non-dualistic view.
A second characteristic is that there is a nucleus (or hard core) of the Mesoamerican cosmovision. This nucleus is composed of a “collective network of mental acts that changes with extraordinary slowness” (López Austin, 2015, p. 35). Unlike other cultural phenomena that disappear over a relatively short time scale (for example, human sacrifice after European contact), López Austin argues that there are phenomena whose transformation is much slower, and which can therefore be said to form part of a nucleus that resists the impact of historical processes (Fig. 10.1). I propose that these nuclear characteristics can be understood in terms of cognitive scaffolds that are part of the construction of a niche. The notion of scaffolding is important to this research because it helps identify the kinds of interactions that are relevant to the construction of a niche over time. Although there is ample discussion about how the concept of scaffolding should be understood (Caporael, Griesemer, & Wimsatt, 2014; Mehri & Amerian, 2014; Shvarts & Bakker, 2019), Wimsatt and Griesemer (2007) have pointed out the


Fig. 10.1  The nucleus model of the Mesoamerican cosmovision. In the center are the scaffolds that are part of the inheritance systems that make possible the modalities of action. Based on and adapted from the original idea of López Austin (2015, p. 35)

importance of understanding scaffolding as a support that allows the generative entrenchment of different kinds of processes, agents and materials, enabling the development of skills through a wide range of transmission channels. This generative entrenchment also implies a great variety of processes that cannot be reduced to a single perspective, but are complemented by biological, sociohistorical and cognitive perspectives. Importantly, these authors consider the construction of scaffolding to be a special type of niche construction, one that allows us to uncover the ontogenetic processes involved in this kind of construction. For Caporael (2003) and Wimsatt and Griesemer (2007), scaffolding construction involves the repeated assembly of relationships and practices that, in a specific context, develop inheritance systems which serve as scaffolding for members of a group and for other social groups. In a similar vein, López Austin emphasizes that the nucleus of the cosmovision is not an absolute entity, but has relative levels, in which common elements scaffold practices in other regions and in other social groups. Some examples of nuclear elements are: (a) complementary binary opposition; (b) conceiving a geometry of the cosmos; and (c) belief in the possibility of mystical journeys. A fundamental aspect of the notion of nucleus is that: “The organization of the components in the system, the insertion and adjustment of innovations, and the recomposition of the system after the dissolution or loss of elements depend on the nucleus” (López Austin, 2015, pp. 35–36). An example of the relationships between niche construction activities, scaffolding and cosmovision can be found in ethnographic works on the indigenous and mestizo peoples that inhabit different regions of what was once Mesoamerica.


Despite the temporal distance from the ancient religious practices prior to European contact, the cosmovision still makes possible relations and practices immersed in the agricultural calendar, sowing techniques and types of cultivation. Based on the ethnographic reports of Boege (1988), Portal (1986) and Galinier (1987), López Austin (2016) shows how peasants from different regions and ethnic groups maintain or cancel social relations with the sowing field on the basis of a morality grounded in the transition between the negative and the positive. In their cosmovision these poles are not opposites, but complementary (nuclear element [a]); accordingly, the field is an entity that allows the growth of maize, a life-giver, but it is also an entity that consumes the bodies deposited in it. So, for people who belong to the Mesoamerican tradition, agriculture is a transitory activity that occurs between opposite but complementary poles. Thus, while Mazatec peasants give offerings to the chikones (entities seen as the owners of the land) in order to carry out agricultural activities, because the land is “wounded” during planting, Otomi peasants are suspicious of “foreign” crops, such as sugarcane (Saccharum), which is considered the opposite of corn: a negative entity that embodies the dispossession of the land and alcoholism. In both cases, cultivation techniques, the management of sowing times and relations with the field are mediated by the myths, rituals and ecological inheritance provided by previous generations. In the cultures of the Mesoamerican tradition, we find moral systems that guide not only behavior in society, but also the ways one should engage with the environment. These systems have persisted despite the evangelization process; they are scaffolded in such a way that they are flexible enough to generate and maintain relational and practical elements (affordances) that are relevant in the construction of a niche.
Other authors have highlighted the practical and relational character of scaffolding, which allows for the establishment of relationships with different kinds of resources in a cultural or social niche and is useful for the purpose of this paper (Estany & Martínez, 2014; Martínez, 2016; Murphy, 2015). For example, Sterelny (2003) understands niches as particular forms of interaction with the environment by which living organisms meet their needs. Niche construction proceeds through the development of scaffolding that engages us with those resources. Broadly speaking, the thesis of scaffolded cognition postulates that there are biological processes that organisms offload into the environment to facilitate their functioning; among these are various cognitive processes, such as memory or attention (Wimsatt & Griesemer, 2007). In the example mentioned above, understanding how people of the Mesoamerican tradition have offloaded biological processes into the environment requires taking into account the nuclear elements of their cosmovision (e.g. McClung de Tapia & Martínez-Yrizar, 2017). An important aspect of the nucleus of the cosmovision is that it has a much longer temporal dimension than the rest of the components of the cosmovision, which makes its components more resistant to change than those that are “far away” from that hard core. The analysis of myths, the archaeological record, ethnoarchaeological analogies, historical sources and ethnographic reports (among other techniques and methodologies) can be considered sources for the study of


morality from an evolutionary perspective, insofar as it is recognized that the scaffolded or nuclear elements of the cosmovision play an important role in the processes of niche construction, which are constrained by systems of inheritance beyond genetics.

10.4  The Epistemological Challenge

I have tried to show that there are theories congruent enough to establish a line of argument that allows us to account for the evolution of morality in the cultural area of what was once Mesoamerica. However, the exposition of this congruence is not without its problems. Although I cannot go into all of them in depth, I want to draw attention to one in particular: the problem of reductionism. In several works Ingold has shown that studying different cultural features in isolation leads to a series of distinctions that are inappropriate for anthropological research. Among these distinctions are the contrast between history and evolution (Ingold, 1991), the environment treated as unrelated to material culture (Ingold, 2001, 2007), and technology seen in isolation from society (Ingold, 1993). The problem pointed out by Ingold, and more recently by Fuentes (2016a), is that biological and cultural phenomena tend to be approached in a narrow and reductionist way, as happens with some approaches to cultural evolution (Gayon, 2016; Martínez, 2014). Mesoudi, Whiten and Laland (2006), for example, have argued that studies in social or cultural anthropology carry little demonstrative weight because of their lack of rigor in identifying quantitative data and developing theories. From this perspective, sociohistorical studies of cosmovision would contribute little to the study of cultural evolution and morality. On the other hand, an integrative anthropology such as that proposed by Fuentes (2018) holds that the extended synthesis requires socio-cultural anthropology, as Zeder (2018) has also stated.
Fuentes (2015) has pointed out that the evolutionary studies of culture carried out in recent decades concentrate almost exclusively on functional factors, such as the relationship between economy and ecology, or between cognitive and energetic capacities, with the result that most of these works start from a narrow position regarding culture as an evolutionary phenomenon. The reductionism being criticized does not consist in reducing phenomena to smaller, more manageable elements in order to study them, but in considering that a single unified perspective is necessarily required to have an evolutionary vision of cultural phenomena. Fuentes has also argued that a problem inherent in this narrowness is that there is no common evolutionary framework in which anthropological studies, such as archaeology, biological


anthropology, cultural anthropology and paleoanthropology, can be articulated to break with the reductionist approach maintained in this type of study.5 The work of Ingold (2000b, 2014b) and Fuentes (2013, 2015) converges in showing that a theory of cultural evolution must consider the cognitive faculties involved in the transformation and selection of cultural traits. Separately, both authors propose a point of view in which cognition is seen as a relational process that should not be isolated but located in a cultural environment or niche. Ingold’s and Fuentes’s position is not trivial, because if sociohistorical disciplines are to contribute to the field of evolutionary ethics, an integrative and non-reductionist conceptual framework is required in order to generate a transdisciplinary dialogue between extended evolutionary theory and the cognitive sciences that focus on cultural phenomena. Therefore, what has been said up to this point is not intended as a simple exchange of concepts, but rather as a proposal to integrate different concepts that may prove fertile (though no less problematic) for dialogue with other disciplines within the framework of evolutionary ethics. In other words, the epistemological challenge consists not only in developing integrative methodologies that allow the articulation of information from different kinds of sources (archaeological, ecological, historical, ethnographic and so on), but also in developing new ontological commitments regarding the evolution of morality, in which the boundaries between the biological and the cultural are blurred.

10.5  Conclusions

One consequence of modeling the nucleus of the Mesoamerican cosmovision as a cognitive scaffold is that it allows for the integration of cultural and biological aspects of the development of past societies without falling into cultural or biological determinism. This point is important because it accommodates the study of complex forms of heredity that need not be reduced to genetics (cf. Müller, 2017). In this sense, I conclude that the study of morality, at least in the case of Mesoamerican cultures, requires considering certain cultural practices from a holistic perspective. I have tried to show that there are reasons to consider morality an evolved phenomenon that is part of the cosmovision (at least for cultures of the Mesoamerican tradition). The concept of cosmovision has been introduced because it can be understood as a phenomenon that is at once social and cognitive, with relational and practical aspects that allow us to understand morality from a perspective in which we can identify scaffolded features that generate affordances within the environment.

5  Unlike other parts of the world where archaeology is considered part of the history of art, in the academic tradition of the Americas, archaeology is part of anthropology.


J. A. Robles-Zamora

If we recognize that those relational and practical aspects, which are scaffolded in the nucleus of the cosmovision, are relevant to explaining the evolution of morality, then we must also recognize that the relational and practical character of the cosmovision gives it a significant role in niche construction activities. There is extensive documentation showing how the cosmovision is involved in the ecological activities that the cultures of the Mesoamerican tradition have developed and continue to develop, among them activities that dramatically reshape the environment (cf. López Austin & López Luján, 2017; McClung de Tapia, 2012; Miriello et al., 2011; Reese-Taylor, 2012; Zurita Noguera, Ortiz, & Martínez-Yrizar, 2019). If the cosmovision plays an important role in niche construction, then an integrative perspective is required, one that generates a holistic epistemology (including socio-historical perspectives) and evaluates new ontological commitments in order to understand the role of morality in human evolution. Therefore, if the elements scaffolded in the nucleus of the cosmovision, which include moral aspects, are important for an evolutionary explanation, then it is desirable to have an integrative perspective that recognizes the value of information from socio-historical studies in acknowledging the active role of morality in evolution. As I have stated before, López Austin developed the concept of cosmovision with the conviction of avoiding the problems derived from applying either universalist or particularist criteria to explain the development of cultural practices in this area, and for that reason it has been proposed here as a concept that can be integrated into evolutionary ethics.
López Austin argues that reductionisms oversimplify the study of similarities and divergences between different cultures, and in this sense he proposes to understand unity/diversity as a non-universal conceptual dyad that allows for comparative studies. Rescuing cultural unity/diversity implies recognizing the practical and relational character of the cosmovision. With this I do not intend to suggest that evolutionary ethics is reduced to the study of the cosmovision as we have understood it here, but rather that taking the cosmovision into account allows us to understand what kind of sociohistorical processes are involved in the non-genetic inheritance systems that are relevant to evolutionary ethics. One way of approaching the evolutionary origin of morality is to investigate non-Western cosmovisions through a comparative and historical method. The notion of morality developed in the context of ancient Mesoamerican cultures can serve as an example to suggest that certain cultural practices serve as scaffolding for the development of a morality within non-genetic inheritance systems in the process of niche construction.

10  Morality as Cognitive Scaffolding in the Nucleus of the Mesoamerican Cosmovision


References

Ayala Falcón, M. (1995). La escritura, el calendario y la numeración. In L. Manzanilla & L. López Luján (Eds.), Historia antigua de México. Vol. III: El horizonte Posclásico y algunos aspectos intelectuales de las culturas mesoamericanas (pp. 383–417). Mexico: INAH-UNAM-Miguel Ángel Porrúa Editores.
Baedke, J., Fábregas-Tejeda, A., & Vergara-Silva, F. (2020). Does the extended evolutionary synthesis entail extended explanatory power? Biology and Philosophy, 35(1), 20. https://doi.org/10.1007/s10539-020-9736-5
Baeten, E. (2012). Another defense of naturalized ethics. Metaphilosophy, 43(5), 533–550. https://doi.org/10.1111/j.1467-9973.2012.01765.x
Barrett, L. (2018). The evolution of cognition: A 4E perspective. In A. Newen, L. De Bruin, & S. Gallagher (Eds.), The Oxford handbook of 4E cognition (Vol. 1, pp. 297–320). https://doi.org/10.1093/oxfordhb/9780198735410.013.38
Bender, A., & Beller, S. (2011). The cultural constitution of cognition: Taking the anthropological perspective. Frontiers in Psychology, 2(April), 1–6. https://doi.org/10.3389/fpsyg.2011.00067
Bender, A., & Beller, S. (2016). Probing the cultural constitution of causal cognition – A research program. Frontiers in Psychology, 7(February), 1–6. https://doi.org/10.3389/fpsyg.2016.00245
Boege, E. (1988). Los mazatecos ante la nación. Contradicciones de la identidad étnica en el México actual. Mexico: Siglo XXI Editores.
Boyd, R., Richerson, P. J., & Henrich, J. (2011). The cultural niche: Why social learning is essential for human adaptation. Proceedings of the National Academy of Sciences, 108(Supplement 2), 10918–10925. https://doi.org/10.1073/pnas.1100290108
Caporael, L. R. (2003). Repeated assembly. In S. Schur & F. Rauscher (Eds.), Evolutionary psychology: Alternative approaches (pp. 71–89). Dordrecht, the Netherlands: Kluwer.
Caporael, L. R., Griesemer, J. R., & Wimsatt, W. C. (2014). Developing scaffolds in evolution, culture, and cognition. Cambridge, MA: MIT Press.
Chemero, A. (2009). Radical embodied cognitive science. Cambridge, MA: MIT Press.
Csordas, T. J. (2013). Morality as a cultural system? Current Anthropology, 54(5), 523–546. https://doi.org/10.1086/672210
De Waal, F. (2013). The bonobo and the atheist: In search of humanism among the primates. New York: W. W. Norton.
Estany, A., & Martínez, S. (2014). 'Scaffolding' and 'affordance' as integrative concepts in the cognitive sciences. Philosophical Psychology, 27(1), 98–111. https://doi.org/10.1080/09515089.2013.828569
Falk, D. (2016). Evolution of brain and culture: The neurological and cognitive journey from Australopithecus to Albert Einstein. Journal of Anthropological Sciences, 94, 99–111. https://doi.org/10.4436/JASS.94027
Flanagan, O., Sarkissian, H., & Wong, D. (2016). Naturalizing ethics. In K. J. Clark (Ed.), The Blackwell companion to naturalism (pp. 16–33). https://doi.org/10.1002/9781118657775.ch2
Fuentes, A. (2013). Evolutionary perspectives and transdisciplinary intersections: A roadmap to generative areas of overlap in discussing human nature. Theology and Science, 11(2), 106–129. https://doi.org/10.1080/14746700.2013.780430
Fuentes, A. (2015). Integrative anthropology and the human niche: Toward a contemporary approach to human evolution. American Anthropologist, 117(2), 302–315. https://doi.org/10.1111/aman.12248
Fuentes, A. (2016a). La evolución es importante pero podría no ser lo que pensamos. Cuicuilco, 23(65), 271–292.
Fuentes, A. (2016b). The extended evolutionary synthesis, ethnography, and the human niche: Toward an integrated anthropology. Current Anthropology, 57(Supplement 13), 13–26. https://doi.org/10.1086/685684
Fuentes, A. (2017). Human niche, human behaviour, human nature. Interface Focus, 7(5), 1–13. https://doi.org/10.1098/rsfs.2016.0136
Fuentes, A. (2018). Towards integrative anthropology again and again: Disorderly becomings of a (biological) anthropologist. Interdisciplinary Science Reviews, 43(3–4), 333–347. https://doi.org/10.1080/03080188.2018.1524236
Gabora, L. (2008). The cultural evolution of socially situated cognition. Cognitive Systems Research, 9(1–2), 104–114. https://doi.org/10.1016/j.cogsys.2007.05.004
Galinier, J. (1987). Pueblos de la sierra madre: Etnografía de la comunidad otomí. Mexico: Instituto Nacional Indigenista-CEMCA.
Gayon, J. (2016). Evolución cultural: una apreciación general. In R. Gutiérrez Lombardo, J. Martínez Contreras, & A. Ponce de León (Eds.), Cultura y evolución (pp. 1–15). Mexico: Centro de Estudios Filosóficos, Políticos y Sociales Vicente Lombardo Toledano.
Gibson, J. J. (1986). The ecological approach to visual perception. New York: Psychology Press-Taylor & Francis Group.
González-Torres, Y. (2007). Notas sobre el maíz entre los indígenas mesoamericanos antiguos y modernos. Dimensión Antropológica, 41, 45–80. Retrieved from http://www.dimensionantropologica.inah.gob.mx/?p=1716
Hervieu-Léger, D. (2005). La religión, hilo de la memoria. Spain: Herder.
Ingold, T. (1991). Evolución y vida social. Mexico: Editorial Grijalbo-CONACULTA.
Ingold, T. (1993). Tool-use, sociality and intelligence. In K. R. Gibson & T. Ingold (Eds.), Tools, language and cognition in human evolution (pp. 429–445). Cambridge, UK: Cambridge University Press.
Ingold, T. (2000a). Culture, perception and cognition. In The perception of the environment: Essays on livelihood, dwelling and skill (pp. 157–171). London: Routledge.
Ingold, T. (2000b). Making culture and weaving the world. In P. M. Graves-Brown (Ed.), Matter, materiality and modern culture (pp. 50–71). London: Routledge.
Ingold, T. (2001). From complementarity to obviation: On dissolving the boundaries between social and biological anthropology, archaeology, and psychology. In S. Oyama, P. E. Griffiths, & R. Gray (Eds.), Cycles of contingency: Developmental systems and evolution (pp. 255–279). Cambridge, MA: MIT Press.
Ingold, T. (2007). Materials against materiality. Archaeological Dialogues, 14(1), 1–16. https://doi.org/10.1017/S1380203807002127
Ingold, T. (2014a). Making and growing: An introduction. In E. Hallam & T. Ingold (Eds.), Making and growing: Anthropological studies of organisms and artefacts (pp. 1–24). Burlington, VT: Ashgate Publishing Company.
Ingold, T. (2014b). Religious perception and the education of attention. Religion, Brain and Behavior, 4(2), 156–158. https://doi.org/10.1080/2153599X.2013.816345
Jablonka, E., & Lamb, M. (2013). Evolución en cuatro dimensiones. Buenos Aires: Editorial Capital Intelectual.
Jones, K. S. (2003). What is an affordance? Ecological Psychology, 15(2), 107–114. https://doi.org/10.1207/S15326969ECO1502_1
Kolodny, O., Feldman, M. W., & Creanza, N. (2018). Integrative studies of cultural evolution: Crossing disciplinary boundaries to produce new insights. Philosophical Transactions of the Royal Society B: Biological Sciences, 373, 20170048. https://doi.org/10.1098/rstb.2017.0048
Laland, K. N., & O'Brien, M. J. (2010). Niche construction theory and archaeology. Journal of Archaeological Method and Theory, 17(4), 303–322. https://doi.org/10.1007/s10816-010-9096-6
Laland, K., Matthews, B., & Feldman, M. W. (2016). An introduction to niche construction theory. Evolutionary Ecology, 30(2), 191–202. https://doi.org/10.1007/s10682-016-9821-z
López Austin, A. (1996). La cosmovisión mesoamericana. In S. Lombardo & E. Nalda (Eds.), Temas mesoamericanos (pp. 471–507). Mexico: INAH-CONACULTA.
López Austin, A. (2001). El núcleo duro, la cosmovisión y la tradición mesoamericana. In J. Broda & F. Báez-Jorge (Eds.), Cosmovisión, ritual e identidad de los pueblos indígenas de México (pp. 47–65). Mexico: CNCA-FCE.
López Austin, A. (2004). La composición de la persona en la tradición mesoamericana. Arqueología Mexicana, 65, 30–35.
López Austin, A. (2006). Los mitos del Tlacuache. Mexico: UNAM-Instituto de Investigaciones Antropológicas.
López Austin, A. (2015). Sobre el concepto de cosmovisión. In A. Gámez Espinosa & A. López Austin (Eds.), Cosmovisión mesoamericana: Reflexión, polémicas y etnografías (pp. 17–51). Mexico: COLMEX-FCE.
López Austin, A. (2016). El conejo en la cara de la luna: Ensayos sobre mitología de la tradición mesoamericana. Mexico: Editorial Era.
López Austin, A., & López Luján, L. (2017). Monte Sagrado-Templo Mayor: El cerro y la pirámide en la tradición religiosa mesoamericana. Mexico: IIA-UNAM & INAH.
Martínez, M. (2003). La falacia naturalista y el argumento de la pregunta abierta. Universitas Philosophica, 40, 65–88.
Martínez, S. F. (2014). Technological scaffoldings for the evolution of culture and cognition. In L. R. Caporael, J. R. Griesemer, & W. C. Wimsatt (Eds.), Developing scaffolds in evolution, culture, and cognition (pp. 249–263). Cambridge, MA: MIT Press.
Martínez, S. F. (2016). Cultura material y cognición social. In P. Hernández Chávez, J. García Campos, & M. Romo Pimentel (Eds.), Cognición: Estudios multidisciplinarios (pp. 247–264). Mexico: Centro de Estudios Filosóficos, Políticos y Sociales Vicente Lombardo Toledano.
McClung de Tapia, E. (2012). Ecological approaches to archaeological research in Central Mexico: New directions. In D. L. Nichols (Ed.), The Oxford handbook of Mesoamerican archaeology (pp. 567–578). Oxford, UK: Oxford University Press.
McClung de Tapia, E., & Martínez-Yrizar, D. (2017). Aztec agricultural production in a historical ecological perspective. In D. Nichols & E. Rodríguez-Alegría (Eds.), The Oxford handbook of the Aztecs (pp. 1–14). Oxford, UK: Oxford University Press.
Mehri, E., & Amerian, M. (2014). Scaffolding in sociocultural theory: Definition, steps, features, conditions, tools, and effective considerations. Scientific Journal of Review, 3(7), 756–765. https://doi.org/10.14196/sjr.v3i7.1505
Mesoudi, A., Whiten, A., & Laland, K. N. (2006). Towards a unified science of cultural evolution. Behavioral and Brain Sciences, 29(4), 329–347. https://doi.org/10.1017/S0140525X06009083
Miller, C. (2011). Moral relativism and moral psychology. In S. Hales (Ed.), The Blackwell companion to relativism (pp. 346–367). Chichester, UK: John Wiley & Sons.
Miriello, D., Barca, D., Crisci, G. M., Barba, L., Blancas, J., Ortíz, A., et al. (2011). Characterization and provenance of lime plasters from the Templo Mayor of Tenochtitlan (Mexico City). Archaeometry, 53(6), 1119–1141. https://doi.org/10.1111/j.1475-4754.2011.00603.x
Müller, G. B. (2017). Why an extended evolutionary synthesis is necessary. Interface Focus, 7(5), 20170015. https://doi.org/10.1098/rsfs.2017.0015
Murphy, Z. R. (2015). Extended scaffolding: A more general theory of scaffolded cognition. Retrieved from Open Access Theses website: http://docs.lib.purdue.edu/open_access_theses/585
Odling-Smee, J., Laland, K. N., & Feldman, M. W. (2003). Niche construction: The neglected process in evolution. Princeton, NJ: Princeton University Press.
Oyserman, D. (2015). Culture as situated cognition. In Emerging trends in the social and behavioral sciences (Vol. 318, pp. 1–20). https://doi.org/10.1002/9781118900772.etrds0067
Polanyi, M. (2009). The tacit dimension. Chicago, IL: University of Chicago Press.
Portal, M. A. (1986). Cuentos y mitos en una zona mazateca. Mexico: Instituto Nacional de Antropología e Historia.
Portal, M. A. (1996). El concepto de cosmovisión desde la antropología mexicana contemporánea. Inventario Antropológico. Anuario de la Revista Alteridades, 2, 59–83.
Rachels, J. (1986). The elements of moral philosophy. New York: Random House.
Reese-Taylor, K. (2012). Sacred places and sacred landscapes. In D. L. Nichols (Ed.), The Oxford handbook of Mesoamerican archaeology (pp. 1–12). Oxford, UK: Oxford University Press.
Rietveld, E., Denys, D., & van Westen, M. (2018). Ecological-enactive cognition as engaging with a field of relevant affordances: The skilled intentionality framework (SIF). In A. Newen, L. De Bruin, & S. Gallagher (Eds.), The Oxford handbook of 4E cognition (pp. 41–70). Oxford, UK: Oxford University Press.
Shvarts, A., & Bakker, A. (2019). The early history of the scaffolding metaphor: Bernstein, Luria, Vygotsky, and before. Mind, Culture, and Activity, 26(1), 4–23. https://doi.org/10.1080/10749039.2019.1574306
Singer, P. (1993). Practical ethics (2nd ed.). Cambridge, UK: Cambridge University Press.
Sloan, D. (2002). Darwin's cathedral: Evolution, religion, and the nature of society. Chicago, IL: University of Chicago Press.
Sober, E. (1998). Evolutionary theory and social science. In Routledge encyclopedia of philosophy. https://doi.org/10.4324/9780415249126-R043-1
Sterelny, K. (2003). Thought in a hostile world: The evolution of human cognition. Oxford, UK: Blackwell.
Taube, K. A. (2012). Creation and cosmology: Gods and mythic origins in ancient Mesoamerica. In D. L. Nichols (Ed.), The Oxford handbook of Mesoamerican archaeology (pp. 741–751). Oxford, UK: Oxford University Press.
Teehan, J. (2006). The evolutionary basis of religious ethics. Zygon, 41(3), 747–774. https://doi.org/10.1111/j.1467-9744.2005.00772.x
Thompson, P. (1999). Evolutionary ethics: Its origins and contemporary face. Zygon, 34(3), 473–484.
Tomasello, M. (2011). Human culture in evolutionary perspective. In M. J. Gelfand, C.-Y. Chiu, & Y.-Y. Hong (Eds.), Advances in culture and psychology (Vol. 1, pp. 5–51). Oxford, UK: Oxford University Press.
Tomasello, M., & Herrmann, E. (2010). Ape and human cognition: What's the difference? Current Directions in Psychological Science, 19(1), 3–8.
van Dijk, L., & Rietveld, E. (2017). Foregrounding sociomaterial practice in our understanding of affordances: The skilled intentionality framework. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2016.01969
Whiten, A., Ayala, F. J., Feldman, M. W., & Laland, K. N. (2017). The extension of biology through culture. Proceedings of the National Academy of Sciences, 114(30), 7775–7781. https://doi.org/10.1073/pnas.1707630114
Wimsatt, W. C., & Griesemer, J. R. (2007). Reproducing entrenchments to scaffold culture: The central role of development in cultural evolution. In R. Brandon & R. Sansom (Eds.), Integrating evolution and development: From theory to practice (pp. 227–323). Cambridge, MA: MIT Press.
Zeder, M. A. (2016). Domestication as a model system for niche construction theory. Evolutionary Ecology, 30(2), 325–348. https://doi.org/10.1007/s10682-015-9801-8
Zeder, M. A. (2018). Why evolutionary biology needs anthropology: Evaluating core assumptions of the extended evolutionary synthesis. Evolutionary Anthropology, 27(6), 267–284. https://doi.org/10.1002/evan.21747
Zurita Noguera, J., Ortiz, A., & Martínez-Yrizar, D. (2019). La innovación en Mesoamérica a través del tiempo. Mexico: Instituto de Investigaciones Antropológicas-UNAM.

Index

A
Adaptations, 11, 18, 21, 90–92, 94, 99, 100, 103, 112–113, 122–125, 127, 130, 168, 174, 175, 180, 209, 210
Aurobindo, 7
Authority (moral), 12, 136, 138, 139, 142–148

B
Behavioral capacities, 111–131
Beliefs, 10, 22, 26, 29, 41–58, 90, 91, 96, 97, 115, 126, 139, 140, 142–144, 149, 160, 162, 170, 181, 182, 191, 193, 196, 204, 206, 212
Belief update, 10
Buddhism, 7

C
Cairns, 69, 71
Chimpanzees, 10, 11, 30, 63–84, 118, 129
Cognitions, 8, 10, 11, 18, 22, 32, 42, 50, 56, 57, 90–103, 113, 206, 207, 213, 215
Cognitive development, 22, 35
Competitions, 5, 6, 20, 126, 166, 167, 194–196, 198
Conformity, 50, 66, 90–92, 99, 101, 112, 114–116, 118–121, 125, 130
Conformity bias, 197, 198
Contacts, 12, 66, 171–174, 211, 213
Cooperation, 6, 7, 47, 48, 56, 64, 65, 82, 90–92, 94, 95, 99–103, 141, 142, 147, 148, 161, 166–170, 174, 175, 183, 195, 196

Cosmovision, 13, 203–216
Cultural change, 129
Cultural differences, 32, 34
Cultural evolution, 2, 10, 12, 13, 44, 47–49, 56, 90, 112, 122, 126–131, 154–175, 179–199, 214, 215
Cultural group selection, 5, 9, 172, 193–196
Cultural variants, 160, 161, 164, 174, 182, 197, 198

D
Darwin, C., 2–8, 12, 136, 138, 142, 143, 154–156, 165, 166, 171, 175
Democracies, 12, 157–160, 162, 164, 170
Descriptive/Prescriptive, 1, 8, 97, 136–141, 145, 146
Dual-process theories, 10, 17–37

E
Emancipative values, 12, 159, 160, 170, 172, 173
Ethics, v, 1–9, 18–37, 64, 83, 113, 159, 205
Evolutionary developmental psychology, 20–21
Evolutionary ethics, v, 1–13, 17–37, 58, 90–92, 199, 204, 215, 216
Evolution, v, ix, 4–7, 9, 12, 18, 21, 56, 64, 68, 82, 83, 96, 103, 112–113, 123, 124, 126–130, 136, 142, 143, 149, 154–156, 181–185, 195, 199, 206, 207, 209, 210, 214, 216
Evolution of culture, 64, 214

© Springer Nature Switzerland AG 2021 J. De Smedt, H. De Cruz (eds.), Empirically Engaged Evolutionary Ethics, Synthese Library 437, https://doi.org/10.1007/978-3-030-68802-8



Evolution of ethics, 64
Evolution of morality, 1, 9–11, 13, 83, 122, 129, 179–199, 211, 214–216
Exaptations, 11, 89–103, 111–131, 180
Executive functioning (EF), 28, 30, 35
Experimental philosophy, 9
Extended benevolence, 12, 154–175
Extended evolutionary synthesis, 9, 203

F
Fairness, 18, 34, 67, 94–97, 116, 166–168, 181, 194, 197

H
Historical contingency, 184, 199
Historical processes, 175, 207, 211
Human rights, 157, 159–164, 170, 171
Hypocrisy, 10, 41–46, 48, 53–58

I
Innate biases, 181, 182, 195, 197, 198
Integrative anthropology, 214

K
Kant, I., 1, 90, 92, 138, 142
Kropotkin, P., 5–7

M
Mengzi, 2, 3
Mental time-travel, 11, 93, 95–97, 99–101, 103
Mesoamerican studies, 204
Moore, G.E., 8
Moral development, 18, 22
Moral epistemology, 190
Moral foundations theory, 6, 18, 119, 181
Moral norms, 11, 12, 48, 56, 82, 93–96, 101, 113, 116–118, 121, 128–130, 136, 138, 148, 179–185, 187–189, 192–199, 205
Moral objectivism, 144, 186, 188–190
Moral realism, 5, 12, 136, 139–141, 143, 145, 146, 149, 186
Moral reasoning, 23, 24, 29, 33, 34, 130, 187, 190
Moral sense, 12, 13, 123, 124, 128, 143, 144, 154–156, 166–168, 170, 174, 175, 196

Mozi, 3, 4
Mutual aid, 5–7, 11, 67

N
Naturalism, 2–4, 6, 13, 179–199
Naturalistic ethics, 2
Neurosciences, 2, 9, 65, 90, 96, 99, 103
Niche construction theory (NCT), 209
Non-genetic inheritance systems, 203, 204, 216
Normativity, 11, 92, 93, 111–131, 146
Norms, 11, 20, 32, 33, 48, 49, 53, 65, 90–92, 94, 96, 100, 103, 114, 116–119, 121, 124, 125, 128–131, 137, 142, 148, 162, 164–166, 168–171, 181–183, 193–198
Norm psychology, 170, 174, 175

O
"Objective" morality, 136, 137, 139, 140, 142, 145, 167
Objectivity, 11, 138–146, 148, 149, 188
Obligations, 12, 64, 84, 136–139, 142, 143, 145, 148, 149, 166, 167
Other-perspective-taking (OPT), 93–97, 100, 101, 103
Outcome-to-intent shift, 10, 17–37

P
Perspective-taking, 24, 123, 164, 171–174
Piaget, J., 10, 22–25, 29
Prestige bias, 10, 49, 161, 174, 198
Prudence, 11, 91, 93, 96, 99

R
Rationality, 44, 93, 103
Realism, 136, 140
Rée, P., 5, 8

S
Sacred trees, 69, 71
Scaffolding cognition, 203–215
Scapegoats, 77–82
Schopenhauer, A., 1, 2
Secondary adaptations, 112, 122–126, 128–130
Second-personal morality, 166, 167
Sidgwick, H., 7, 8

Social epistemology, ix
Social learning, 21, 33, 49, 111, 115, 116, 120, 121, 130, 181
Social norms, 11, 12, 53, 64, 65, 67, 93, 96, 112, 114, 116–118, 120, 121, 126, 129, 130, 142, 168, 169, 195, 196
Sublimation, 82
Sympathies, 3, 6, 12, 67, 97, 102, 128, 146, 147, 154–156, 160, 164–168, 171–175, 197, 198

T
Theism, 12, 13, 179–199

Theory of mind, 26, 65, 66, 82, 97, 98, 102
Transmission bias, 166, 171

V
Values, 12, 21, 36, 42, 57, 67, 90, 94, 96, 98, 115, 118, 119, 121, 122, 124, 125, 128, 129, 131, 136–140, 142–146, 149, 159, 160, 163, 164, 167, 173, 183, 191, 216

W
Wang, Y., 3
Workarounds, 170, 174