

Deliberate Ignorance: Choosing Not to Know

Strüngmann Forum Reports
Julia R. Lupp, series editor

The Ernst Strüngmann Forum is made possible through the generous support of the Ernst Strüngmann Foundation, inaugurated by Dr. Andreas and Dr. Thomas Strüngmann.

This Forum was supported by the Deutsche Forschungsgemeinschaft

Deliberate Ignorance: Choosing Not to Know

Edited by Ralph Hertwig and Christoph Engel

Program Advisory Committee: Gordon D. A. Brown, Christoph Engel, Simon Gächter, Ralph Hertwig, Julia R. Lupp, and Richard McElreath

The MIT Press
Cambridge, Massachusetts
London, England

© 2020 Massachusetts Institute of Technology and the Frankfurt Institute for Advanced Studies

Series Editor: J. R. Lupp
Editorial Assistance: M. Turner, C. Stephen, A. Ducey-Gessner
Photographs: N. Miguletz
Lektorat: BerlinScienceWorks

All rights reserved. No part of this book may be reproduced in any form by electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

The book was set in TimesNewRoman and Arial. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data
Names: Hertwig, Ralph, editor. | Engel, Christoph, 1956- editor.
Title: Deliberate ignorance : choosing not to know / edited by Ralph Hertwig and Christoph Engel.
Description: Cambridge, Massachusetts : The MIT Press, [2020] | Series: Strüngmann forum reports | Includes bibliographical references and index.
Identifiers: LCCN 2020010682 | ISBN 9780262045599 (paperback)
Subjects: LCSH: Ignorance (Theory of knowledge)--Social aspects. | Ignorance (Theory of knowledge)--Psychological aspects.
Classification: LCC BD221 .D434 2021 | DDC 121/.2--dc23
LC record available at https://lccn.loc.gov/2020010682

10 9 8 7 6 5 4 3 2 1

Contents

List of Contributors vii
Preface xi

The Phenomenon
1 Homo Ignorans: Deliberately Choosing Not to Know • Ralph Hertwig and Christoph Engel 3
2 The Complex Dynamics of Deliberate Ignorance and the Desire to Know in Times of Transformation: The Case of Germany • Dagmar Ellerbrock and Ralph Hertwig 19
3 Utilizing Strategic Ignorance in Negotiations • Sarah Auster and Jason Dana 39
4 Blinding to Remove Biases in Science and Society • Robert J. MacCoun 51

Deep Structure
5 The Deep Structure of Deliberate Ignorance: Mapping the Terrain • Barry Schwartz, Peter J. Richerson, Benjamin E. Berkman, Jens Frankenreiter, David Hagmann, Derek M. Isaacowitz, Thorsten Pachur, Lael J. Schooler, and Peter Wehling 65
6 How Forgetting Aids Homo Ignorans • Lael J. Schooler 89
7 Willful Construction of Ignorance: A Tale of Two Ontologies • Stephan Lewandowsky 101

Models
8 Models of Deliberate Ignorance in Individual Choice • Gordon D. A. Brown and Lukasz Walasek 121
9 The Evolution of Deliberate Ignorance in Strategic Interaction • Christian Hilbe and Laura Schmid 139
10 The Zoo of Models of Deliberate Ignorance • Pete C. Trimmer, Richard McElreath, Sarah Auster, Gordon D. A. Brown, Jason Dana, Gerd Gigerenzer, Russell Golman, Christian Hilbe, Anne Kandler, Yaakov Kareev, Lael J. Schooler, and Nora Szech 155

Norms
11 Harry Potter and the Welfare of the Willfully Blinded • Felix Bierbrauer 187
12 Is There a Right Not to Know Genetic Information about Oneself? • Benjamin E. Berkman 199
13 Reflections on Deliberate Ignorance • Lewis A. Kornhauser 217
14 Normative Implications of Deliberate Ignorance • Joachim I. Krueger, Ulrike Hahn, Dagmar Ellerbrock, Simon Gächter, Ralph Hertwig, Lewis A. Kornhauser, Christina Leuker, Nora Szech, and Michael R. Waldmann 241

Institutions
15 Institutions Promoting or Countering Deliberate Ignorance • Doron Teichman, Eric Talley, Stefanie Egidy, Christoph Engel, Krishna P. Gummadi, Kristin Hagel, Stephan Lewandowsky, Robert J. MacCoun, Sonja Utz, and Eyal Zamir 275
16 Deliberate Ignorance and the Law • Eyal Zamir and Roi Yair 299
17 Deliberate Ignorance: Present and Future • Christoph Engel and Ralph Hertwig 317

Bibliography 333
Subject Index 373
Strüngmann Forum Report Series 379

List of Contributors

Auster, Sarah: Department of Decision Sciences, Bocconi University, 20136 Milan, Italy
Berkman, Benjamin E.: Department of Bioethics, National Institutes of Health, Bethesda, MD 20892-1156, U.S.A.
Bierbrauer, Felix: Center for Macroeconomic Research, University of Cologne, 50923 Cologne, Germany
Brown, Gordon D. A.: Department of Psychology, University of Warwick, Coventry CV4 7AL, U.K.
Dana, Jason: School of Management, Yale University, New Haven, CT 06511, U.S.A.
Egidy, Stefanie: Max Planck Institute for Research on Collective Goods, 53113 Bonn, Germany
Ellerbrock, Dagmar: Department of History, Technische Universität Dresden, 01062 Dresden, Germany
Engel, Christoph: Max Planck Institute for Research on Collective Goods, 53113 Bonn, Germany
Frankenreiter, Jens: Max Planck Institute for Research on Collective Goods, 53113 Bonn, Germany
Gächter, Simon: School of Economics, University of Nottingham, Nottingham NG7 2RD, U.K.
Gigerenzer, Gerd: Harding Center for Risk Literacy, Max Planck Institute for Human Development, 14195 Berlin, Germany
Golman, Russell: Department of Social and Decision Sciences, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
Gummadi, Krishna P.: Department of Networked Systems, Max Planck Institute for Software Systems, 66123 Saarbruecken, Germany
Hagel, Kristin: Department of Human Behavior, Ecology and Culture, Max Planck Institute for Evolutionary Anthropology, 04103 Leipzig, Germany
Hagmann, David: Harvard Kennedy School, Harvard University, Cambridge, MA 02138, U.S.A.
Hahn, Ulrike: Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, U.K.
Hertwig, Ralph: Department of Adaptive Rationality and Cognition, Max Planck Institute for Human Development, 14195 Berlin, Germany
Hilbe, Christian: Max Planck Research Group Dynamics of Social Behavior, Max Planck Institute for Evolutionary Biology, 24306 Plön, Germany
Isaacowitz, Derek M.: Department of Psychology, Northeastern University, Boston, MA 02115, U.S.A.
Kandler, Anne: Department of Human Behavior, Ecology and Evolution, Max Planck Institute for Evolutionary Anthropology, 04103 Leipzig, Germany
Kareev, Yaakov: Federmann Center for the Study of Rationality, The Hebrew University of Jerusalem, 91904 Jerusalem, Israel
Kornhauser, Lewis A.: New York University School of Law, New York, NY 10012, U.S.A.
Krueger, Joachim I.: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, U.S.A.
Leuker, Christina: Center for Adaptive Rationality, Max Planck Institute for Human Development, 14195 Berlin, Germany
Lewandowsky, Stephan: School of Experimental Psychology, University of Bristol, Bristol BS8 1TU, U.K.; and University of Western Australia, Crawley WA 6009, Australia
MacCoun, Robert J.: Law School, Department of Psychology, and Freeman Spogli Institute, Stanford University, Stanford, CA 94305, U.S.A.
McElreath, Richard: Department of Human Behavior, Ecology and Culture, Max Planck Institute for Evolutionary Anthropology, 04103 Leipzig, Germany
Pachur, Thorsten: Center for Adaptive Rationality, Max Planck Institute for Human Development, 14195 Berlin, Germany
Richerson, Peter J.: Department of Environmental Science and Policy, University of California Davis, Davis, CA 95616, U.S.A.
Schmid, Laura: Institute of Science and Technology (IST) Austria, 3400 Klosterneuburg, Austria
Schooler, Lael J.: Department of Psychology, Syracuse University, Syracuse, NY 13244, U.S.A.
Schwartz, Barry: Department of Management, Haas School of Business, UC Berkeley, Berkeley, CA 94720, U.S.A.
Szech, Nora: Department of Political Economy, Karlsruhe Institute of Technology, 76133 Karlsruhe, Germany
Talley, Eric: Millstein Center for Global Markets and Corporate Ownership, Columbia Law School, New York, NY 10025, U.S.A.
Teichman, Doron: Faculty of Law, Hebrew University of Jerusalem, Mt. Scopus, Jerusalem 9190501, Israel
Trimmer, Pete C.: Faculty of Biology, Evolutionary Biology, University of Bielefeld, Bielefeld 33615, Germany
Utz, Sonja: Department of Social Media, Leibniz-Institut für Wissensmedien, 72076 Tübingen; and University of Tübingen, 72074 Tübingen, Germany
Walasek, Lukasz: Department of Psychology, University of Warwick, Coventry CV4 7AL, U.K.
Waldmann, Michael R.: Department of Psychology, University of Göttingen, 37073 Göttingen, Germany
Wehling, Peter: Institute of Sociology, Faculty of Social Sciences, Goethe University, 60323 Frankfurt am Main, Germany
Yair, Roi: Faculty of Law, Hebrew University of Jerusalem, Mt. Scopus, Jerusalem 9190501, Israel
Zamir, Eyal: Faculty of Law, Hebrew University of Jerusalem, Mt. Scopus, Jerusalem 9190501, Israel

Preface

Science is a highly specialized enterprise—one that enables areas of enquiry to be minutely pursued, establishes working paradigms and normative standards, and supports rigor in experimental research. All too often, however, "problems" are encountered that fall outside the scope of any one discipline, and to progress, new perspectives are needed to expand conceptualization, increase understanding, and define trajectories for research to pursue. The Ernst Strüngmann Forum was established in 2006 to address these types of topics. Founded on the tenets of scientific independence and the inquisitive nature of the human mind, we provide a platform for experts to scrutinize topics that require input from multiple areas of expertise. Our gatherings (or Forums) take the form of intellectual retreats: disciplinary idiosyncrasies are put aside, existing perspectives are questioned, and consensus is never the goal. Instead, gaps in knowledge are exposed, questions formulated, and ways to push research forward are collectively sought. The results of the entire process are disseminated through the Strüngmann Forum Report series.

This volume reports on the 29th Ernst Strüngmann Forum. It synthesizes the ideas and perspectives that evolved over a two-year period and highlights questions that remain to be addressed. For those seeking insight into the process, this brief overview is offered.

In 2017, perhaps due to their previous experience with the Ernst Strüngmann Forum (Engel and Singer 2008; Gigerenzer and Gray 2011), Ralph Hertwig and Christoph Engel contacted us to explore the possibility of proposing a theme on deliberate ignorance. Having recently published an article on this topic (Hertwig and Engel 2016), they were eager to explore the phenomenon in greater depth and sought out our help to create the requisite discourse. Their proposal provided a clear starting point, but as anyone who has been involved with our approach will tell you, once initiated, the discourse takes on its own dynamics: at each stage, as perspectives from others become available, ideas are revisited, scrutinized, and examined.

After the proposal was accepted, Gordon Brown, Simon Gächter, and Richard McElreath joined us on the Program Advisory Committee to transform the proposal into a framework that would support an extended, multidisciplinary discussion. The committee worked together to delineate discussion topics, identify potential participants, and formulate overarching goals:

• To examine the epistemic choice of deliberate ignorance using specific cases
• To identify and model the motivational, cognitive, and affective processes that underlie deliberate ignorance
• To explore normative implications and institutional responses to deliberate ignorance


Four thematic areas were created to focus the working groups, and questions were proposed for each to consider. To maximize interactions, invited "background papers" presented information in advance on specific topics, and from March 17–22, 2019, a diverse group of experts—economists, psychologists, legal scholars, anthropologists, behavioral ecologists, sociologists, ethicists, historians, and computer scientists—gathered in Frankfurt for a most lively discussion. This volume is organized around these thematic areas. Each section contains the background papers in their finalized form (i.e., after peer review and revision) as well as summary reports of the group discussions (Chapters 5, 10, 14, and 15).

Exploring the Phenomenon of Deliberate Ignorance

The contributions in this first section explore different aspects of deliberate ignorance. To provide direct access to the core topics from Hertwig and Engel's 2016 article, Chapter 1 presents a slightly adapted version. It lays out the rationale for their initial definition. Further, it systematizes different types of deliberate ignorance, describes their functions, discusses normative implications, and considers how to theorize the phenomenon. This is then followed by three case studies: In Chapter 2, Dagmar Ellerbrock and Ralph Hertwig examine whether deliberate ignorance is present in societies that undergo transitions, with a focus on twentieth-century Germany. In Chapter 3, Sarah Auster and Jason Dana discuss the strategic use of ignorance in negotiations, analyzing when the deliberate avoidance of information can be advantageous. In Chapter 4, Robert MacCoun looks at how blinding methods can potentially remove bias to improve judgments. Addressing various concerns that can arise (e.g., in blind orchestral auditions), he points to the need for new theory and continued research.

What Constitutes the Deep Structure of Deliberate Ignorance?

In this working group, Barry Schwartz et al. explore the extent to which deliberate ignorance is common across different actors and domains of experience (Chapter 5). They review some of the psychological and cultural mechanisms that may be involved and identify potential variables that could influence deliberate ignorance as well as the consequences that would follow. In Chapter 6, Lael Schooler analyzes how processes critical to encoding, retrieving, and forgetting information in memory help achieve functions ascribed to deliberate ignorance. Thereafter, in Chapter 7, Stephan Lewandowsky looks at the purposeful construction of ignorance using two specific cases: the rationale used to justify the 2003 invasion of Iraq and the persistent use of disinformation by Donald Trump. Lewandowsky critically discusses the consequences of such willful construction of ignorance on individuals and society at large.

How Can Deliberate Ignorance Be Modeled?

To address the types of conceptual frameworks that may be needed to model deliberate ignorance, Gordon Brown and Lukasz Walasek review, in Chapter 8, existing models used in psychology and economics. They argue that both types are useful to understand different aspects of the phenomenon and identify three broad classes of relevant models, highlighting current gaps that research may wish to pursue. In Chapter 9, Christian Hilbe and Laura Schmid look at specific cases where deliberate ignorance evolves during strategic interactions. They propose two basic models to illustrate how ignorance can evolve among self-interested and payoff-maximizing individuals. Chapter 10 summarizes the extensive discussions of this working group. Pete Trimmer et al. begin with a focus on cases where standard assumptions are violated, consider cases from the individual's perspective, and discuss different classes of "not wanting to know" something. In addition, they explore strategic cases of deliberate ignorance, where obtaining information would signal to others that information acquisition has occurred, and discuss whether deliberate ignorance could emerge in population-level models.

Is Deliberate Ignorance Good or Bad and, If So, When?

When is it legitimate to ignore available information? When should the discovery of the truth be traded against anticipated consequences? When does concealing information improve welfare or break through cycles of revenge and retribution? In Chapter 11, Felix Bierbrauer argues that welfare economics should deliberately ignore (certain types of) social preferences to avoid repugnant policy choices. Benjamin Berkman then considers, in Chapter 12, the "right not to know" specific to the ethical debates related to genetic testing and genomic sequencing. Challenging the majority view that there is a nearly absolute right not to know, he suggests a more nuanced approach and offers recommendations on how best to balance individual autonomy and professional advantage in the future. In Chapter 13, Lewis Kornhauser reflects on different interpretations of deliberate ignorance and develops a taxonomy of the phenomenon. He suggests criteria that could be used to select among definitions, and identifies normative questions that arise, ranging from debates over individual rationality to questions in political philosophy. In a summary of their discussions (Chapter 14), Krueger et al. outline steps to enable a normative analysis of deliberate ignorance. From the perspectives of morality and rationality, they hold that deliberate ignorance is neither categorically bad nor good, and offer a suite of criteria to afford a more nuanced understanding and enable future work.

What Are the Institutional Implications of Deliberate Ignorance?

In Chapter 15, Doron Teichman et al. outline concrete institutional mechanisms (e.g., contracts) that this working group felt could counter or promote deliberate ignorance. They provide an analysis of how organizational structures and mechanisms are used to compartmentalize information and review technology's role. Following on, in Chapter 16, Eyal Zamir and Roi Yair survey ways in which the law overcomes some instances of deliberate ignorance while fostering others. They raise the issue of collective ignorance and provide examples where the law actually encourages deliberate ignorance to facilitate better decision making and promote different values. They examine the issue of system design and constitutional protection of human rights using "veils of ignorance" as well as specific legal topics: inadmissibility and other evidence rules, anonymity and omitted details of candidates to overcome the biases and prejudices of decision makers, expungement of criminal records, and the right to be forgotten.

Closing Remarks

It is important to note that a Forum is not a linear process. The initial framework put into place by the Program Advisory Committee triggered a lively debate between experts with multiple (sometimes divergent) perspectives. Realizing effective discourse, however, required a willingness to reach across the divide between disciplinary traditions, terminology, and concepts—a challenge that may still exist, long past the completion of this book. Yet consensus was never the goal of this exercise. Instead, diverging opinions were needed to uncover true "gaps" in knowledge. Then the challenge became to collectively formulate ways to fill such gaps. To close out this volume, Engel and Hertwig reflect in Chapter 17 on some of the important conceptual issues that emerged from the Forum. They highlight what they consider to be some of the important insights that were gained as well as some of the open issues that remain to be addressed.

An endeavor of this kind creates unique group dynamics and puts demands on everyone. Throughout, each person who participated played an active role, and for their efforts, I am grateful. A special word of thanks goes to the Program Advisory Committee, to the authors and reviewers of the background papers, as well as to the moderators of the individual working groups (Pete Richerson, Richard McElreath, Ulrike Hahn, and Eric Talley). The rapporteurs of the working groups (Barry Schwartz, Pete Trimmer, Joachim Krueger, and Doron Teichman) deserve special recognition, for to draft a report during the Forum and finalize it in the months thereafter is no simple matter. Importantly, I extend my appreciation to Ralph Hertwig and Christoph Engel: both contributed equally to this 29th Ernst Strüngmann Forum, lending their expertise and motivational powers to each step as needed.

The Ernst Strüngmann Forum is able to conduct its work because of its stable institutional support. The generous backing of the Ernst Strüngmann Foundation, established by Dr. Andreas and Dr. Thomas Strüngmann in honor of their father, enables us to pursue new knowledge in the service of science and society. In addition, the following valuable partnerships are gratefully acknowledged: the work of our Scientific Advisory Board ensures scientific independence of the Forum; the Deutsche Forschungsgemeinschaft offers supplemental financial support; and the Frankfurt Institute for Advanced Studies shares its vibrant intellectual setting with us.

Expanding the boundaries to knowledge is never easy, and long-held views are often difficult to put aside. Yet, when the limits to knowledge begin to appear and gaps can be identified, the act of formulating strategies to move past this point becomes a most invigorating activity. On behalf of everyone involved in this 29th Ernst Strüngmann Forum, I hope this volume will motivate further action to address the many issues that require attention to complete our understanding of the phenomenon of deliberate ignorance.

Julia R. Lupp, Director, Ernst Strüngmann Forum
Frankfurt Institute for Advanced Studies
Ruth-Moufang-Str. 1, 60438 Frankfurt am Main, Germany
https://esforum.de/

The Phenomenon

1 Homo Ignorans: Deliberately Choosing Not to Know

Ralph Hertwig and Christoph Engel

Abstract

Western history of thought abounds with claims that knowledge is valued and sought, yet people often choose not to know. We call the conscious choice not to seek or use knowledge (or information) deliberate ignorance. Using examples from a wide range of domains, this chapter¹ demonstrates that deliberate ignorance has important functions. We systematize types of deliberate ignorance, describe their functions, discuss their normative desirability, and consider how the phenomenon can be theorized. To date, psychologists have paid relatively little attention to the study of ignorance, let alone the deliberate kind. The desire not to know, however, is no anomaly. It is a choice to seek, rather than reduce, uncertainty whose reasons require nuanced cognitive and economic theories and whose consequences—for the individual and for society—require analyses of both actor and environment.

¹ This chapter has been adapted from the authors' 2016 article published in Perspectives on Psychological Science, vol. 11, no. 2, pp. 359–372.

Yet ah! Why should they know their fate?
Since sorrow never comes too late,
And happiness too swiftly flies.
Thought would destroy their paradise.
No more; where ignorance is bliss,
'Tis folly to be wise.
—Gray (1747)

The old saw "What you don't know won't hurt you" turns out to be false at a deeper level. Just the contrary is true: "It is just what you don't know that will hurt you."…Ignorance makes real choice impossible.
—Maslow (1963)

When James Watson, co-discoverer of the structure of DNA, agreed to have his genome sequenced and released, he had one request: Information about the apolipoprotein E gene, associated with late-onset Alzheimer disease, should not be shared, even with him (Wheeler et al. 2008). What made this quintessential knowledge-seeker shrink from this information?

The Human Desire to Know

Knowledge is valued; knowledge is sought. Western history of thought abounds with examples. Adam and Eve could not help but eat from the tree of knowledge. The first line in Aristotle's Metaphysics reads: "All men, by nature, desire to know" (Ross 1924:255). English philosophers Thomas Hobbes and Francis Bacon celebrated curiosity and the pleasures of learning. Hobbes located curiosity among the passions and considered it a kind of "perpetuum mobile of the soul" (Daston and Park 2001:307). Curiosity is a pure desire, distinguished "by a perseverance of delight in the continual and indefatigable generation of Knowledge, [which] exceedeth the short vehemence of any carnall Pleasure" (Hobbes 1651/1968:124). Similarly, Bacon said of knowledge: "there is no satiety, but satisfaction and appetite are perpetually interchangeable" (Montagu 1841:250). Modern psychology has echoed these views and portrayed humans as possessing an emotion-like urge to know (Silvia 2008) or an instinct-like "burning curiosity" (Maslow 1963:114). Building on Carnap's (1947:138–141) "principle of total evidence," philosophers have argued that utility maximizers use all freely available evidence when estimating a probability (Good 1967), and economists have contended that utility maximizers always prefer more information to less (Blackwell 1953). Legal scholars claim that more knowledge promotes the veracity of judgments and facilitates settlement (Loewenstein and Moore 2004). Economic models often assume that more knowledge translates into greater bargaining power (see references in Conrads and Irlenbusch 2013). Psychoanalysts help individuals to liberate themselves from their "ostrich-like policy" of repressing painful knowledge (Freud 1950:152). Knowledge is valued; knowledge is sought.

The Human Desire Not to Know

In today's aging societies, the risk of outliving personal assets is real. Economic life-cycle models suggest spending those assets optimally; that is, tailoring consumption patterns such that assets reach zero at death (Modigliani 1986; a numeric sketch of this annuity logic appears below). To plan accordingly, however, retirees need at least one crucial piece of information: the date of their death. Yet do we mortals—as opposed to our economically rational alter egos—really want to know exactly when we are going to die? To have a "good" death, perhaps we should. The medieval Ars Moriendi literature warns that a sudden death robs people of the opportunity to repent their sins. From this perspective, prisoners facing execution are "fortunate," as they know the hour of their death (Bellarmine 1989).
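The life-cycle logic above can be made concrete with a small worked example. The following sketch is ours, not the authors': it assumes a fixed real interest rate and computes the constant annual consumption (a standard annuity formula) that exhausts a given stock of wealth exactly at an assumed date of death, which is why that date is the one crucial unknown.

    # A minimal sketch of life-cycle consumption smoothing (Modigliani 1986).
    # All numbers are illustrative assumptions, not figures from the chapter.

    def annuity_consumption(wealth: float, rate: float, years: int) -> float:
        """Constant annual consumption that drives wealth to zero after `years`."""
        if rate == 0:
            return wealth / years
        return wealth * rate / (1 - (1 + rate) ** -years)

    wealth = 500_000.0  # retirement savings
    rate = 0.02         # assumed real interest rate

    # The plan hinges on the one piece of information retirees lack: the
    # date of death. A 20- vs. 30-year horizon changes the answer markedly.
    for years in (20, 30):
        print(f"{years}-year horizon: {annuity_consumption(wealth, rate, years):,.0f} per year")

Under these assumptions, spending roughly 30,600 per year is optimal for a 20-year horizon but ruinous for a 30-year one (about 22,300), which is exactly the information problem the text describes.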


Although humans are often portrayed as informavores, the circumstances under which they refrain from acquiring or consulting information are many and varied. Take, for instance, individuals at risk of Huntington disease. Nearly everyone with the defective gene who lives long enough will go on to develop this devastating condition. Yet only 3% to 25% of those at high risk opt to take the near-perfect test available to identify carriers of the gene (Creighton et al. 2003; Yaniv et al. 2004). Similarly, up to 55% of people who decide to be tested for HIV do not subsequently return to learn their test results (Hightow et al. 2003). Knowledge is not always sought (Ullmann-Margalit 2000).

The Stasi, East Germany's secret police, recruited vast networks of civilian informers—colleagues, friends, and even spouses—to spy on anyone deemed disloyal. When East Germany ceased to exist, people were allowed to consult the files that had been kept by the Stasi to see who had informed on them, sometimes with heartbreaking results (Jones 2014). Not everyone, however, wanted to know. Nobel laureate Günter Grass, for example, a frequent visitor to East Germany, refused to find out which of his friends and colleagues had spied on him (Hage and Thimm 2010).

The reality, functions, and rationality of this epistemological abstinence are our focus here. We are not interested in ignorance, per se (Gross and McGoey 2015; Merton 1987; Moore and Tumin 1949; Schneider 1962), in the institutional "production" of ignorance (Proctor and Schiebinger 2008), or in the suppression of unwanted memories (Anderson and Green 2001). In addition, we do not doubt that ignorance can have enormous individual and collective costs (e.g., Marshall 2014). Our concern, instead, is with deliberate ignorance, defined as the conscious individual or collective choice not to seek or use information (or knowledge; we use the terms interchangeably). We are particularly interested in situations where the marginal acquisition costs are negligible and the potential benefits potentially large, such that—from the perspective of the economics of information (Stigler 1961)—acquiring information would seem to be rational (Martinelli 2006). We believe that deliberate ignorance is anything but a rare departure from the otherwise unremitting quest for knowledge and certainty: It is an underrated mental tool that exploits the sometimes ingenious powers of ignorance. We therefore posit that psychological science has erred in choosing to remain largely ignorant on the topic of deliberate ignorance. We demonstrate that deliberate ignorance is widespread and propose a taxonomy that brings structure to the rich body of examples provided as well as address normative issues: Is deliberate ignorance a good thing? If so, when, for whom, and why?

A Taxonomy of Deliberate Ignorance

Mainstream social and behavioral sciences have long skirted the topic of ignorance ("a certain sociological ignorance of ignorance": Abbott 2010:174) or treated it as a social problem in need of eradication (Ungar 2008). Recently, though, sociologists, philosophers, and anthropologists have come to view ignorance as an object of study with important epistemological and political implications (Gross and McGoey 2015; High et al. 2012; Proctor and Schiebinger 2008). Psychologists, however, have barely been involved in the new study of ignorance or deliberate ignorance, although selective exposure is pertinent to it (Hart et al. 2009). Against this background, we propose the taxonomy outlined in Figure 1.1. Our taxonomy is just that: an attempt at organizing the evidence. An important next step will be theory building. But first, it is important to recognize the landscape of deliberate ignorance. The taxonomy maps out what is, in large parts, uncharted empirical and conceptual territory in psychology.

[Figure 1.1 Taxonomy of types of deliberate ignorance. The tree's root, "deliberate ignorance," branches into six devices: emotion-regulation and regret-avoidance; suspense- and surprise-maximization; performance-enhancing; strategic (with four subtypes: gaining bargaining advantage, self-disciplining, eschewing responsibility, and avoiding liability); impartiality and fairness; and cognitive-sustainability and information-management.]

Deliberate Ignorance as an Emotion-Regulation and Regret-Avoidance Device

People can manipulate their beliefs by selecting the sources of information they consult (Akerlof and Dickens 1982) and ignoring some sources altogether. Information avoidance, or defensive avoidance (Howell and Shepperd 2013), versus protective ignorance (Yaniv et al. 2004) has been defined as "any behavior intended to prevent or delay the acquisition of available but potentially unwanted information" (Sweeny et al. 2010:341). It has primarily been studied in the health domain (Howell and Shepperd 2012; Melnyk and Shepperd 2012; Shani et al. 2012). People may choose to avoid potentially threatening health information because it compromises cherished beliefs: they may fear loss of autonomy (e.g., a grueling medical regimen); anticipate mental discomfort, fear, and cognitive dissonance; or want to keep hope alive. On a pragmatic level, medical information may have material implications. People with the Huntington disease gene may fear stigmatization, discrimination in the workplace, and loss of medical or insurance benefits (Wahlin 2007). In addition, once an irreversible decision has been made (e.g., to undergo a risky treatment), a person may want to avoid regret by not seeking information that suggests a different decision might have produced a better outcome (Van Dijk and Zeelenberg 2007). The regulatory function of deliberate ignorance may extend to a wider range of domains (e.g., investors who ignore their portfolios in downturns: Karlsson et al. 2009) as well as to emotions (e.g., social and moral emotions: Elster 1996; Hutcherson and Gross 2011). One such emotion is envy. Pay secrecy can be a firm's strategy to hide pay inequality. Among employees, choosing not to discuss one's pay with one's colleagues can be a conscious strategy to avoid envy and its potentially detrimental effects on job satisfaction.

Deliberate Ignorance as a Suspense- and Surprise-Maximization Device

Suppose you are planning to spend the weekend binging on the new season of your favorite TV drama. Would you appreciate a friend giving you a preview? Hardly. People attend soccer games and read mystery novels for the drama. Revealing the ending would spoil their fun. Any policy designed to maximize suspense or surprise will reveal key outcomes (e.g., your birthday present) only at the last minute (Ely et al. 2015).

Deliberate Ignorance as a Performance-Enhancing Device

A common belief in psychology and beyond is that presenting learners with information on their task performance is a powerful and effective way to boost performance. Yet feedback has also been shown to reduce performance under some circumstances (Kluger and DeNisi 1996), such as when it causes attention to be directed away from the task to the self, depleting the cognitive resources needed for the task. It has also been suggested that feedback revealing large discrepancies between aspired-for and actual performance triggers arousal that, in turn, impairs performance (Kluger and DeNisi 1998). These detrimental effects raise the counterintuitive possibility that deliberately foregoing information may enhance learning and, relatedly, performance (Huck et al. 2015; Shen et al. 2015). For instance, arousal might be particularly high and disadvantageous when comparisons with a rival are involved (Garcia et al. 2013). Another way in which deliberate ignorance may enable performance—and we admit that this idea is purely speculative—is the tendency to adopt an inside view when intuitively forecasting the future progress of a plan. According to Kahneman and Lovallo (1993), people tend to look at the unique details of a plan or project rather than focusing on the statistics of a class of past similar cases. This mindset is typically regarded as bias, resulting in overly optimistic forecasts. Yet taking an inside view and deliberately ignoring outside information may be instrumental to reaching the decision to engage in an ambitious project. It is possible that no textbook would ever be written, no house built, and no opera composed if people based their decision on the progress and success of similar endeavors.

Deliberate Ignorance as a Strategic Device

In economics, psychology, political science, and sociology, the reason most frequently invoked to explain why people do not always seek knowledge is strategic ignorance. Strategic ignorance has diverse functions; we discuss four of them (Figure 1.1).

Since Schelling (1956), economists have investigated to what extent deliberate ignorance helps negotiators to gain a bargaining advantage (McAdams 2012). Consider a situation in which one negotiator does not know how costly a breakdown in negotiations would be for both parties. Typically, there are multiple options for striking a successful deal, and each has a different degree of appeal for the negotiating parties. Both parties would generally prefer any of these options to a breakdown in negotiations. In game theoretic terms, the typical bargaining situation puts negotiators in a "battle of the sexes." If one party opts not to know what a reasonable solution is, the burden of avoiding a stalemate rests with the informed bargainer, who is forced to make concessions from which the ignorant party stands to gain. Forsaking information may even help both parties. If the information is likely to be ambiguous, for example, any egocentric bias in resolving this ambiguity may shrink the bargaining range (Loewenstein and Moore 2004). Indeed, a number of experimental bargaining studies and principal-agent situations (Crémer 1995) have shown that negotiating players may benefit from ignorance and that a nontrivial number of players deliberately decide to remain ignorant. This observation holds if players can hide their intention to remain ignorant (Conrads and Irlenbusch 2013).

Second, deliberate ignorance may function as a self-disciplining device. This possibility is elaborated in Carrillo and Mariotti's (2000) theoretical analysis of a person with time-inconsistent preferences (i.e., a future incarnation of the self with other goals than the present self) with respect to consuming a good that exacts costs on future health. For instance, nonsmokers who believe the risk of lung cancer to be high may fear that seeing lower estimates would encourage them to smoke, and thus change their behavior in a way they will later regret.

Third, people can eschew responsibility for their actions by avoiding knowledge of how those actions and their outcomes affect others or public goods such as the environment (Thunström et al. 2014). Studies using the dictator game have shown that the opportunity to avoid responsibility (by choosing to be ignorant of the recipient's payoffs) increases the proportion of selfish choices; conversely, when players cannot avoid responsibility, they render fairer (or more ethical) choices (Dana 2006; Dana et al. 2007; a numeric sketch of this design follows the list below). Eschewing moral responsibility through ignorance also helps to prevent cognitive dissonance: "often it is better not to know because if you did know, then you would have to act and stick your neck out" (Maslow 1963:123). Utility-maximizing individuals may even be willing to pay to be shielded from information (Nyborg 2011).

Fourth, choosing to remain ignorant can be a strategy for avoiding liability in a social or even a legal sense (Gross and McGoey 2015; McGoey 2012a). It can be used in the context of

• institutional failures, such as ignorance of unauthorized trading or of the risks of highly speculative financial instruments (Davies and McGoey 2012),
• risky but lucrative business endeavors, such as ignorance of a new drug's adverse effects (McGoey 2012b), or
• humanitarian catastrophes (Cohen 2001; Maslow 1963).
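The responsibility-eschewing pattern from the dictator-game studies can be sketched numerically. The following toy calculation is ours and only loosely follows the hidden-payoff design associated with Dana et al. (2007); the payoffs and the 50/50 state probabilities are illustrative assumptions. It shows the "moral wiggle room" at work: a dictator who stays ignorant can always take the selfish option, at the recipient's expense in expectation.

    # Toy sketch of a hidden-payoff dictator game, loosely following the
    # design associated with Dana et al. (2007). Payoffs (dictator, recipient)
    # and the 50/50 state probabilities are illustrative assumptions.
    PAYOFFS = {
        "aligned":     {"A": (6, 5), "B": (5, 1)},
        "conflicting": {"A": (6, 1), "B": (5, 5)},
    }

    def expected_payoffs(reveal: bool) -> tuple[float, float]:
        """Average payoffs for a dictator who behaves fairly only when a
        conflict of interest is actually known to them."""
        d_total = r_total = 0.0
        for state, options in PAYOFFS.items():
            if reveal and state == "conflicting":
                choice = "B"  # the known conflict creates pressure to concede
            else:
                choice = "A"  # ignorant (or aligned), the dictator takes the larger payoff
            d, r = options[choice]
            d_total += d
            r_total += r
        n = len(PAYOFFS)
        return d_total / n, r_total / n

    print("revealing the state:", expected_payoffs(True))    # (5.5, 5.0)
    print("deliberate ignorance:", expected_payoffs(False))  # (6.0, 3.0)

Staying ignorant raises the dictator's expected payoff from 5.5 to 6.0 while cutting the recipient's from 5.0 to 3.0, mirroring the finding that the option to remain ignorant increases the proportion of selfish choices.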

As just one example, scientific communities, funding institutions, and lawmakers decide to leave some areas of inquiry unfunded because exploring them involves profound risks to the public (e.g., research on highly pathogenic avian influenza H5N1 viruses: Fouchier et al. 2012). Finally, policy makers may resist evidence-based evaluation of their policies because they do not want to be held responsible for failures. In recent years, for instance, the German federal states have made it impossible for researchers to break down the data of the Programme for International Student Assessment (PISA) by state, thereby preventing scientists and the general public from comparing performance across federal states.

Let us briefly turn to the pervasive role of deliberate ignorance as a strategy for avoiding liability in legal affairs. There are few places where deliberate ignorance plays a more central role than in the courtroom. Under most rules of criminal law, it must be shown to the requisite standard that a defendant was aware of the facts that constitute the crime in question. To illustrate, consider the U.S. Code, Title 18 (Crimes and Criminal Procedure, Part 1, Section 1035)² on social security fraud:

(a) Whoever, in any matter involving a health care benefit program, knowingly and deliberately...makes any materially false, fictitious, or fraudulent statements or representations, ...shall be fined under this title or imprisoned not more than 5 years, or both [emphasis added].

² https://www.law.cornell.edu/uscode/text/18/1035 (accessed Jan. 13, 2020).

This and other provisions require the determination of positive knowledge. A potential defendant may therefore avoid criminal liability simply by not acquiring knowledge. Legal systems sometimes seek to override this strategy (Robbins 1990). For instance, the "ostrich instruction" tells jury members in U.S. courts that they may find the knowledge requirement to be satisfied by a defendant's willful ignorance of the relevant facts. Yet this instruction raises important questions, such as why the willfully blind actor is, in a normative sense, legally and morally culpable (Hellman 2009) and what exactly the mental state of willful ignorance is, including the underlying motives (Sarch 2014). Last but not least, how the legal system evaluates the implications of deliberate ignorance depends on who the homo ignorans is. In the attorney–client relationship, the lawyer's deliberate ignorance is tacitly approved. It has been argued that attorneys, notwithstanding their obligations to the public, must be permitted, in the interest of loyalty to their client, not to seek out important information pertaining to the client's conduct. This practice has been argued to raise ethical issues (Roiphe 2011).

Deliberate Ignorance as an Impartiality and Fairness Device

In his conception of a social contract, Rawls (1999) asked readers to place themselves in a hypothetical state of not knowing their place in society, or any other personal, social, or historical circumstances. Theoretically speaking, everyone thus shielded by a thick veil of ignorance from the temptation of pursuing their own special interests would agree on universal standards of fairness and justice. Beyond the realm of thought experiments, this veil-of-ignorance method is used, for instance, by experimenters in double-blind randomized trials (Kaptchuk 1998), hiring boards, and courts to preempt bias. One example is blind auditioning in symphonic orchestras. This fairly recent change in the audition policies of major U.S. orchestras (e.g., candidates play behind a screen to hide their identity) has contributed to a higher probability of female musicians being hired, thus substantially boosting the proportion of women in symphonic orchestras (Goldin and Rouse 2000).

Deliberate Ignorance as a Cognitive-Sustainability and Information-Management Device

In 2008, the average American was estimated to consume 100,500 words and 34 gigabytes of information per day (Bohn and Short 2009). Though vast, this amount is small compared with what they might theoretically have consumed (Hilbert and López 2011). With the arrival of technologies and data-collecting devices such as predictive genetic testing, self-tracking devices that measure, for instance, the number of bites per meal, ubiquitous computing, the Internet of Things, and myriad social media (e.g., Facebook, Twitter, WhatsApp), modern societies have entered a brave new world. Depending on one's perspective, it is either a paradise or a nether world where people drown in intractable amounts of information. In this new world, countless actors (e.g., companies, advertisers, media, and policy makers) seek to colonize and appropriate people's attention. There is a risk that "hyperpalatable mental stimuli" designed to capture limited attentional resources will hijack the human mind, which evolved in a different information ecology (Crawford 2015). By the same token, obesogenic environments now brim with inexpensive, convenient food products engineered to take consumers to their bliss point (i.e., the concentration of sugar or fat or salt at which sensory pleasure is maximized). Evolved to crave such hyperpalatable food, consumers risk losing control over what and how much they eat (Moss 2013). Just as food engineers have become masters at hitting people's physical bliss points, the (social) media and Internet companies have become experts in designing mental stimuli that commandeer people's attention: The Internet now hosts some 700–800 million individual porn pages alone (The Economist 2015). "Stimulation begets a need for more stimulation" (Crawford 2015:17) and distractibility may be the mental equivalent of obesity. In an informationally fattening environment, citizens risk losing control over how they allocate their attention.

Alarm about information overload is arguably as old as the concept of information itself (Bell 2010). Nevertheless, attending to a piece of information does exact opportunity costs: the choice to know one fact invariably implies not knowing other facts. For humans, who are hardwired to monitor their environment, the ability to allocate one's limited attentional resources reasonably is becoming increasingly valuable in today's world. Indeed, the ability to select a few valuable pieces of information and deliberately ignore others may become a core cultural competence to be taught in school like reading and writing: "[A]n ability to ignore things would seem to remain important to the lifelong task of carving out and maintaining a space for rational agency for oneself, against the flux of environmental stimuli" (Crawford 2015).

We conclude this classification of types and functions of deliberate ignorance with a few observations:

1. Deliberate ignorance does not appear to be as peculiar a phenomenon as the cultural narrative about the insatiable human appetite for knowledge suggests.
2. In some domains (e.g., legal theory and practice), deliberate ignorance is constitutive and pervasive.
3. The present taxonomy is provisional and partial; other functions (e.g., blind charity or choosing to be ignorant about what is bad in other people) may be added once their essence is better understood (Driver 2001).
4. Most types and functions of deliberate ignorance are genuinely social phenomena (Hertwig et al. 2013).
5. In the age of information deluge, even informavores may appreciate deliberate ignorance as a way to maintain agency.

When Is Deliberate Ignorance a Good Thing?

Our taxonomy is descriptive. What about the normative perspective? Is deliberately ignoring information desirable for the individual and for society? By what normative standards is ignoring information to be assessed?


Approaching these questions from a consequentialist perspective, one must identify and compare all foreseeable consequences of acquiring versus neglecting information: for the decision maker as well as for all others (potentially) affected by their choice. Take, for instance, health information. Although some researchers stress the individual and social harm of ignoring health information (Case et al. 2005; Sweeny et al. 2010), others emphasize the protective benefits of doing so (Shani et al. 2012; Yaniv et al. 2004). The balance between costs and benefits may depend on various subjective concerns and objective facts. One important variable is whether any action can be taken in response to the information obtained.

To illustrate, let us return to James Watson, who declined information on his genetic predisposition to late-onset Alzheimer disease, which was thought to have claimed the life of his grandmother (Nyholt et al. 2009). Watson perhaps thought that any benefits of knowing would be undone by the lack of medical treatment or cure available (Wheeler et al. 2008). Alternatively, he may have wanted to spare himself the dread of waiting for the onset of symptoms (Berns 2006). Is the choice not to know irrational or ethically dubious? Some researchers have suggested that individuals have a right not to know in the context of genetic predictive testing, and various international conventions have recognized this right (Wehling 2015a). Others have argued that ignorance undermines self-governance (Bortolotti 2012; Harris and Keywood 2001).

When ignoring information exposes others to risk (or imminent harm), Mill's harm principle may be invoked (Brink 2014). Not picking up one's HIV test results may put future sexual partners or an unborn child at risk, because if the disease is treated, it is far less likely to be transmitted. A hard-nosed welfare theorist would simply sum up the utilities of all possible consequences and—akin to the notion of "efficient breach of contract" (Cooter and Ulen 2008:262–268)—entertain the notion of "efficient ignorance": Provided the (expected) damage to the victim is smaller than the (present) gain for the person ignoring the information, society should approve of ignorance (this rule is restated formally at the end of this subsection). It could do so, for instance, by exempting individuals who forego the opportunity to acquire that information from liability. Most non-economists, however, find the concept of "efficient breach" repugnant (Lewinsohn-Zamir 2012). They are likely to see efficient ignorance in the same light, especially when the commodity in question is life and limb. A distinction that is key to Mill's harm principle—that between consensual and nonconsensual harm—would also be a nonissue for the same adamant welfare theorist. Returning to our example of the unclaimed HIV test result, deliberate ignorance may cause consensual harm (to a consenting sexual partner aware of the risk) or nonconsensual harm (Brink 2014). The welfare theorist would reason that a consenting individual has done so either because that person is indifferent to the risk or the individual has consented by receiving compensation (sex, to continue our example). Again, most people would part company with this argument, though they might accept truly voluntary consent as a justification for not claiming an HIV test result.

In other cases, the welfare balance seems straightforward. If there is a risk of liability, an individual may wish to forgo information that institutions (e.g., employers, courts) or society at large will want to be known. The opposite may be true in jury decision making. An individual juror may be curious (Loewenstein 1994) or expect some private reward for finding out specific information (Kang et al. 2009). Society, however, wants courts to be impartial and therefore enforces ignorance (e.g., by barring character evidence³).

If the information to be deliberately ignored is unsolicited, the normative question shifts from the legitimacy of not acquiring or using available information to the right to protect oneself against information intrusion. Many diagnostic tests inevitably produce surplus medical information that "more often than not, would have been left undiscovered" because the abnormality would not have bothered the patient during her lifetime. The problem is that once, say, a microcarcinoma has been discovered, it "cannot easily be ignored" (Volk and Ubel 2011:487), either by worried patients or by doctors faced with a litigious environment. More generally, in a medical environment that encourages excessive, often ineffective, and sometimes harmful medical care (Welch 2015), a right not to know may, paradoxically, be a fundamental right of the fully informed patient. Pondering the decision (not) to know before the information is available puts people in a double bind: They have to work out how much they want to know a piece of information before knowing what it conveys (Rosenbaum 2015). Once the information is known, the choice to ignore it may, for psychological as well as institutional reasons, be very difficult.

The normative assessment of instances of deliberate ignorance is even more complex when the decision (not) to seek or use knowledge is taken on behalf of someone else. As an example, consider predictive genetic testing in childhood (Bloch and Hayden 1990), when one person's right (desire) to know clashes with another's right (desire) not to know. For instance, a mother may not want to know who adopted her child, but the adopted child may want to know who her biological mother is.

To conclude, there is no ready-made answer to the question of when deliberate ignorance is beneficial, rational, or ethically appropriate. Each class of instances must be assessed on its own merits. As we will see shortly, several variants of strategic ignorance can be modeled as the rational behavior of a utility-maximizing agent. A rational (Bayesian) agent may even pay money not to see cost-free information, counter to Good's (1967) advice (see also Kadane et al. 2008; Pedersen and Wheeler 2013), and institutional arrangements (e.g., in the courtroom) may enforce deliberate ignorance in the service of impartiality. Of course, there is also a sinister side to deliberate ignorance, such as when it is used to evade responsibility, escape liability, or defend anti-intellectualism.

³ https://www.law.cornell.edu/rules/fre/rule_404 (accessed Jan. 13, 2020).
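The "efficient ignorance" rule invoked above by the hard-nosed welfare theorist reduces to a one-line decision criterion. The notation is ours, for illustration: let g be the ignorant actor's (present) gain and D the (random) damage to the victim. The theorist then approves of deliberate ignorance exactly when

    \text{approve deliberate ignorance} \iff g > \mathbb{E}[D]

The objections the text reports are objections to this very summation: that g and D may not be commensurable when life and limb are at stake, and that the rule is blind to whether the harm is consensual or nonconsensual.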


Finally, let us emphasize that the normative benchmark for the ethics of deliberate ignorance need not be utilitarian or consequentialist. Arguments extolling the desirability of (more) knowledge appear so intuitively persuasive because they invoke a very different normative ideal. Ever since the Enlightenment, knowledge has not only had instrumental but also moral value. Our understanding is that deliberate ignorance is not per se rational or irrational, ethical or unethical. Instead, deliberate ignorance is a cognitive tool whose success—measured in terms of individual or collective welfare—requires renewed analysis of both the actor and the environment (Arkes et al. 2016; Todd et al. 2012). Such an analysis of the ecological rationality of deliberate ignorance may also add a new dimension to the motto of the Enlightenment, Kant's (1784) sapere aude: dare to use your own reason. The struggle for personal freedom and self-determination requires emancipation through knowledge and the courage to use one's own reason. In a world in which knowledge (information) is not unconditionally advantageous, however, using one's own reason can also mean choosing not to know.

Research on the psychology of deliberate ignorance is in its infancy. Our objective in the first part of this article was to demonstrate that it is an endeavor worth pursuing and to offer a taxonomy or initial structure to categorize the dazzling variety of cases of deliberate ignorance. In addition, we sought to complement the is with a discussion of the ought: How ought one think about individuals' choosing not to acquire information, even though that information is available? Our treatment is but a first step; many more are necessary.

Building a Theory

Since deliberately ignoring information involves choice, choice theory in economics appears to offer an encompassing theoretical framework for deliberate ignorance. Canonical economic models take preferences as given and aim to explain choices by properties of the opportunity structure (see Trimmer et al., this volume). Furthermore, economic agents are assumed to optimize (i.e., to act as if they weigh marginal cost against marginal benefit). Yet if this framework is to be adopted for deliberate ignorance, it needs to specify all expected benefits from (not) acquiring information as well as all expected costs.

What role does information play in an economic framework? According to the classic economics-of-information perspective, individuals derive utility not from information per se but from its potential material consequences (Stigler 1961). Recent findings, however, have led to a different view: beliefs and information, the timing of information, and even its avoidance can be a source of pleasure and pain (Berns 2006; Grant et al. 1998; Karlsson et al. 2009; Kreps and Porteus 1978). Furthermore, the utility that individuals derive from an outcome may depend on their anticipatory feelings (e.g., anxiety,
hope) about it (i.e., anticipatory utility; Eliaz and Spiegler 2006; Loewenstein 1987) or the anticipated emotional responses (e.g., disappointment) to information (e.g., bad news; Fels 2015). This might help explain individual time preferences (e.g., someone may wish to bring forward an unpleasant experience to shorten the period of dread, but delay a pleasant experience to savor the anticipation of it). An economic framework accommodates individual-specific aspects of the decision maker that may shape the choice (not) to know. These include the individual's attitude to risk (the prospect of obtaining a piece of information can be seen as equivalent to entering a risky gamble for an anticipated payoff), the individual's degree of patience, and the individual's anticipation of strategic actions taken by other interested actors. Moreover, it accommodates environment-specific aspects, such as the availability of an effective cure (Fels 2015; Hilbe and Schmid, this volume).
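To make the role of anticipatory utility concrete, here is a minimal numerical sketch (our illustration, not a model taken from the literature cited above). An agent can take a free, perfectly accurate test for an untreatable condition; because no cure exists, knowing changes nothing materially, and only anticipatory feelings ("dread") matter. If dread grows convexly with the believed probability of the bad state, remaining ignorant dominates testing, and a rational agent would even pay to avoid the cost-free information. The quadratic dread function and all parameter values are illustrative assumptions.

    def dread(belief, k=10.0, alpha=2.0):
        """Anticipatory disutility as a convex (alpha > 1) function of the
        current belief that one carries an untreatable condition.
        Functional form and parameters are illustrative assumptions."""
        return k * belief ** alpha

    def extra_dread_from_testing(prior=0.1):
        """Expected dread added by a free, perfectly accurate test that
        has no material consequences (no cure exists)."""
        dread_if_ignorant = dread(prior)  # untested: belief stays at the prior
        # Tested: belief jumps to 1 with probability `prior`, else drops to 0.
        dread_if_tested = prior * dread(1.0) + (1 - prior) * dread(0.0)
        return dread_if_tested - dread_if_ignorant

    # Prints approximately 0.9: the agent would pay up to 0.9 "utils"
    # to avoid the cost-free test.
    print(extra_dread_from_testing())

With a linear dread function (alpha = 1) the two options tie, because the expected posterior belief equals the prior; the preference for ignorance in this sketch is driven entirely by the disproportionate dread of certainty.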

Despite the obvious advantages of an economic framework, we do not believe that it is sufficient to explain and predict deliberate ignorance, for the following reasons:

• It depicts humans as "superrational" beings who swiftly (marginally) respond to subtle changes in the opportunity structure according to preferences that are assumed to be consistent across time.
• It describes the choice (not) to know in terms of the maximization of some kind of expected utility, yet without theories of what individuals care about in specific domains of life, it is hard to predict what utility a person aims to maximize.
• It does not account for inaccurate anticipation of the costs and benefits of a person's choice (not) to know.
• It involves complex estimations and computations; commonly interpreted to be an as-if model, it models behavioral outcomes, not the actual cognitive, affective, or motivational processes.

Deviating radically from this approach is the thesis that individuals, unable to implement complex processes, rely instead on heuristics. One reason to posit that at least some types of deliberate ignorance are best understood in terms of heuristics is the observed impact of emotions (Schooler, this volume; Suter et al. 2015, 2016). In affect-rich contexts, one or a few top-ranked reasons, concerns, or motives—rather than an extensive (compensatory) cost-benefit calculus—may determine the choice to know or not to know. Would the use of a heuristic process rather than expected utility maximization render the choice of deliberate ignorance irrational? Some researchers have conceptualized the heuristics that people use as error prone (e.g., Kahneman 2011). Others hold that even if people could implement a complex utility-maximization calculus, they often prefer to use heuristics to save mental effort, at the price of sacrificing some accuracy (e.g., Payne et al. 1993). Still another view suggests that heuristic processing of reasons, concerns, and motives can
result in choices that are adaptive and ecologically rational (Gigerenzer et al. 2011). To evaluate acts of deliberate ignorance as advantageous or disadvantageous, it is necessary to examine how instrumental those acts are in achieving the person's functional goals, rather than evaluating whether they rely on a utility-maximization calculus and its exacting assumptions.

As we seek to theorize the phenomenon of deliberate ignorance, it is important to look at potential parallels between deliberate ignorance and forgetting (see Schooler as well as Trimmer et al., this volume). Forgetting is the process through which previously encoded information is discarded, and it is integral to the efficacy of human memory. Forgetting supports decision processes, for instance, by boosting the accuracy of inference heuristics (Schooler and Hertwig 2005; see the sketch at the end of this section), and serves key adaptive functions, including emotion regulation through the selective forgetting of negative memories at the moment of both encoding and retrieval (see Nørby 2015). Are the adaptive functions of forgetting different from (some of) the functions of deliberate ignorance? We are not aware of an encompassing memory theory that could generate all adaptive functions of memory loss (Nørby 2015), nor are we aware of any encompassing theory of deliberate ignorance that could generate its various functions.

As a theory is constructed, specific hypotheses will need to be generated. These hypotheses, in turn, can be tested using (a) survey data to measure the prevalence of and preferences for deliberate ignorance, (b) experiments to establish the reality of specific types of deliberate ignorance, and (c) field data. All three approaches may enrich the scientific community's knowledge of the personality dimensions (e.g., risk and moral attitudes, curiosity, sensitizer vs. repressor coping styles, aspiration levels) and environmental factors (e.g., availability of medical treatment) that predict people's information preferences. For instance, age appears to be a key factor (Hertwig et al., submitted), as older people are more inclined to choose not to know. Deliberate ignorance may thus be a mental tool that older people use to prune negativity from their lives (Carstensen 2006; Carstensen et al. 1999).
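To illustrate the claim that forgetting can boost the accuracy of inference heuristics, here is a toy simulation loosely in the spirit of Schooler and Hertwig (2005); the environment, the exponential recognition function, and all parameter values are our illustrative assumptions, not their model. City size drives exposure (e.g., media mentions), exposure drives recognition, and heavier forgetting lowers the probability that a city remains recognized. The recognition heuristic then infers that the recognized city of a pair is the larger one. In this toy environment, accuracy tends to peak at a moderate forgetting rate: with too little forgetting everything is recognized and recognition carries no signal; with too much, nothing is.

    import math
    import random

    def recognition_accuracy(forgetting_rate, n_cities=50, n_pairs=5000, seed=1):
        """Accuracy of the recognition heuristic in a two-city comparison
        task for a given forgetting rate (toy setup, illustrative values)."""
        rng = random.Random(seed)
        sizes = [rng.lognormvariate(0.0, 1.0) for _ in range(n_cities)]
        # Recognition grows with size-driven exposure and shrinks as the
        # forgetting rate increases.
        recognized = [rng.random() < 1.0 - math.exp(-s / forgetting_rate)
                      for s in sizes]
        correct = 0
        for _ in range(n_pairs):
            a, b = rng.sample(range(n_cities), 2)
            if recognized[a] != recognized[b]:
                pick = a if recognized[a] else b  # recognition heuristic
            else:
                pick = rng.choice((a, b))         # both or neither: guess
            correct += sizes[pick] == max(sizes[a], sizes[b])
        return correct / n_pairs

    for rate in (0.05, 1.0, 50.0):  # little, moderate, heavy forgetting
        print(rate, recognition_accuracy(rate))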

Future Challenges: To Know or Not to Know?

Work on the psychology of deliberate ignorance is in its infancy, and thus it is premature to derive policy implications (see, however, Teichman et al. and Zamir and Yair, this volume). Some types of deliberate ignorance appear to have immediate prescriptive implications as an impartiality and fairness device (see MacCoun as well as Bierbrauer, this volume). For instance, if decision makers (e.g., jurors, hiring committees) agree that deliberations may be biased by certain information, then insulating themselves from this information is a reasonable course of action. A deliberate veil of ignorance may be a tool worth harnessing systematically across a wide range of institutional selection processes.


In addition, as suggested earlier, the ability to select certain bits of information while deliberately ignoring others might be crucial for maneuvering through our information-laden environment. If so, the building blocks of this competence, and how they could be taught to citizens of all ages, need to be studied. Reverse engineering may help us begin to understand the methods used by those who design information: How do they manage to get people hooked? What strategies are necessary to resist them and maintain the level of agency and autonomy that most people want and need?

The desire not to know is poorly understood and, in our view, not simply an "anomaly in human behavior" (Case et al. 2005:134). It is prevalent, and nuanced psychological theories are required to understand it. The phenomenon of deliberate ignorance also raises important questions. Answering these questions promises a deeper understanding of how people reckon with uncertainty and may, indeed, prefer it at times to certainty. We believe that the study of deliberate ignorance may become a new scientific frontier of great importance. If so, it would represent a promising opportunity for multiple disciplines to work together to examine the cognitive and emotional underpinnings, rationality, and ethics, as well as the sociocultural, institutional, and political implications of deliberate ignorance.

Acknowledgments

The original article benefited enormously from helpful comments by Gordon Brown, Dagmar Ellerbrock, Werner Güth, Yaakov Kareev, Alexander Koch, Joachim Krueger, Tomás Lejarraga, Georg Nöldeke, Arthur Paul Pedersen, and Jan K. Woike. We are grateful to Susannah Goss and Valerie Chase for editing the manuscript and to Katja Münz for conducting the literature search. This research was supported by Grant HE 2768/7-2 from the German Research Foundation (DFG) to Ralph Hertwig.

2 The Complex Dynamics of Deliberate Ignorance and the Desire to Know in Times of Transformation: The Case of Germany

Dagmar Ellerbrock and Ralph Hertwig

Abstract

Individuals and institutions in societies in transition face difficult questions: whether or not to seek, explore, and produce public knowledge about their harrowing past. Not disclosing painful truths can be a conduit to reconciliation, as in premodern memory politics, but it can also mask the past regime's perpetrators, benefactors, and victims, as highlighted in modern memory politics. Using the transformations of twentieth-century Germany as a case study, this chapter argues that deliberate ignorance has always been an element of memory politics, even in the twentieth-century approach to Vergangenheitsbewältigung (coming to terms with the past), with its emphasis on knowledge, remembrance, and disclosure. Profoundly dialectic in nature, deliberate ignorance can modulate the pace of change in periods of transition and preserve social cohesion, while simultaneously undermining personal trust and institutional confidence. Turning to individuals' decisions to read or not read the files compiled on them by East Germany's Ministry for State Security, it is argued that official memory politics and individuals' knowledge preferences need not concur. In the public records and in initial results of an empirical analysis of individuals' choice not to read their files, highly diverse and distinct reasons for deliberate ignorance have been observed.

Omnem memoriam discordiarum oblivione sempiterna delendam censui. [All recollection of civil discord should be buried in everlasting oblivion.]
—Cicero, Orations


Historical amnesia is a dangerous phenomenon not only because it undermines moral and intellectual integrity but also because it lays the groundwork for crimes that still lie ahead.
—Noam Chomsky (2016), Who Rules the World?

In Who Rules the World? Chomsky (2016) commented on the capacity of the U.S. public and politicians to forget about the "torture memos"—a set of legal memoranda drafted during the Bush administration that argued for the legal permissibility of enhanced interrogation techniques—and to largely ignore the new paradigm that took root: torture backed by the United States and executed by U.S. allies worldwide, a practice that continued under the Obama presidency. As Allan Nairn pointed out in a blog entry from January 24, 2009: "Obama could stop backing foreign forces that torture, but he has chosen not to do so…and even if, as Obama says, 'the United States will not torture,' it can still pay, train, equip and guide foreign torturers."¹ Chomsky identified this willful ignorance as the same capacity for historical amnesia at play in other "crimes" (Chomsky 2016:43), such as the U.S. invasions of Cuba, Puerto Rico, Hawaii, and Iraq, and U.S. colonial rule in the Philippines. The painful conflict between proclaimed values and actual behavior appears to be resolved by deliberately ignoring evidence that contradicts the United States' self-image of being "a nation of moral ideals" (Chomsky 2016:32).

Chomsky noted that deliberate ignorance has a price; as the oft-invoked principle states, "those who cannot remember the past are condemned to repeat it" (Santayana 2011). In a speech commemorating the fortieth anniversary of Nazi Germany's capitulation, former West German president Richard von Weizsäcker (1985:4) repeated the sentiment: "Whoever refuses to remember the inhumanity is prone to new risks of infection." Yet the intense debate following the speech was evidence that public opinion was actually deeply divided on how to balance remembrance and historical amnesia.

The historian Christian Meier (2010; see also Rieff 2017), however, offered a more differentiated perspective on the role and function of historical amnesia. In his view, forgetting atrocities in the wake of war and repressing memories and knowledge can be a conduit to reconciliation.²

¹ https://www.allannairn.org/2009/01/torture-ban-that-doesnt-ban-torture.html (accessed Jan. 16, 2020).
² One may ask whether the term "forgetting" is appropriate. In the title of his book, Das Gebot zu vergessen, Meier (2010) speaks about the imperative to forget. Technically speaking, forgetting is the apparent loss or modification of information already encoded and stored in an individual's long-term memory. Therefore, what Meier seems to have in mind is a consensus by those in power to ignore the crimes of the past, neither examining nor prosecuting them (with the exception of some emblematic figures), and thus neither identifying nor punishing the bulk of the perpetrators, let alone the followers. Functionally, it is as if the people in power have decided deliberately to ignore the past (Hertwig and Engel, this volume, 2016), even if individuals' memories persist. Our use of "forgetting" in this chapter follows this definition. For further discussion on the relationship between forgetting and deliberate ignorance from a psychological perspective, see Schooler (this volume); for concepts and mechanisms of forgetting in the fields of history, sociology, and memory studies, see Connerton (2008), Dimbath and Wehling (2011), Plate (2015), Ricoeur (2006), and Rieff (2017).


Knowledge of a harrowing past may perpetuate a destructive cycle of hatred and revenge; in contrast, the deliberate choice to not remember can put an end to conflict. Meier listed historical instances of this function of deliberate ignorance, from Cicero's (1913) plea for "everlasting oblivion" just two days after Caesar's assassination to the Peace of Westphalia, which ended the Thirty Years' War and referred to oblivio and amnestia—forgive and forget—in its introductory articles. According to Meier, collective forgetting and the political choice to not seek, explore, or produce public knowledge about a painful past are essential for managing the transition of power and social cohesion. Here we examine whether this argument may hold not only for collectives, institutions, and governments but also for individuals. Specifically, we examine both collective and individual deliberate ignorance (Hertwig and Engel, this volume, 2016) in transitional societies, where the need to navigate between remembrance and deliberate ignorance is most pressing.

The Simultaneity of Forgetting and Remembrance

Many past societies, from antiquity to the modern age, relied primarily on ignorance and forgetting in times of societal and political transitions (Meier 2010). In the wake of the French Revolution, however, a new priority that valued knowledge over ignorance began to emerge. With it, the codification of human rights slowly gained momentum, and their violation—whether past or present—became a crime to be prosecuted and remembered rather than forgotten. Centuries later, the horrors of the Holocaust intensified the emphasis on remembrance; this new memory model began to guide and shape collective memory, in particular in Germany (Assmann 2016b; Erll 2011; Minow 1998; Roth 2011; Tismaneanu and Iacob 2015). Instead of promoting the act of forgetting and concealing the sins of the past, disclosure was required to identify the previous regime's perpetrators, followers, and benefactors, as well as to honor its victims. Human rights and transitional justice became the guiding principles of twentieth-century memory politics (Buckley-Zistel and Schäfer 2014).

The shift toward the modern memory policy of knowledge and remembrance has partly obscured the role of deliberate ignorance. Although historians and sociologists have recently been concerned with forgetting (Assmann 2016a; Connerton 2008; Dimbath and Wehling 2011) and amnesia (Plate 2015), their work emphasized the link to traumatic experiences (Bar-On 1993; Duranti 2013; Marcowitz and Paravicini 2009; Winter 2016) and conceptualized silence about past experiences primarily as a deficiency, although some sociologists have started to examine the value of non-knowledge or Nichtwissen (e.g., Gross and McGoey 2015; Wehling 2015b). The notion that silence, defined by
Winter (2010:3) as acts of "non-speech," is a conscious and productive activity (Assmann 2008) is still mostly novel and unfamiliar.

Since the end of World War II, Germany's official memory policy has prioritized knowledge and recognition in the management of collective memory. Yet this does not mean that deliberate ignorance ceased to exist. Clearly, there are profound differences between premodern and modern memory politics, as described by Meier (2010). We will argue, however, that deliberate ignorance has always been, and still is, part of memory politics: it manifests differently in diverse countries, depending on social and political conditions (Jarausch 2008; Kührer-Wielach and Nowotnick 2018), and occurs on the collective level, the individual level, or both. We propose that the premodern and modern models of coming to terms with a painful past have more in common than has thus far been recognized. Forgetting and deliberate ignorance are still important tools for stabilizing social order in times of transformation. While the memory politics that emerged in the twentieth century clearly reversed the premodern priorities, putting knowledge and remembrance before ignorance and forgetting, the latter remain relevant, in particular on the individual level. Importantly, the memory practices of individuals are not necessarily governed by the normative power of the collective memory model.

Here we explore deliberate ignorance in periods of transition on both the collective and the individual level. As we will demonstrate, individuals practice deliberate ignorance in times of transition, and not infrequently. Using the choice to not look up one's Stasi files (i.e., files collected by East Germany's Ministry for State Security) as a paradigmatic case, we aim to shed light on individual motives for deliberate ignorance. We demonstrate how collective preferences for information or ignorance can coincide with or diverge from individual preferences and how they can change according to political circumstances. After some introductory comments about the interdependency of knowledge and power, we turn to the dynamics of disclosure and deliberate ignorance in times of power change.

Knowledge, Secrecy, and Power

In the face of past misdeeds, the tension between memory and historical amnesia—forgetting or, more precisely, deliberate ignorance (see footnote 2)—is not about producing a veridical record of the past for future generations. It is about power, human rights, and identity. As Foucault (1972) argued, the production of knowledge and ignorance is directly linked to power and the lack thereof. The dynamic relationship between knowledge and power is understood as the struggle over claims of truthfulness, which invoke their own norms and habits, discursive structures, actors, organizations, and sciences (Haugaard 1997). Power and knowledge, as well as ignorance, are intertwined
in a productive and constitutive relationship. Rulers know that power cannot be executed without knowledge—census data, mortality tables, tax data, and the like are crucial to running an effective public administration—and conquerors have understood that information is essential for dominating a territory. Since the twentieth century, Western societies have defined themselves as knowledge societies, where knowledge is essential for social organization and productivity (Beck et al. 1996; Gibbons et al. 2006). At the same time, the lack of knowledge—ignorance, silence, and secrets—has proved to be important for stabilizing political and social order. For instance, secrets were essential to creating legitimacy in the early modern period, when individuals believed the world was created and ruled by divine power. By concealing the circumstances of their decisions, monarchs cultivated a special aura that set them apart from ordinary people and made them seem more like unknowable gods (Gestrich 1994).

The complementary relationship of knowledge, ignorance, deliberate ignorance, and even the systematic production of ignorance (Proctor and Schiebinger 2008) is perhaps most exposed in transitional societies seeking to first disrupt and then stabilize the social and political order. Yet the interplay of knowledge and deliberate ignorance has been neither understood nor researched in this context. Confronting the seeming antagonism of memory and knowledge versus forgetting and silence, recent research in memory studies has stressed "the risk of a binary approach" (Dessingué and Winter 2016:1) and conjectured that the intersection of these phenomena and processes is complex and highly dialogical. In the same vein, we argue that deliberate ignorance is dialectic, dynamic, and complex, and is interwoven with memory and power.

Ousting a Knowledge Regime in Political Transformation

Power requires continuous legitimization. The interaction between knowledge and ignorance is a basic instrument not only for exercising power but also for legitimizing authority. This is why a new regime of knowledge and ignorance must be installed during and after all revolutions. Establishing this new regime requires two complementary and simultaneous processes: (a) driving out the old knowledge regime by breaking its rules, uncovering its secrets, destroying its information, abolishing its symbols, and forgetting its traditions, while (b) introducing a new regime of knowledge by defining distinct norms, establishing new experts, collecting different data, establishing founding narratives, and introducing fresh rituals. The "invention of tradition" (Hobsbawm 1983) and the "silencing of the past" (Trouillot 1995) always go hand in hand. These shifts must be implemented on both collective and individual levels. This is not an easy task; power struggles can make reorganizing knowledge regimes brutally violent, and the expansive nature of knowledge and ignorance makes installing new knowledge regimes painful and confusing (an issue to which we return shortly in the context of the end of the German
Democratic Republic). Establishing new regimes of knowledge turns the world upside down: what was right and respected yesterday is a source of shame and disgrace today, and once-precious information becomes useless or even incriminating. Clashes over who has the prerogative of interpretation are often merciless, since only the winners can consolidate their power.

History offers many examples of the grim establishment of new knowledge regimes: the burning of tax lists during the French Revolution; iconoclasm in the Reformation, the English Civil War, and the Bolshevik and Chinese revolutions; and the public naming and shaming of collaborators in the wake of numerous historical transitions. These and many other events imply that accusation, shaming, and degradation, instigated as top-down policies but also spontaneously and bottom up, are central elements of the rite of passage into a new knowledge regime. It was never enough to kill the king: he had to be stripped of his symbols of power, be ridiculed in front of his people, and meet his end before a raucous crowd. These rituals of degradation have been crucial in completely delegitimizing old orders, institutionalizing a new regime of knowledge, and transforming individuals' identities (Garfinkel 1956).

Accusation and shaming serve a dialectical goal: social exclusion of the old elites and sympathizers of the toppled regime, and social integration for all who are willing to be ashamed of the old norms and practices and follow new ones. Criminology has long shown that shaming procedures can exert a dual effect by simultaneously stigmatizing and integrating (Braithwaite 1992). This dialectical quality makes shaming practices a powerful tool wielded by revolutionary and reformative movements. Shaming delegitimizes the old social order and brings to light its social or moral corruption while simultaneously helping to establish a new regime of knowledge and ignorance; this is an effective way to institutionalize a change of power while modulating the continuity of social interaction. Because shaming can both destroy old knowledge regimes and help establish new ones, it is an important tool for revolutionary and reformative movements (Jacquet 2015).

Deliberate ignorance, a tool that can balance shaming practices, has a similarly dialectical nature. Where shaming produces confusion and pain, deliberate ignorance can offer clarity and relief. It plays an important role in social and political upheaval: it shields people from the need for shaming procedures and produces stability by balancing the ruptures of transformation and disclosure. Deliberate ignorance fosters continuity in the face of fundamental change.

The History of Deliberate Ignorance: Transformations in Twentieth- and Twenty-First-Century Germany

Sweeping away an old political order while simultaneously establishing a new one is the most pressing challenge faced by societies in disruptive
transformation, especially in the aftermath of civil wars, oppressive regimes, mass atrocities, or a violent coup d'état. Reckoning with violent pasts is a challenge that has been faced around the world: in postcommunist European countries, in postmilitary dictatorships in South America, in postapartheid South Africa, in the Catholic Church (after revelations of sexual abuse), in postcolonial societies facing their treatment of indigenous populations (Assmann and Conrad 2010), and, of course, in Germany.

In the twentieth century, Germany underwent several profound regime changes, two of which are particularly important in the context of deliberate ignorance and the collective and individual negotiations of knowledge and ignorance. The first, the end of the Nazi dictatorship in 1945, left Germany devastated, defeated, and divided. The second, the end of the repressive regime of East Germany in 1989, led to the fall of the Berlin Wall, the end of the Cold War, and German reunification. Both transitions marked a political change of power and, due to the violent nature of the previous regimes, were deeply linked to issues of justice, human rights, and reconciliation. Germany developed a very specific way of addressing its history: Vergangenheitsbewältigung, or coming to terms with (or even overcoming) the past (Adorno 1977). We will use the case of Germany to explore the complex, dialectical, and productive role of deliberate ignorance in transformational societies. To this end, we distinguish four historical periods (each of which featured a different blend of knowledge and ignorance) and their distinct memory policies.

The Premodern Priority of Ignorance

Challenging the modern orthodoxy of remembrance and enlightenment (Aufklärung) when facing a nation's grim past, Meier (2010) argued that forms of institutionalized forgetting are key to establishing and maintaining a new social contract that allows perpetrators, followers, and victims of the old system to coexist. Although the state has only limited influence over whether and how individuals remember or forget, it can suppress public remembrance and pass laws to forestall or punish public discourse that would open old wounds. Such laws were frequently passed to end civil wars. Meier noted that while there are several historical cases where legislation imposing a veil of ignorance on the individual indeed appears to have promoted political and social integration, Germany's individual and publicly fabricated remembrance of World War I is an important illustration of the risks of the inability to forget.

Following World War I, Germany's first democracy, the Weimar Republic, was established. It failed for numerous reasons. In terms of knowledge management, these included the intense collective and individual remembrance of German suffering (including high reparations); the strong sense of injustice and moral blame (Article 231 of the Treaty of Versailles, the so-called "guilt clause," which ruled that Germany was responsible for the conflict); the ongoing admiration of German war heroes such as Paul von Hindenburg, who was elected as the
Weimar Republic's second president despite his professed monarchism and hostility toward the democratic approach; and the invention of false memories claiming, for instance, that the German army had been compromised by Jews, socialists, or Bolsheviks, thus causing the country to lose the war (Vascik and Sadler 2016). In Meier's view, if the conditions of peace do not allow the collective and individual memory to rest, the risk of revenge and revolution will persist.

Post-World War II: Concurrent Practices of Deliberate Ignorance

After World War II, the Allied powers tried to inhibit this destructive blend of ignorance and fabricated memories. In light of numerous war atrocities and a shocking genocide, the Allies instigated the legal prosecution of crimes against humanity. The court trials held after 1945 served a dual purpose: to punish the Nazi elite and to initiate research and education concerning the Nazi dictatorship. These measures were accompanied by media campaigns displaying disturbing images of the atrocities that had been committed (Weckel 2016). In shaming Germans about their complicity, the Allies aimed to delegitimize the Nazi regime and create support for the new German state.

For a short period—from the end of the war until the new German government settled into power—Allied occupation policy followed a rigid regime of knowledge and enlightenment. This initial period of denazification, however, came to a swift end. In pursuit of their strategic goals—to overcome past hatred, build a foundation for a new Europe, and form new Cold War alliances (including with Germany)—the Allies eventually supported a policy of not wanting to know: of amnesty, silence, and repression (Mitscherlich and Mitscherlich 2007; Niethammer 1982). In fact, German memory policy in the 1950s and early 1960s followed Meier's (2010) description of the knowledge regime following the Peloponnesian War. Like ancient Athens, West Germany put its elite on trial and, for the sake of social peace, remained silent about citizens' suffering, complicity, and responsibility. This collectively sanctioned policy of partial deliberate ignorance was largely practiced at all levels of politics and society for approximately three decades, permitting many former members of the Nazi party and high officials of the Nazi regime to occupy powerful positions in law, medicine, academia, the military, state intelligence services, and politics.

The 1968 Student Movement: A New Interplay of Public Knowledge, Exposure, Recrimination, and Private Deliberate Ignorance

The balance of knowledge and ignorance was adjusted once again during the radical social and political change of 1968. As in the United States and other European countries, left-leaning students in West Germany took to the streets in the late 1960s. It has been argued that Germany's student movement
was different from its counterparts across Europe. This generational conflict coalesced around the unique historical guilt of the Holocaust, namely, the complicity of protesters' parents in the crimes of the Nazis and their subsequent conspiracy of silence (Gassert and Steinweis 2007; Kundnani 2009). The German movement initiated a complete restructuring of the knowledge regime and by the late 1970s had grown into a West German discourse on the collective memory of the Nazi past that embraced knowledge, enlightenment, and education, and that canonized the moral duty to remember while "demonizing forgetting" (Fuchs 2006). Continuing into the 1980s and 1990s, the ongoing memory boom produced a vast body of knowledge regarding Germany's Nazi past and further integrated it into the public discourse across all levels of society (Assmann and Frevert 1999).

Yet the veil of ignorance was not completely lifted. Recent research (von Hodenberg 2018) suggests that the desire to know who was involved in designing and executing Nazi policies and crimes was primarily focused on the public sphere. Prominent West German politicians, journalists, and judges with a Nazi past were identified and stripped of their office and social status; less conspicuous collaborators, however, remained for the most part untouched, enjoying pension payments from the West German government at the same time that victims of Nazism were fighting for financial compensation. As von Hodenberg (2018) argued, revealing Nazi collaborators was not a goal in itself, but rather was used to discredit illiberal professors and politicians. Research into the Nazi histories of well-liked liberal figures was sloppy or simply nonexistent, while less prominent individuals were accused, sometimes baselessly, of having Nazi pasts.

Moreover, public debate coincided with silence in the private sphere (Bar-On 1993; von Hodenberg 2018; Welzer et al. 2002). Although postwar generations have known that their grandparents' and parents' generations must have contained Nazis and perpetrators, they have used various techniques (e.g., reframing, forgetting, blanks) to avoid acknowledging the involvement of their own relatives. In a striking self-analysis, the distinguished journalist Cordt Schnibben vividly described his own struggle with coming to terms with his parents' Nazi convictions and deeds (Schnibben 2014)—his father was a member of Operation Werewolf and played an active role in the murder of a local Nazi opponent. Describing his thoughts and emotions when, years after the death of his father, he finally found court files and letters revealing his parents' complicity, Schnibben wrote: "I have needed more than 10 years to find these boxes [of court files]. Because for a long time, I was not certain whether I wanted to find them" (Schnibben 2014). Coping with disturbing family histories by denying them or sweeping them aside has also been documented in South American societies (Frei 2018). More generally, closing one's eyes to disturbing traces of a loved one's past and facing them only after their death indicates that, in the context of political and social transformation, the practice of deliberate ignorance is likely to be shaped by generational experiences (Burnett 2010; Mannheim 1952).


Other key determinants of deliberate ignorance may be age, gender, race, and religion. Consider, for illustration, the contemporary debates and conflicting views in the context of the #MeToo movement, of revelations about physical abuse of indigenous children in residential schools, or of sexual abuse in religious institutions.

The Shaming Power of Public Knowledge and the Role of Deliberate Ignorance after the East German Revolution (1989)

The collapse of the German Democratic Republic (GDR) was a peaceful revolution. As in other revolutions, establishing a new regime of knowledge was vital. The reference point and catalyst of the emerging protest was the Stasi, the GDR's ministry for state security, and its vast collection of files. The public debate and the struggle between knowledge and ignorance quickly focused on the Stasi because it was perceived as a "massive machinery of observation and control" (Fulbrook 1995:54) and the cornerstone of the East German dictatorship. East German civil rights activists demanded that the Stasi files be opened in order to expose how the system of repression functioned and to identify collaborators who had violated the basic rights and well-being of their fellow citizens. East German citizens occupied Stasi offices in East Berlin, Leipzig, and other cities, chanting slogans such as meine Akte gehört mir (my file belongs to me). Clearly, disclosing how the Stasi had operated was a powerful symbol of the transfer of power.

Yet even though opening up the files was the protesters' key demand, elements of deliberate ignorance were present from the very beginning, finely measured according to the desired speed of change. For instance, having helped to rescue the files from destruction (by the outgoing GDR regime), civil rights activists supported the decision to destroy the central electronic file index and the accompanying software in February 1990 (Schumann 1997). In March 1990, a committee of civil rights activists and GDR government representatives agreed that all files containing personal information should be destroyed in the near future—an agreement that had nothing to do with the end of the GDR (Gill and Schröter 1991). During these early days of transformation, it was still unclear how the information in the files should be used. Only after civil rights groups discovered that some members of the Volkskammer (East Germany's first and only freely elected parliament) had cooperated with the Stasi did one possible use emerge: mandatory screening of parliamentarians. Yet civil rights groups remained undecided as to whether the screening results should be published or kept confidential, and even some protesters, fearing a lynch-mob mentality, called for amnesty (Der Spiegel 1990; Schumann 1997:17). In this fast-paced process of political transformation, deliberate ignorance stabilized individual identity as well as social cohesion and slowed the pace of social disruption and political change.

The debate as to whether the files should be made public, used for limited and well-defined purposes in a confidential setting, or destroyed without
being read preoccupied and divided East Germany and was disputed in its parliament. Some parliamentarians believed opening the files was necessary to establish public trust in the government; others were afraid that the kind of information that would come to light would poison the whole country. Being suspected and named as an "IM" (informeller Mitarbeiter, or informal collaborator) of the Stasi became deeply disgraceful. "If names are mentioned here in public, you can also give people a rope around their necks," declared Ralph Geisthardt, a Christian Democratic parliamentarian (Schumann 1997:16). His concern offers a way to explain why IMs might not want to read their files: to escape shame, to avoid being confronted with their own wrongdoing, and to continue to gloss over the rift between what they stood for and what they did.

In these times of rapid change, full and fast disclosure presented a risk—even to prominent civil rights activists³—and produced a situation of "turmoil and mistrust during the formative period of democracy" (Marshall 1992). Deliberate ignorance helped people navigate between knowledge and silence, offering the flexibility required to adjust to a profoundly novel situation. For instance, when left with no choice but to work with the same colleagues and to live next to the same neighbors, not finding out whether they had been feeding information to the Stasi may have been the veil of ignorance shielding one's proximate world from turmoil and total mistrust. This was true for both collective and individual needs.

In the summer of 1990, a compromise between full disclosure and complete ignorance seemed to be the most appropriate option. GDR Home Secretary Peter-Michael Diestel wanted to use the files to rehabilitate victims and punish perpetrators, which, according to his estimation, would take six to nine months to complete; afterward, he wanted to destroy the files to allow the country to heal. In July 1990, the Volkskammer discussed a bill concerning the use of Stasi files ("Law on the use and security of personal data of the former MfS/AfNS"). The bill gave victims and state agencies access to the files to monitor their rehabilitative history and punish perpetrators, respectively. It also limited the right of access to a period of one year and stipulated that the files would then be destroyed. On August 24, 1990, the Volkskammer passed a bill that banned the destruction of Stasi files; the files were secured in special archives and personal access to them was denied.⁴ After an emotional debate in September 1990, the Volkskammer decided to implement a mandatory Stasi check on its members. The results were to be read aloud in a meeting, but behind closed doors. Again, deliberate ignorance was complementary to disclosure and was intended to regulate the disrupting effects of exposure and to control the pace of change.

³ One of many examples is Ibrahim Böhme, member of the Bürgerkomitee (Citizens Committee) and chairman of the Social Democratic Party in the GDR. He resigned after being identified as an IM (Lahann 1992).
⁴ Due to the pressure of East German civil rights activists, the East German bill was adopted after German reunification. On November 14, 1991, the Stasi documentation law (Stasiunterlagengesetz) was passed in the Bundestag (German Federal Parliament) and finally granted personal access to the files. For a detailed documentation of the debate, see Schumann (1997).


At this point, West German media entered the already heated debate, published the Stasi checks of the Volkskammer members, and uncovered Stasi collaborators within all parties (Die Tageszeitung 1990). The scale of the Stasi networks—even in newly elected bodies—as well as the broad debate in West German media moved public opinion ever closer to complete disclosure.

Important voices, however, still opposed this option. Lothar de Maizière, the GDR's first and only democratically elected prime minister, predicted that if victims had access to their files, "then there will be no neighbors, friends, colleagues, then there is only killing and manslaughter" (Dresdner-Morgenpost 1990).⁵ The former chancellor of West Germany, Helmut Schmidt, later declared: "My instinct would have been to burn everything unread. If it had been up to me, they would have poured everything from the Stasi legacy into the sink" (Leipziger Volkszeitung 2002). Schmidt's inclination to destroy the Stasi files was paradigmatic for most of the West German political elite, who openly discussed their plans to destroy the still-unread files in parliament (Lintner 1991:2378). Deliberate ignorance in this case was a tool for stabilizing power by shielding the privileges and wrongdoings of the political elite—in both East and West Germany.

Opposing amnesty and ignorance, East German civil rights activists organized public hunger strikes and a second occupation of the former Stasi offices, gaining support from the West German Social Democrats (Bock 1999). The existence of the files, victims' access to their files, some public access (e.g., for scientific purposes), and the legal prosecution of oppressors were eventually entered into the German unification treaty. Key arguments were that disclosure might foster a process of self-reflection and operate like a "talk therapy" (Rathenow 1990:463), and that knowledge and memory would be indispensable for the pending democratization process (Gauck 1994).

Unlike the period following the collapse of the Third Reich, this time official Vergangenheitsbewältigung did not skip a generation—in this vastly different political context, it unfolded immediately. In 1992, the files were opened to the public, and trials and purges (e.g., in universities, police, and military forces) were held. While the wrongdoings of the GDR were not comparable to the atrocities of Nazism, the West German approach of Vergangenheitsbewältigung was a crucial point of reference for dealing with East German history (Lewis 2002:104). The focus was on not repeating previous shortcomings: "Because the Nazi past was not mastered, then at least the Stasi past should be mastered" (Schädlich 1993:9). East German activists claimed that the deliberate ignorance practiced collectively and individually by West Germany after World War II had not been effective in achieving the desired democratic transformation. The legacy of World War II also gained importance in another, unexpected way: prominent politicians like Gustav Just, the coauthor of Brandenburg's constitution and a member of its state parliament, were identified through the Stasi files as former Nazis—in Just's case, one who had personally killed six Jews.

After reunication, Lothar de Maizière became a minister in Helmut Kohl’s government but resigned after he was publicly accused of being IM Czerny (Der Spiegel 1990).


As the Washington Post reported, "Stasi files are believed to contain about 1.5 million names of persons from the Nazi era, including criminals and victims. Most…have never had to face official judgement because of the East German government's insistence that it was the successor to the prewar anti-fascist opposition, while West Germans carried the brunt of the Nazi legacy" (Fisher 1992). What historians later called the "double past," the entanglement of East and West German history (Habermas 1992; Klessmann et al. 1999), also applied to deliberate ignorance: the Stasi files, the product of a vast collective endeavor to find out everything suspicious about GDR citizens, revealed and implied instances of deliberate ignorance of both Nazi and Stasi histories.

The negotiation and debate over the use of the Stasi files was also a power struggle between representatives of the old regime and its critics; at stake were careers, positions, wealth, power, and social status. The public debates and purges can be interpreted as degradation and shaming practices used to delegitimize the GDR and its supporters. While this reorganization of knowledge and ignorance was necessary to overcome the remnants of the GDR and gain justice for its victims, a peaceful way of coexisting with former collaborators⁶—one that offered a delicate balance between disclosure and concealment⁷—was required. Enlightening individuals and the citizenry about the misdeeds of the old regime was deemed necessary to install justice, yet at the same time, collectively and legally agreed-upon deliberate ignorance was thought to be a key factor in maintaining social cohesion and peace.

Balancing knowledge and ignorance is a challenge that has spurred debates over the GDR's Vergangenheitsbewältigung since 1989. These debates on collective remembrance have entered the public sphere and can be reconstructed and studied (Buddensiek 2017), but researchers have paid little attention to the role of deliberate ignorance in individuals' personal struggles over the same trade-off: How much knowledge and how much ignorance should a person have about their past life in the GDR? How much knowledge is needed for a society to uphold social peace, morality, and justice? How much ignorance is essential to social interaction and cohesion?

⁶ The number of collaborators and the categories of collaboration (Who was a collaborator? How should one judge involuntary interactions with the Stasi that resulted in files?) are still highly controversial. For instance, in 1989 there were about 189,000 IMs, but from 1950 to 1989 the Ministry of State Security had registered a total of 624,000 (Gieseke 2001; Müller-Enbergs 1998). Recently the focus of research has shifted to "respondents" (Auskunftspersonen); estimates of the number of respondents range between 7% and 18% of the population, many of whom were not aware that they had been in contact with the Stasi because agents often used fake identities (Booß and Müller-Enbergs 2014).
⁷ For instance, some files have not yet been opened to the public. Personal information of third parties is being redacted to reduce the risk of retaliatory acts.


The Stasi Files and Individuals' Reasons for Deliberate Ignorance

In 1991 the Federal Authority for the Records of the State Security Service of the former German Democratic Republic (BStU) was founded to house the Stasi files. Since its inauguration, the BStU has promoted the value of enlightenment and self-determination on individual and collective levels. For instance, in an interview, Roland Jahn, who in 2011 became the head of the BStU and who, as a civil rights activist in the GDR, was repeatedly arrested and eventually expatriated in 1983, explained (Finger 2012):

One should not voluntarily give up the opportunity to know something.…I know that from my own experience. When I inspected my files I learned that I was expelled from university because of a tutor's spy report, and that while I was in prison my lawyer Wolfgang Schnur was an informant and not just my counsellor and friend. The Stasi had controlled my life, taken away my self-determination. Knowing that helped me to retrieve the life that had been stolen from me.…I was disappointed, but I was no longer deceived. This is how many victims of spying feel; the files frighten them, but also free them.

Like other heads of the BStU (e.g., Joachim Gauck), Jahn has strong normative views about the liberating effects of reading one's file: by doing so, victims of the Stasi are able to recapture their stolen lives and find freedom in truth. Jahn also highlighted the need for secret conflicts to come to light: "The conflict was there. It was just not visible. I can only forgive what I know about. I can only forgive the person I know" (Finger 2012).

Over the years, the BStU's annual reports have thoroughly documented individuals' reasons for and experiences of reading their Stasi files, as have many memoirs by civil rights activists (Birthler 2014; Jahn and Wensierski 2005; Schädlich 1993). Much less is known about those who decided not to access their personal files—indeed, even their number is unknown.⁸ Why have they chosen deliberate ignorance? Is it an individual expression of what Meier (2010) described as the time-honored practice of taming one's appetite for anger and retaliation? Or do the motives have other roots: fear of shame, distrust of the information in the files, or something else altogether?

To the best of our knowledge, the choice to not read one's Stasi file, this act of deliberate ignorance, has not yet been studied, nor has any similar phenomenon in another transformational society been examined. To fill this void, Hertwig, Ellerbrock, Möwitz, and Dallacker (in preparation) have begun to empirically study the reasons of those who do not wish to view their records, using surveys and structured oral history interviews; the latter were selected to understand how this personal decision was embedded in social settings and how it might have changed over time due to new political or personal circumstances (e.g., retirement).

⁸ The BStU does not know the total number of files, so it cannot estimate the proportion of the population who have or have not accessed their file. According to the BStU, a total of 3,225,676 people (as of December 31, 2018) have asked to see their files (these and the following statistics can be found at https://www.bstu.de); this does not mean that they all have a file, however. In about 40% of the cases, the request is a repeated request (estimated on the basis of the statistics between 2011 and 2018). In 2017 and 2018, about 25 years after the files were made available, approximately 55,000 people requested access for the first time. One way to interpret their behavior is that they practiced deliberate ignorance for more than two decades.


As this work is still under way, we focus here on the potential reasons that can be discerned from the public record. Many individuals, including some prominent figures, have publicly stated that they have not read their file and have no intention of doing so, and they have also explained why.

Deliberate Ignorance in the Service of Cohesion and Cooperation

A commonly cited reason for not accessing one's file is that even after the collapse of the GDR, many people had no choice but to continue to work with the same colleagues. Finding out that their colleagues (and possibly friends) had been feeding information to the Stasi would make future collaboration very difficult. This was a concern for Claus Weselsky, a prominent trade union leader. Asked in a 2015 radio interview with Westdeutscher Rundfunk Köln whether he had read his file, he answered: "No, I have not, because I am quite certain that I would have come across names of people who had been part of my immediate environment. I do not want to know." One interpretation of this justification is that a person "forgets" the sins of the past by forsaking the opportunity to learn who committed them.

Deliberate Ignorance in the Service of Protecting Oneself from Shame

Christa Wolf, one of the most important and acclaimed postwar German writers of the second half of the twentieth century—and perhaps the GDR's most important writer—was both a Stasi victim (with no fewer than 42 volumes in her file) and an informant (one volume). She served as a sporadic informant to the Stasi between 1959 and 1962 and was herself carefully monitored for many years. It was Wolf who eventually published her perpetrator's file (Täterakte) after the German media had exposed her past as an IM while she was in Los Angeles (Gitlin 1993). According to her last book, City of Angels, the revelation came as a complete surprise to her. She had totally forgotten the collaboration—later calling it "a case for Dr. Freud, a classic case of repression" (Gitlin 1993)—despite being a writer whose work dealt profoundly with Germany's Nazi past and the themes of silence, repression, and denial of knowledge.

Wolf's case illustrates another reason for individual deliberate ignorance. Assuming that a person has not simply forgotten about past behaviors that they would now perceive as shameful or humiliating—including complicity, infidelity, ideological wrongheadedness, and the betrayal of family and friends—that person may choose not to relive them. Deliberate ignorance may serve as a tool to keep one's past behavior a secret from others and even from oneself
and to protect oneself from shame, profound cognitive dissonance (Golman et al. 2017), and the threat to one's self-perceived identity posed by unwanted memories.

Deliberate Ignorance in the Service of Protecting Oneself from Great Betrayal and Regret

Reading a file can feel terrible. One of the most famous cases of this experience is that of Vera Lengsfeld, a civil rights activist in the GDR. Her file contained reports from more than 60 Stasi agents and informers. One of the sources, however, was special, delivering detailed, even intimate knowledge of her private life. His code name was Donald. He was her husband. Others have had similar experiences: the writer Hans Joachim Schädlich found out that his elder brother had informed on him; the actor Ulrich Mühe, who played a Stasi officer in the film The Lives of Others (2006), believed he had found evidence that, along with four of his fellow actors at East Berlin's Deutsches Theater, his own wife had informed on him (though he lost his case against his then ex-wife in court). People who suspect that family members or close friends informed on them may decide not to risk confirming their suspicions. In addition, the public suffering of people like Vera Lengsfeld has led others to choose to not read their file (Finger 2012).

Deliberate Ignorance in the Service of Preserving One's Identity

Vera Lengsfeld changed her name from Vera Wollenberger after her devastating discovery and subsequent divorce. Indeed, nobody who read their own Stasi files stayed the same. This was true for victims, perpetrators, and everyone in between. As reported by Gitlin (1993) in the New York Times, Christa Wolf stated:

Deliberate ignorance can be a way to avoid painful questions: Who was I then? Who am I today? It may also preserve the illusion that even in times of profound upheaval, a person can remain unchanged.

Deliberate Ignorance as Resistance against the Claim to Truth

For Timothy Garton Ash, reading his file was an intellectual delight: “But what a gift to memory is a Stasi file. Far better than a madeleine” (Garton Ash 1997:12). This may not be surprising: as a British citizen, Garton Ash did not suffer dire consequences to his personal well-being as a result of his surveillance by the Stasi, which occurred during his visits to the GDR. His delight, so it appears, was also not tainted by the suspicions that are part of historians’ professional DNA. Historians are well aware of the limitations of any kind of historical source and are especially sensitive to the trustworthiness of secret service files; they know that these files are profoundly shaped by context and the myriad interests of the bookkeepers (Großbölting and Kittel 2019). A similar awareness also moved West German writer and Nobel Prize laureate Günter Grass. Having been a victim of unrelenting surveillance by the Stasi (his file contained over 1,200 pages), he was critical of the decision to open the files and reveal who cooperated with the secret police in East Germany. For good reasons, Grass (cited in Schlüter 2012) deeply mistrusted the claim to truth that was attributed to the content of the files and refused for many years to read his own:

These Stasi files were like a poison because they were seen as valid documents. What they said had to be true. This cast suspicion on an excessive number of people—often with good reason, but, unfortunately, often without—because people trusted the statements and did not consider that large parts were exaggerated or even made up.

Grass asked critical questions about the source of the information, thus denying the Stasi the power to retrospectively destroy his friendships with East German writers. In particular, he did not feel entitled to judge the difficult decisions that informants may have faced. Most East Germans with firsthand Stasi experiences shared this perspective. As Vera Lengsfeld explained, “reading one’s Stasi files is like looking in a distorting mirror” (Wollenberger 1992). The Stasi files do not offer knowledge, but rather information, which must be laboriously contextualized (Jones 2009). This prompts new questions and uncertainties. The files are not themselves a source of truth, and while they may ultimately lead to knowledge, they may also prompt further confusion. In this respect, deliberate ignorance may be an option for stepping out of the relentless turmoil of ignorance and knowledge in times of transformation.

Deliberate Ignorance as Resistance against Absolute and Hypocritical Norms

Another reason for the decision not to read one’s file, which emerged in the interviews and surveys analyzed by Hertwig et al. (in preparation), is the refusal to participate in what some perceived as collective shaming, an act of hypocrisy by the victors of the Cold War, or an expression of neocolonial Western attitudes toward the East. In the years following German reunification, the view on life in East Germany was binary and inflexible. That people may have been both a victim and an informant (as in Wolf’s case) was not accepted, nor was the fact that the Stasi files included many different, partly overlapping, categories of informant and victim. For the sake of sensationalism and melodrama, things had to be either black or white (Lewis 2002:106). In the light of this absolute normativity, deliberate ignorance was a tool for escaping a potential witch hunt or for simply avoiding rigid judgments that did not correspond to one’s experience of life in the GDR. In the interviews, this rationale was typically expressed by people who were, or still are, committed socialists, people who identified with the GDR’s Weltanschauung, and people who worked in positions that required loyalty to the state. In the eyes of these individuals, there is no essential difference between the intelligence agencies of a state like the GDR and those of Western capitalist states such as the United States, the United Kingdom, or West Germany. Indeed, states have secret services, and they are interested in collecting data on citizens deemed to be suspicious. Garton Ash (1997:235) stressed the arbitrary nature of judging the morality of spying on one’s citizens: “Good when done for a free country, bad when done for a dictatorship? Right for us, wrong for them.” In the eyes of some, opening the Stasi files was an act of hypocrisy, and not reading one’s file is a protest against the debasement of East Germany and its citizens.

The public record as well as the initial results of Hertwig et al.’s surveys and interviews demonstrate that there is no single motive behind choosing to remain ignorant about one’s Stasi files. Instead, reasons for deliberate ignorance come in many shapes and sizes, including shielding oneself from traumatic experiences or fearing an inability to trust others. This has interesting implications for modeling deliberate ignorance and raises the question of whether the wealth of distinct motives can be captured by one single modeling framework (for more on this, see Brown and Walasek, this volume). We conclude our brief treatment of the collective and individual memory and desire to (not) know in transformational societies with a set of propositions.

Deliberate Ignorance in the Memory Politics during Transformational Periods

Proposition 1: Deliberate ignorance has always been an element of memory politics. It is even present in the Enlightenment-based twentieth-century approach of Vergangenheitsbewältigung, which prioritizes knowledge, remembrance, and disclosure. This means that the premodern and modern approaches of dealing with a harrowing past do not represent categorically distinct attitudes—rather, they differ in how they blend and prioritize knowledge and deliberate ignorance.

Proposition 2: Deliberate ignorance is a dialectical tool that, due to the intimate link between knowledge and power, can be used either to stabilize a regime’s power or to delegitimize a regime by undermining personal trust and institutional confidence.


Proposition 3: In the process of disclosing the secrets and sins of the previous regime, deliberate ignorance can modulate the pace of change, slowing it and even creating revisionist effects. Since disclosure often disrupts political and social structures, deliberate ignorance may help preserve peace and social cohesion. Deliberate ignorance’s dialectical nature, and the conditions under which it is likely to have beneficial or detrimental effects for individuals and societies, is worth studying.

Proposition 4: Following Ricoeur’s (2006) demonstration that memories can be creative, we claim that deliberate ignorance is highly productive and has the power to invent social orders and individual identities. Simmel (1908/1992) highlighted the power of secrecy in forming groups and Popitz (2016) stressed the power of ignorance to modulate conflict and maintain norms. Building on these lines of argument, the productive effects of deliberate ignorance require analysis.

Proposition 5: The official memory politics of some contemporary transformational societies, such as Spain after Franco, have given greater space to ignorance and forgetting, while others, like postapartheid South Africa, have based their policy on memory and reconciliation. This diversity has emerged despite procedures of transitional justice institutionalized in supranational organizations such as the United Nations and the European Union since the 1980s, even though postconflict trials, truth commissions, and retribution have been found to produce a more durable peace (Buckley-Zistel 2014; Lie et al. 2012).9

9 Transitional justice must master the dilemma of “not trading off peace for justice or justice for peace” (Williams and Nagy 2012:5). Speaking the truth, revealing atrocities, and establishing memorial narratives that include the victims and their suffering are commonly accepted as indispensable steps toward establishing trust in new democratic regimes (Buckley-Zistel and Schäfer 2014), despite the fact that these measures sometimes fail to break the cycle of violence (Anderson 2018:168–171).

Proposition 6: Official collective memory politics and individuals’ knowledge preferences need not concur. This can be seen in the way Germans have reckoned with aspects of their collective, and sometimes personal, Nazi history and with Stasi files personal to their families. The prevalence of deliberate ignorance in these contexts, however, is largely unknown and requires empirical study.

Proposition 7: Germany’s collective memory politics underwent drastic changes throughout the twentieth century. Individual memory politics also seem to be malleable and dynamic across time. For instance, in 2018, 26,875 new requests to view Stasi files were made—26 years after the files became available. The reasons behind such late changes of heart have not yet been studied. The case of Schnibben (2014), who researched and revealed his parents’ Nazi past only after their death, suggests that distinct changes in an individual’s proximate social network through death, illness, or divorce may trigger new knowledge preferences. We suggest that deliberate ignorance, like memory (Dessingué and Winter 2016; Rothberg 2009), is a highly dynamic concept that can change over time as well as according to social, political, and individual circumstances.

Proposition 8: Preserving peace and social cohesion in one’s proximate social environment has been identified as one reason for deliberate ignorance on the individual level (von Hodenberg 2018). Yet the motives for individual deliberate ignorance appear to be substantially more diverse, spanning a wide range of concerns, motives, and contingencies. These await classification and a full understanding of the underlying psychology.

Proposition 9: Collective memory politics offer strong normative claims about what is good for a society in transition. As the discussions in Meier (2010) and Rieff (2017) demonstrate, the issue of the normative implications of deliberate ignorance is far from settled. The normative issues that naturally emerge on the level of individual deliberate ignorance are even less understood (see also Krueger et al., this volume).

Summary

Deliberate ignorance on a social level can serve to deny the past or avoid responsibility and accountability, but it can also foster peace and cooperation. Political philosophers from Machiavelli to Rousseau described securing peace as the first and most essential task of government. This is particularly relevant for transitional societies, where peace is necessary for reconciliation: Information and truth are indispensable to human rights, public criticism is necessary to uncover corruption and exploitation, transparency is crucial for fair procedures, and knowledge is essential to justice. But truth and information applied as absolute principles may result in a climate of anger, hatred, and vengeance. Deliberate ignorance can be a tool for helping to balance and regulate the disruptive effects that a flood of knowledge may bring. Deliberate ignorance has a profoundly dialectical nature—let us no longer ignore it.

Acknowledgments

We are very grateful to the reviewers of this chapter and to the participants of the Ernst Strüngmann Forum for their many helpful comments. We are especially indebted to Deb Ain for her myriad editorial comments.

3 Utilizing Strategic Ignorance in Negotiations

Sarah Auster and Jason Dana

Abstract

This chapter analyzes the role of information in strategic decision-making settings. It considers several situations in which it could be individually advantageous to deliberately ignore information, particularly when this ignorance can be signaled to the other parties in the decision, and introduces purely psychological reasons why a negotiating party might want to ignore information. In some situations, information actually constrains the action set available to the individual. Examples involve inadvertently leaking private information to the other side, knowledge triggering one’s own moral constraints, and knowledge biasing the individual in ways that will harm the negotiation. Even if information acquisition is completely private, a behavioral agent will sometimes negotiate better by deliberately avoiding information.

Introduction

In the game of American football, there is a surprise play called the “naked bootleg.” Often, when a team is very close to the goal line, the quarterback takes the ball and hands it off to a back who has a running start toward the defensive line. That player runs behind a group of blockers and tries to bash his way through the defense to score. In the naked bootleg, everything is set up to look like a handoff, except that after the snap, the quarterback fakes and keeps the ball, rolling out in the opposite direction to the blockers, thus exposing himself as “naked.” The entire play is predicated on deception: If the defense correctly reads that the quarterback is keeping the ball, the play will most likely fail. In a 2013 matchup between the Denver Broncos and Dallas Cowboys, Peyton Manning (quarterback for the Broncos) executed a perfect example of the play to score a touchdown—his first rushing touchdown in 62 games (Petchesky 2013). In an interview following the game, Manning revealed an interesting bit of strategy: he had told the offense that he would hand the ball off to the running back (i.e., they were informed about the wrong play). He told only the running back that he would not be handing off the ball, as Manning did not want the back to panic and try to rip the ball away when the handoff failed to materialize.

Why would a team want its own players ignorant as to what it is doing? In this case, the obvious answer is that if they did not truly believe that they were blocking for a traditional running play, they might not be able to fool the defense into thinking it was a traditional running play. Knowledge of the real play was useless to the rest of the offensive players because they would execute as if it were a traditional running play in either event. To the extent that they were psychologically incapable of ignoring useless knowledge, or incapable of preventing themselves from conveying it to the other team, that knowledge could be harmful if it made their performance less convincing. Ignorance made them better able to perform in a strategic interaction. Rational players in this situation should choose to be ignorant.

In this chapter, we explore the value of deliberate ignorance in negotiations and related areas. Specifically, we suggest that there are situations in which knowledge effectively limits an agent’s action space, and thus the agent is better off without this knowledge. A negotiation is defined as a decision-making process by which two or more parties agree how to allocate scarce resources. Thus, the situations that we consider here are characterized by strategic interaction between parties who cannot have everything they want. This definition allows that parties may be asymmetrically informed. It does not include as negotiations, however, situations in which one party can unilaterally determine the outcome for all parties (see Dana et al. 2007), or where one’s own payoff is not determined by the joint decisions of the self and others.

While some theoretical research has documented situations in which ignorance brings strategic advantage in bargaining, less research has been conducted into whether people do use ignorance in such situations. Even less research has been conducted into information avoidance that, for purely psychological reasons, could lead to better outcomes in a negotiation. We begin by summarizing some reasons why a standard decision theoretic agent would exercise deliberate ignorance in a negotiation, and descriptive research into whether people actually avoid information in these situations. Then, we speculate on psychological reasons why a person in a negotiation would benefit from ignorance apart from reasons given by standard decision models. To be clear, the subject matter will not be information that one forgoes for the purpose of also keeping others from receiving it. Rather, these are situations in which the decision maker is made worse off by having the information for their own use.


Strategic versus Individual Decision Making

Recent papers have catalogued a variety of situations in which an individual decision maker benefits by ignoring or even refusing costless information (Golman et al. 2017; Hertwig and Engel 2016; Kadane et al. 2008). This effort adds to our understanding of decision making on its face because standard decision theoretic models, like those which form the foundations of modern economic analysis, assume that information has a nonnegative value. The same is not true, however, of standard game theoretic models. When decision makers’ payoffs are determined by their own behavior as well as by the behavior of others, information can be detrimental. For example, if Player 1 knows something that Player 2 does not, and Player 2 understands this, an information asymmetry exists between players. Such asymmetries can lead to deficient outcomes such that even the player “advantaged” with more information (Player 1) may prefer not to possess the information, or at least prefer that Player 2 not know that Player 1 has more information.

To see how information can damage outcomes in a strategic interaction, imagine playing a single round of a game with one other person. You announce “red” or “black” first, after which the other player announces “red” or “black.” A card is then turned over from a deck of playing cards. If both players have announced the correct color, both receive a payoff of 1. If neither has announced the correct color, both receive a payoff of 0. If one player has announced the correct color and the other has not, the player with the correct color gets 50 while the other player gets 0. If both parties understand the game, you might expect your opponent to announce the opposite color to the one you select, which would greatly increase the expected payoffs for both of you. (Imagine a payoff much larger than 50 if your intuition doesn’t match.) Now imagine that before announcing your color, you get to look at the card that will be flipped, but your opponent cannot. What would you do? There is no personal gain in knowingly announcing the wrong color; you would simply receive zero. Knowing that, your opponent might reason that it is best to announce the same color that you do. This extra bit of knowledge has now effectively ruined your ability to coordinate with your opponent in a way that is good (in expectation) for both parties. You would be better off if you did not view the card, or if your opponent thought you did not view the card.

The possibility for knowledge to ruin coordination is not limited to situations of asymmetric information. In social dilemmas, defection is no longer a clear equilibrium if it is common knowledge that there will be repeated play. Once it is common knowledge that the players are in their final round of play, however, there is no longer a reason to cooperate, and players are collectively disadvantaged. There are likewise many reasons why publicly ignoring information in a negotiation could be individually beneficial. Ignoring information can serve diverse functions: from strengthening one’s bargaining position to solving the “hold-up” problem to combatting adverse selection and solving principal-agent problems.
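To make the red/black card example above concrete, here is a minimal sketch in Python (our own illustration; the function names are ours, not the chapter’s) that computes the expected payoffs of the two announcement strategies:

```python
# Expected payoffs in the red/black game described above. Payoffs: both correct
# -> 1 each; exactly one correct -> 50 for that player, 0 for the other;
# neither correct -> 0 each. The card is red or black with probability 1/2.

def payoffs(a1, a2, card):
    """Payoffs (p1, p2) when the players announce a1 and a2 and the card is `card`."""
    if a1 == card and a2 == card:
        return 1, 1
    if a1 == card:
        return 50, 0
    if a2 == card:
        return 0, 50
    return 0, 0

def expected(a1, a2):
    """Average payoffs over the two equally likely card colors."""
    draws = [payoffs(a1, a2, card) for card in ("red", "black")]
    return tuple(sum(p) / 2 for p in zip(*draws))

print(expected("red", "black"))  # (25.0, 25.0): opposite colors guarantee one winner
print(expected("red", "red"))    # (0.5, 0.5): same color only pays when both are right
# If you peek at the card and announce its true color while your opponent,
# anticipating this, copies you, both of you are always correct and earn just 1.
```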

Normative Reasons for Deliberate Ignorance in a Negotiation

Why would a standard decision theoretic agent refuse some information during negotiations? Perhaps the first writer to suggest the use of deliberate ignorance as a tool in bargaining was Thomas Schelling (1956). Schelling considered bargaining problems in which one party wishes to convince another of something; for instance, a buyer who wishes to convey that she will not pay a seller more than X (i.e., her reservation price is X). Because such preferences are private, and parties are known to bluff in search of a good deal, there is a need to make such commitments credible. Deliberate ignorance is one method through which a bargaining party could communicate a credible commitment. For example, labor union leaders could publicly avoid meeting with their membership to signal to management that there is no intention to end the strike without a better offer.

In situations where it is difficult to write complete contracts, there is a well-known problem called the holdup problem. When one party has made a prior commitment to a relationship with another party, the latter can “hold up” the former for the value of that commitment. For example, if an automobile manufacturer developed an exclusive relationship with a firm that provided certain automobile parts for production, the parts supplier could change its prices in times of increased demand and the manufacturer would be put in a poor position to negotiate. The possibility of a holdup can lead to underinvestment in relationships that would otherwise be profitable. The holdup problem is solved if the vulnerable party, the manufacturer in the above example, can keep its information (e.g., sales projections) private. Then, they could not be held up for the value of the surplus created by the agreement. Rogerson (1992) shows that a variety of solutions to the holdup problem exist when information is asymmetric and suggests that the party that has the bargaining advantage should precommit to allowing the vulnerable party to keep its information private. By doing so, the vulnerable party can trust that they will not be exploited and invest more. This idea was extended by Lau (2008) and Hermalin and Katz (2009), who developed specific conditions under which the party that would hold bargaining power does best to avoid learning the information that gives them power. The holdup problem is thus one in which too much bargaining power is, ironically, harmful because it can cause valuable deals to break down. Remaining deliberately ignorant is, therefore, a mechanism that can be used to cede some power, if that ignorance can be credibly communicated to the other party.

There are other contracting situations that provide an interesting context in which deliberately avoiding information can lead to better outcomes. Crémer (1995) modeled the value of “arm’s length” relationships in principal-agent problems with moral hazard and renegotiation. The principal chooses only to observe the agent’s production (in this case, they are a firm and a supplier) and not to seek additional information about the causes of subpar production, thus not allowing for “excuses.” Ignorance in this case credibly commits the principal to not accepting excuses, even if they are good. These arm’s length relationships may be beneficial as they create better incentives and raise overall production. Roesler and Szentes (2017) analyzed the role of information in a bilateral trade problem between a monopolistic seller and a privately informed buyer. Their work shows that the optimal amount of information for the buyer is partial because if the seller knows the buyer is fully informed, the seller finds it optimal to charge a higher price.

Adverse selection (Akerlof 1970) introduces another situation in which parties would benefit by deliberately avoiding information, provided the choice to remain uninformed is publicly observable. Adverse selection occurs when sellers have private information about the quality of the goods they own. In the classic example, a used car may be of high quality (a “peach”) or low quality (a “lemon”), with buyers being uncertain of which they are getting. Buyers will thus be willing to pay a price that is an average of the different qualities. If sellers know which car they are holding, however, they will only find it profitable to sell lemons, driving peaches from the market and, ultimately, causing the market to break down: a buyer should not want to buy a car that a seller is willing to let go. The same problem arises in insurance markets if clients have private information about their risk type. Berkman (this volume) considers the problem of genetic testing. If people knew their own genetic information and insurers did not, it could cause a breakdown in health insurance markets because the people wanting certain kinds of insurance would only be the people who were particularly likely to incur payouts (see Rothschild and Stiglitz 1976). If individuals are required to report to insurers whether they received genetic testing, not getting tested offers an advantage, even when the tests are free, because insurers would understand that the insurance is particularly attractive to people who tested positive and would avoid covering them. It is thus clear that if parties were uninformed on key dimensions, adverse selection problems can be circumvented and profitable trades allowed to happen. This solution, however, is only possible if the decision to obtain information can be observed by the insurance firms.
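The unraveling logic of the lemons market can be made concrete with a small numeric sketch (the prices and values below are our own illustrative choices, not Akerlof’s):

```python
# Sketch of the lemons logic with made-up numbers: a car is a peach or a lemon
# with probability 1/2; values are (seller, buyer) = (3000, 4000) for a peach
# and (500, 1000) for a lemon, so every trade would create surplus.

p = 0.5
seller_value = {"peach": 3000, "lemon": 500}
buyer_value = {"peach": 4000, "lemon": 1000}

# Buyers who cannot observe quality will pay at most the average value:
avg_price = p * buyer_value["peach"] + p * buyer_value["lemon"]  # 2500

# Informed sellers only offer cars worth less to them than the price:
offered = [q for q in seller_value if seller_value[q] < avg_price]
print(offered)  # ['lemon']: peaches leave the market and the price unravels to 1000

# If sellers verifiably do not know their own car's quality, they accept any
# price above their *expected* value, and all mutually beneficial trades occur:
seller_expected = p * seller_value["peach"] + p * seller_value["lemon"]  # 1750
print(seller_expected < avg_price)  # True: trade at any price in (1750, 2500)
```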

Descriptive Results on Deliberate Ignorance in a Negotiation

Do people make strategic use of ignorance in bargaining where the opportunities arise? The limited evidence from laboratory experiments suggests that they do. Perhaps the most important evidence comes from Conrads and Irlenbusch (2013) and Poulsen and Roos (2010).

Conrads and Irlenbusch assigned subjects to play one of a variety of take-it-or-leave-it bargaining games where one player, the proposer, chooses an offer and the other, the responder, can accept the offer or reject it and leave both players with nothing. One of these offers was always better for the proposer, but whether it was better or worse for the responder was sometimes left uncertain to the proposer. When the proposer did not know the responder’s possible outcomes, almost no offers were rejected. There would be no point in the responder punishing the proposer at a cost to self when the proposer did not even know which offer was fair to the responder. More interesting were cases where the proposer could choose to reveal this information. If the proposer’s decision to reveal the information was public, several proposers chose not to reveal, and unfair offers were rejected at a lower rate. This result confirms Schelling’s original conjecture that communicating deliberate ignorance could strengthen bargaining position. Here, it allowed the proposer to seek a favorable outcome while blunting any inference of intention to be unfair. As further evidence that this behavior was strategic, Conrads and Irlenbusch also ran a condition in which the proposer’s decision to remain ignorant would be private and unknown to the responder, thus destroying the strategic aspect of ignorance. Few proposers in this condition decided to remain ignorant, confirming that indeed their ignorance was strategically deliberate in the public reveal condition.

Poulsen and Roos (2010) experimented with a similar Nash bargaining game. Pairs of players made demands for shares of a resource pie. If their demands did not exceed 100% of the pie, they got their demands. If they exceeded 100%, both players got nothing. One player was allowed to make the demand first, essentially transforming the game into a take-it-or-leave-it ultimatum that gives the first mover a strategic advantage (if the first mover demands more than half, the second mover will have to demand less or else leave with nothing). Poulsen and Roos allowed second movers to choose not to see the first mover’s offer and to make this choice public before the first mover chose. Doing this would essentially transform the game into a simultaneous choice, where the focal equilibrium is a fifty-fifty split. After some practice, over 80% of second movers employed deliberate ignorance to enhance their bargaining power. Like Conrads and Irlenbusch (2013), Poulsen and Roos (2010) also used a private ignorance condition where the second mover could choose not to see the first mover’s demand, but the first mover would not know it. When the decision to reveal was private, over 80% of second movers wanted to see the first mover’s demand, confirming again that the deliberate choice of ignorance in the public reveal condition was strategic.

Although they do not directly investigate deliberate ignorance, experiments by Sloof et al. (2007) suggest that people would benefit from ignorance in a real holdup problem. They ran a laboratory experiment in which subjects assigned to the role of buyers could choose to pay a cost that made a seller’s goods more valuable, which thus created more surplus. The seller, however, set the price of the goods and the buyer had to transact if the exchange was profitable. The investment was thus at risk for a holdup; if sellers know about the investment, they can raise the price to capture all the excess benefit. The experiment manipulated whether the decision to invest was public or private. When the decision to invest was public, buyers anticipated holdup and invested less often, destroying welfare and lowering even sellers’ earnings. Sellers were not given the opportunity to blind themselves to the investment in this study. The results played out, however, such that sellers would indeed benefit if they could publicly signal ignorance, as theory suggests.

Psychological Reasons for Deliberate Ignorance in a Negotiation

Whether individuals benefit from deliberate ignorance in negotiations for purely psychological reasons—that is, when information does not normatively change the structure of the bargaining task—is, at least empirically, a frontier topic. We are aware of little evidence that bears directly on this question. Owing to the ideas and results above, however, we can speculate that deliberate ignorance could be valuable in several ways when we admit a richer psychology on the part of the negotiating players. Specifically, we identify three reasons why information can actually serve to limit an agent’s action space, and thus make the agent worse off:

1. Knowledge could unintentionally “leak” to another party.
2. Knowledge could invoke moral image constraints.
3. Knowledge could lead to self-serving bias in the agent’s interpretation of what is fair, and fairness constrains actions.

Leaking Knowledge

As the chapter’s opening example about calling a naked bootleg play in football suggests, deliberate ignorance could be useful in conveying a credible commitment to a course of action. As noted earlier, bargaining situations often entail the parties trying to convince each other of matters of privately held preference. Signals that one is “serious” become important in the process. As Schelling (1956) notes, “if a man knocks at a door and says that he will stab himself on the porch unless given $10, he is more likely to get the $10 if his eyes are bloodshot.” Short of making one’s eyes bloodshot, how does one convey commitment to an outcome when there is otherwise reason to doubt the commitment?

Frank (1988, 2011) suggests that people evolve moral emotions to solve problems such as cooperation in one-shot social dilemmas or even bargaining from a position of weak power in a take-it-or-leave-it ultimatum. He recalls a humorous experience from a concert he attended in his youth: A dog walked up and, seeing a man who was in a drug-induced stupor, urinated on him. Would this dog have attempted that with any of the more alert concertgoers? Probably not, as it would be deterred by the possibility of receiving a swift kick. Similarly, how does one know to whom one should and should not make small ultimatum offers? Absent moral emotions, we might conclude that no matter what the responder claims, they would take something rather than nothing. But if we anticipate the moral emotion of anger in the responder, we might fear making a small take-it-or-leave-it offer because anger makes it satisfying to punish the offer, even at a small cost to the self.

The psychological problem that negotiating agents face is that moral emotions may be difficult to fake. Simply put, people can be bad emotional liars and, therefore, accidentally pass information to the other party. Trivers has written extensively about the value of self-deception in convincing others (for a recent summary, see Trivers 2011b). It is difficult to convince others of something one does not believe, and Trivers argues that self-deception evolved in humans to solve the problem of credibly signaling to others. Therefore, people have incentives to manage their own beliefs. Rather than considering all information and forming Bayesian posterior beliefs, they might be better served to seek information selectively to manipulate their beliefs. We argue specifically that when information cannot be easily ignored, there is a risk of signaling one’s private information to the other party in subtle or even unwitting ways. Thus, information changes the set of feasible actions and can have a negative value. Returning to the naked bootleg story, it is intuitively appealing that the outcome may be better if players do not know what play they are running, in part because the players need to react exactly the same, regardless of whether an actual goal-line running play or a naked bootleg is called. The reason for lying to them about the play, however, is best understood as preventing knowledge from being involuntarily signaled or leaked to the other team. Similarly, in bargaining situations, a potential buyer might be concerned about appearing too eager to buy the object and prefer not to learn its exact valuation before engaging in the negotiation. Unintentionally conveying one’s beliefs and preferences can limit the possible outcomes in a negotiation. Deliberate ignorance through choosing one’s informational signals can thus be an important self-disciplining device for negotiating with others.

Ironically, moral emotions themselves can limit the individual’s action set in a beneficial way that negotiators may try to signal. A subject of lore in negotiations is that some people will stage a fake angry phone call before speaking with someone in a potential negotiation. By showing the other party that one is angry, there is the thought that they will be afraid to push you too far toward your reservation value, as you might be “crazy” enough to reject a profitable but unfair offer if in a state of anger. All of which brings up an interesting question: Why should the man on the porch with bloodshot eyes or the negotiator who has lost control of his or her emotions get a better outcome than a “rational” bargainer? Subjects with economics training give lower offers in ultimatum games, but they also accept lower offers (Carter and Irons 1991). This finding shows a perverse effect of “understanding” the game: if one were proposing an ultimatum to a student with philosophy training and a keen interest in social justice, it would be unwise to make a low offer. The true “economist” could then wind up worse off because she fears rejection from “stubborn” or “irrational” counterparts, yet can be relied upon to accept small offers.

Moral Image Constraints

People are powerfully constrained by moral image, yet they also seek ways to avoid this constraint so that they can be more self-interested (Dana et al. 2011). For example, a burgeoning literature on social preferences demonstrates that people do not like to appear unfair, either to themselves or to others. Though not a literal constraint, subjects in economic experiments apparently feel constrained by moral image. Even when they have total bargaining power, subjects will often share an experimental surplus to appear fair, but become more selfish when image concerns can be avoided through ignorance (Thunström et al. 2016; Van der Weele 2012). Dana et al. (2007) demonstrate this pattern using a modification of a simple dictator game. When subjects were allowed to choose between $6 for themselves and $1 to an anonymous other subject, or $5 for both, more than three quarters chose the “fair” option of $5 for both. In a subsequent manipulation, however, subjects could choose between $6 and $5 for themselves while remaining uncertain about the impact it had on the other subject. In this manipulation, the payoffs were either conflicting, as described above, or aligned such that $6 for the dictator gave $5 to the other subject and $5 for the dictator gave $1 to the other subject, as decided by a coin flip prior to the experimental session. Close to half of dictators did not acquire the information of which game was being played, even though it required simply clicking a button on the computer screen. As a result, most chose $6 for themselves, ultimately securing less than half the number of $5–$5 outcomes as when the outcomes were known. Apparently, people abide by fairness, but would happily rely on ignorance so as not to have to abide by fairness. The result of this study, however, does not clarify whether the source of the moral constraint is appearing unfair to one’s self (not revealing means you will never know you were unfair) or appearing unfair to others (the other does not know whether you revealed, thus providing plausible deniability for being unfair). Further experiments by Dana et al. (2007) showed that players strategically took advantage of plausible deniability, where it existed, to be unfair without appearing so. That some players are willing to be unfair only when it does not appear so is interesting in this context because, again, the other subject is anonymous and cannot punish unfair behavior. These results suggest that even the imagined disapproval of the other party constrains behavior. Deliberate ignorance of the impacts of one’s behavior on others expands the action space to options that people will not allow themselves to choose when they know that they will appear unfair.

Self-Serving Bias

Another psychological reason why a negotiator would benefit from deliberate ignorance is the failure to remain objective. Babcock et al. (1995) demonstrated that self-serving biases could lead to a breakdown in mutually valuable negotiations. Subjects read materials from a legal case either before or after they were assigned to the role (plaintiff or defendant) in which they would be negotiating. They also privately predicted how the judge on the case would rule and were paid for their accuracy. They then attempted to negotiate a settlement from a surplus provided by the experimenter. The longer they went without settling, the more the surplus shrank, before a neutral judge ultimately decided the case. Thus, they had an incentive to reach a settlement on their own. When subjects were assigned their roles before reading the case materials, rather than after, they were more likely to reach a costly impasse, suggesting that they could not process information objectively once they knew which side they wanted it to favor. Buttressing this interpretation, the gap between the plaintiff’s private prediction of the judge’s award and the defendant’s prediction was larger when the roles were learned before reading the case rather than afterward. Because there is no strategic advantage to inflating the private and incentivized prediction of the judge’s award, it appears that subjects were unable to remain objective even if they wanted to be. Even though they were biased to favor their own side, the resulting bargaining failure was personally costly to the subjects, and thus they would have been better off had they been able to remain objective. In this situation, deliberate ignorance could have been beneficial (e.g., ignoring information about the roles while learning the facts). Just as a teacher might use blind grading of exams or the philosopher implores us to get behind a “veil of ignorance” in evaluating distributional justice, the Babcock et al. (1995) studies demonstrate the benefits of ignorance in disciplining bias (see also MacCoun, this volume). Bias can cause failures to reach mutually beneficial agreements if the parties care about fairness. What is less clear is whether people appreciate these effects, and whether they would desire to use deliberate ignorance to negotiate more objectively.

Conclusion

Research into the phenomenon of deliberate ignorance is a somewhat new and intriguing field as applied to individual decision making, because standard analyses of decision making hold that ignorance should not work to the advantage of a decision maker. Negotiation adds a layer of complexity to the topic because it entails convincing others to agree to decisions, and little research speaks directly to the empirical and psychological aspects of deliberate ignorance in negotiations. In this chapter, we have speculated that for a number of behavioral reasons, remaining deliberately ignorant can actually expand an agent’s choice set in a negotiation.

4 Blinding to Remove Biases in Science and Society

Robert J. MacCoun

Abstract

This chapter examines the use of blinding methods (shielding decision makers from potentially biasing information) to improve the validity and/or fairness of judgments in scientific data analysis, scientific peer review, and the screening of job applicants. It reviews some of the major findings from empirical tests of these procedures, addresses potential concerns with blinding, and identifies directions for new theory and research.

Introduction

In this chapter, I examine the promise, and the limitations, of the use of methods of blinding as one way to achieve deliberate ignorance (see Hertwig and Engel, this volume, 2016) in situations where a decision maker’s knowledge of some variables might bias judgments or create unfairness in the decision process. Readers will be familiar with the notion of blinding in at least two ways. First, everyone has seen depictions of the Roman goddess Iustitia (Justice), whose scales and blindfold depict the aspiration for unbiased judgment in legal systems around the world. Second, double-blinding (of patients and physicians) in medical trials is one of a handful of methodological principles (with placebos and sample size) familiar to most lay people. A recent edited volume (Robertson and Kesselheim 2016) offers a thorough treatment of blinding in medical science, forensic science, and legal procedures, and so I will only make brief mentions of those literatures here. In this essay I will examine blinding in three domains, deployed in pursuit of two different normative goals (see Table 4.1).

Table 4.1  Domains and goals.

Domains of blinding       Goal of blinding
Data analysis             Validity
Scientific peer review    Validity, fairness
The job market            Fairness

According to Gosseries and Parr (2005), the fact “that transparency and accountability are social goods is taken as self-evident in contemporary democracies.” As Louis Brandeis famously put it: “Sunlight is said to be the best of disinfectants, electric light the most efficient policeman.” Transparency refers to openness and visibility, while accountability implies that the actor must be able to explain his or her choices, and that there are consequences for those choices. In a 2006 essay, I argued that, whatever its abstract merits might be, there are psychological constraints that make true transparency and accountability difficult to achieve, and that can lead to unintended and undesirable effects, and I reviewed theory and evidence for four propositions:

1. Introspective access to our cognitions is very limited.
2. Accountability can have perverse effects.
3. Group processes can actually amplify individual biases.
4. Being explicit can distort goals and the willingness to make tradeoffs.

The Rawlsian tradition in philosophy offers a rich debate on the merits of a “veil of ignorance” as a guarantor of unbiased assessments of social distribution and welfare. Although the details may differ, the underlying logic seems basically the same as that used to motivate blinding in job screening, data collection, data analysis, and other situations. To make the logic more concrete, I will use Egon Brunswik’s “lens-model” approach to investigating the quality and determinants of human judgment (Cooksey 1996; Dhami et al. 2004; Hammond and Stewart 2001; Karelaia and Hogarth 2008). Figure 4.1 shows a typical lens-model diagram. The right side of the “lens” depicts the true relationships among a set of “cues” or predictor variables and some outcome of interest. The left side of the lens shows the relationships among these cues and a judgment (prediction, decision) made by some judge (referee, editor, scientist, selection committee)—their implicit “judgment policy.” I vary the thickness of the arrows to show the strength of the relationships on each side. By comparing the judgment to the outcome, we can assess the validity of the judgment. But a lens-model analysis tells us more by allowing us to compare the signs and magnitudes of the arrows on each side of the lens. It can show where judges are using a “bad cue” or missing a “good cue,” in which case we might intervene with training, blinding, or simply replacing the judge with the algorithmic model on the right side of the lens.

Figure 4.1 A simplified example of the lens-model approach to assessing the validity of judgments. The left side of the diagram depicts the human judgment process, where the cues are predictor variables, and the thickness of the lines represents the weight placed on cues (which here are shown as positive, for simplicity). The right side of the diagram depicts the objective relationship between various cues and a later observable outcome that corresponds to the judgment (e.g., job performance if hired). A comparison of the cue utilization weights and the cue validity weights reveals cues that are being underutilized (here, Cue3) or overweighted (here, Cue4).

Figure 4.1 is of course an oversimplification. Typical lens-model applications depict a multiple regression or path coefficient for each link, along with additional links showing cue intercorrelations. A more ambitious extension might be to depict each side of the lens as a directed acyclic graph (Pearl 2000), which could show that the causal structure of the judgment process (left side) misrepresents the causal structure that produces the outcomes (right side); for example, a judgment might overutilize a cue that is actually a spurious correlate (no causation) or even a consequence (reverse causation) of the outcome. Although I have not seen it used in this way, the lens-model framework provides an explicit framework for thinking about how and when to blind effectively. Blinding is appropriate when current judgments give undue weight to a particular cue or use a cue that is actually spurious, as seen in Figure 4.2. Blinding may be unnecessary when a valid cue is used appropriately, or when an invalid cue is being ignored. But a lens-model analysis might also show that blinding (whether of humans or of algorithms) can have unintended consequences when good and bad cues are intercorrelated, a point I return to later. The lens model is most useful for questions of validity: What are the true predictors of an outcome and does the judge have a valid mental model? It does not readily depict cue utilization with respect to other normative criteria.

Figure 4.2 Blinding the judge by blocking or obscuring a cue that would bias the judgment.
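To illustrate how such a lens-model comparison might be computed, here is a minimal sketch on synthetic data (all weights, seeds, and names are our own assumptions, not drawn from the lens-model literature): regressing the judge’s ratings on the cues recovers the utilization weights, while regressing the outcome on the same cues recovers the validity weights.

```python
# A minimal lens-model sketch: compare "cue utilization" weights (regression of
# the judge's ratings on the cues) with "cue validity" weights (regression of
# the true outcome on the same cues), using synthetic data.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
cues = rng.normal(size=(n, 4))  # four standardized cues

# True outcome depends only on cues 1-3; cue 4 is irrelevant.
outcome = cues @ np.array([0.6, 0.4, 0.3, 0.0]) + rng.normal(scale=0.5, size=n)
# The judge ignores cue 3 and leans on the spurious cue 4.
judgment = cues @ np.array([0.6, 0.4, 0.0, 0.5]) + rng.normal(scale=0.5, size=n)

validity, *_ = np.linalg.lstsq(cues, outcome, rcond=None)
utilization, *_ = np.linalg.lstsq(cues, judgment, rcond=None)

for i, (v, u) in enumerate(zip(validity, utilization), start=1):
    print(f"cue {i}: validity {v:+.2f}, utilization {u:+.2f}")
# Cue 3 shows up as underutilized and cue 4 as overweighted -- the pattern
# that would motivate blinding the judge to cue 4.
```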


In particular, as noted in Table 4.1, some applications of blinding are motivated by concerns about fairness rather than (or in addition to) validity. Even then, the lens model can clarify our discussions of fairness. Is a cue “unfair” because it has low validity, or are some cues unfair even when they are valid predictors? Are there normative reasons to retain some cues even when they are low in validity?

Blinding in Data Analysis

In the course of analyzing data, the analyst must make a host of judgments about what variables to include, how to handle outliers and other data anomalies, what statistical tests and estimators to use, and so on. It is well established (see MacCoun 1998) that such decisions are often biased by examinations of the data, which can reveal whether a particular approach will produce a test result that is favorable to a preferred (or abhorred) hypothesis. Although this problem plagues all empirical disciplines, its effects on the replicability of psychological research are now well known. I had the pleasure of teaching an undergraduate course for several years with Nobel Laureate physicist Saul Perlmutter, and when he heard me lecturing about psychology’s problems with confirmation bias and replicability, he asked: “Don’t you perturb your data before analyzing them?” I had no idea how to interpret this kinky-sounding question, but then he explained that many lab groups, in particle physics and cosmology, routinely add noise or bias to their data before analyzing it, so that any preconceptions or careerist motivations can’t bias their inferences. A blinding method is selected to facilitate intermediate analytic decisions while precluding choices that would favor one hypothesis over others. The blind is then lifted once all analytic decisions are made. We subsequently coauthored two papers describing a variety of data-blinding methods and advocating their use in other empirical disciplines (MacCoun and Perlmutter 2015, 2017). These are the basic approaches and terminology (a rough code sketch follows the lists):

• Noising: Add a random deviate to each data point.
• Biasing: Add a systematic offset to each data point.
• Cell scrambling: Swap the labels of different cells (arms) of the experimental design.
• Row scrambling: Swap the labels on each row of the data matrix, so that observations from the same cell are no longer grouped together.

Two others we did not discuss are:

• Salting: Adding fake data points to a real data set.
• Masking: Simply hiding or anonymizing the identity of a data unit.
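As a rough illustration of these transforms (not MacCoun and Perlmutter’s actual code; all function and column names are our own illustrative choices), consider the following sketch:

```python
# Minimal sketches of the data-blinding transforms listed above, applied to a
# toy data set with a treatment arm label ("arm") and an outcome ("y").

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "arm": ["treatment"] * 5 + ["control"] * 5,
    "y": rng.normal(size=10),
})

def noised(d):          # Noising: add a random deviate to each data point
    return d.assign(y=d["y"] + rng.normal(scale=1.0, size=len(d)))

def biased(d):          # Biasing: add a systematic offset to each data point
    return d.assign(y=d["y"] + 2.0)

def cell_scrambled(d):  # Cell scrambling: swap the labels of the design cells
    mapping = {"treatment": "control", "control": "treatment"}
    return d.assign(arm=d["arm"].map(mapping))

def row_scrambled(d):   # Row scrambling: shuffle labels across individual rows
    return d.assign(arm=rng.permutation(d["arm"].to_numpy()))

def salted(d):          # Salting: append fake data points to the real data
    fake = pd.DataFrame({"arm": ["treatment", "control"],
                         "y": rng.normal(size=2)})
    return pd.concat([d, fake], ignore_index=True)

def masked(d):          # Masking: hide the cell identity entirely
    return d.assign(arm="?")

blinded = row_scrambled(df)  # analyze `blinded`; lift the blind only at the end
```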

Masking, of course, is the kind of blinding that is used in peer review and in job screening procedures, but this list shows that there are many other possibilities worth considering. In simulations, we found that these blinding methods had different effects on what was and was not obscured in the data, suggesting that they might be suitable for different situations or purposes. The empirical literature on data blinding and its consequences is still very sparse, and we argued that, like any other intervention, data blinding should be assessed to establish its benefits, costs, and any boundary conditions on its effectiveness.

Blinding in Scientific Peer Review

Two decades ago (MacCoun 1998), I reviewed evidence on the myriad forms of bias that occur when people use and interpret research data, suggesting that traditional remedies like peer review are only partial solutions. Evidence since then (especially in my own discipline of psychology) suggests that, if anything, I probably understated the problem. Carroll (2018), adapting a famous quip by Winston Churchill, recently argued that peer review is “the worst way to judge research, except all the others.” There are hundreds of papers critiquing the peer review system, dozens of empirical papers on inter-referee reliabilities, and a handful examining the question of double-blind reviewing (i.e., blinding of author identity, since blinding of reviewer identity has long been the norm). Journals that use double-blind reviewing are still the exception, not the rule. In an interdisciplinary sample of journals, Walker and Rocha da Silva (2015) found that 70% used single-blinding (author names visible to reviewers) and 20% used double-blinding. At least one major journal (American Economic Review) has abandoned the practice, and there is growing support in the “open science” movement for the use of fully unblinded peer review, in which referee reports are signed and publicly archived. The paucity of evidence on these procedures explains how two opposite strategies can each be endorsed as solutions to the same set of problems.

Like blinded data analysis, blinding in peer review has been primarily motivated by the goal of increasing decision validity, but it is also seen as a mechanism for promoting fairness. The most well-known studies focus on blinding to improve the quality of published research—a validity criterion. McNutt et al. (1990) reported what appears to be the first double-blinded experiment on double-blinded review, in which the Journal of General Internal Medicine sent 137 manuscripts to pairs of reviewers, one of whom was randomly selected to receive an anonymized version of the submission. Editors—themselves blinded to the selection—rated the quality of reviews as significantly greater for double-blinded reviews, although the effect was small (3.5 vs. 3.1 on a five-point scale). Blinding did not affect the rate at which reviewers signed their reviews, and signing was unrelated to quality ratings. Around the same time, Blank (1991) reported an experiment in the American Economic Review, in which 1,498 manuscripts were randomly assigned to receive either single- or double-blind peer review. Double-blinded manuscripts had a higher referee response rate, were accepted at a lower rate, and received significantly more critical ratings. Blinding had less effect on manuscripts from top- and low-ranked institutions than on those in the middle of the pack. One caveat is that an editorial assistant “automatically assigned any paper that she felt could not be handled as a blind paper to the nonblind category” (Blank 1991:1050).

Both of these studies have a criterion problem, and a self-referential one at that: If reviewing processes are flawed, can we really infer whether blinding improves matters by using acceptance rates and subjective quality ratings? Okike et al. (2016) addressed this problem by randomizing whether a decoy manuscript contained five “subtle errors.” Like the earlier studies, they found lower acceptance rates and quality ratings for double-blind manuscripts. However, they were not able to detect a difference in the frequency with which the planted errors were detected.

To the extent that reviewer biases favor certain categories of authors—white males, elite universities, Americans—then efforts to improve the validity of peer review also serve to make it a more fair system. Still, fewer empirical studies have directly addressed this criterion. Blank’s 1991 experiment was unable to detect an effect of acceptance rates for female authors, but cautions that only 8% of the papers had a primary author who was female. Budden et al. (2008) argue that double-blinding at the journal Behavioral Ecology led to an increase in accepted papers with female first authors, though a number of subsequent critiques (reviewed by Lee et al. 2013) indicate that the apparent finding was probably artifactual. Tomkins et al. (2017) report that double-blinding of submissions to a computing conference reduced the influence of author fame and institutional prestige on acceptance rates.

Blinding in the Job Market

Discrimination on the basis of economically irrelevant or legally protected categories (by gender, race, ethnicity, sexual orientation, religion, or ideology) is the subject of vast empirical literatures in economics, sociology, psychology, and other disciplines. Many of these studies are “observational” in the econometric sense, meaning that they involve multivariate analysis of correlational data. Two methods have been helpful in overcoming the myriad problems with causal identification in such studies. Correspondence studies are controlled experiments (usually “in the field”) in which an assessor is randomly assigned a job or school application in which potentially biasing demographic or other characteristics are varied (through deception) while holding other (more probative) information constant (see Pager and Shepherd 2008). In a meta-analysis of 738 different tests from 43 separate studies, Zschirnt and Ruedin (2016:1128) find that “[e]quivalent minority candidates need to send around 50% more applications to be invited for an interview than majority candidates.” Audit studies are field experiments in which matched pairs of actors differ in some visual demographic characteristic but are otherwise given identical fake credentials and trained to behave similarly. Pager and Shepherd’s (2008:187) review cites audit estimates of white advantage ranging from 50%–240%.

There are a variety of proposed solutions to these forms of discrimination, including legal sanctions against discriminators, legal remedies for the discriminated, various affirmative action policies, and training and education, including “implicit-bias” training. My focus in this chapter is exclusively on the use of various methods of blinding or anonymity designed to make it difficult or impossible for the decision maker to react to potentially biasing factors.

In 2000, Claudia Goldin and Cecilia Rouse published what is probably the most famous study of blinding in the marketplace, a paper that has been cited almost 1,400 times (as of 2/1/19) according to Google Scholar. After documenting strong biases against women in the classical music industry, Goldin and Rouse note that major orchestras have gradually adopted a blind audition procedure, in which the auditioning musician performs behind a screen so that the selection committee can hear but not see them (Goldin and Rouse 2000:721):

Goldin and Rouse (2000:716) assembled roster data and audition data for eleven different orchestras:

Among the major orchestras, one still does not have any blind round to their audition procedure (Cleveland) and one adopted the screen in 1952 for the preliminary round (Boston Symphony Orchestra), decades before the others. Most other orchestras shifted to blind preliminaries from the early 1970s to the late 1980s. The variation in screen adoption at various rounds in the audition process allows us to assess its use as a treatment.

Using difference-in-differences and fixed-effects methods, the authors argue that blind auditions have had a profound effect on orchestral hiring. For example, the audition data set suggests that "the screen increases—by 50%—the probability that a woman will be advanced from certain preliminary rounds and increases by severalfold the likelihood that a woman will be selected in the final round." Similar analyses of the orchestra roster data suggest that up to 30% of the increase in female representation in orchestras in the 1970–1996 period is attributable to blind auditioning.
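To make the difference-in-differences logic concrete, here is a minimal sketch on synthetic audition data. Everything in it (variable names, effect sizes, the simple linear probability specification) is an illustrative assumption, not Goldin and Rouse's actual data or model:

    # Minimal difference-in-differences sketch on synthetic audition data.
    # Illustrative assumptions throughout; not Goldin and Rouse's model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5000
    female = rng.integers(0, 2, n)   # 1 = female candidate
    blind = rng.integers(0, 2, n)    # 1 = screened (blind) audition round
    # Assume a baseline advancement rate and a penalty for women that
    # operates only in unblinded rounds.
    p_advance = 0.30 - 0.10 * female * (1 - blind)
    advanced = rng.binomial(1, p_advance)

    df = pd.DataFrame({"advanced": advanced, "female": female, "blind": blind})
    # The female:blind coefficient is the difference-in-differences estimate:
    # how much the female-male advancement gap closes when the screen is used.
    print(smf.ols("advanced ~ female * blind", data=df).fit().params)

In this toy setup the interaction term recovers the assumed 10-point penalty that the screen removes; the analyses in the actual study additionally control for orchestra, audition round, and year.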


The logic of blinding in orchestra auditions is premised on the compelling intuition that musical excellence should be judged by auditory and not visual criteria. Surprisingly, Tsay (2013) found that participants "reliably select the actual winners of live music competitions based on silent video recordings, but neither musical novices nor professional musicians were able to identify the winners based on sound recordings or recordings with both video and sound." Blind auditions have not eliminated gender imbalance. According to an analysis in The Washington Post (Edgers 2018), "although women make up nearly 40% of the country's top orchestras, when it comes to the principal, or titled, slots, 240 of 305—or 79%—are men. The gap is even greater in the 'big five'—the orchestras in Boston, Chicago, Cleveland, Philadelphia, and New York. Women occupy just 12 of 73 principal positions in those orchestras." In 2000, Goldin and Rouse noted that most orchestras unblinded the late rounds of auditions, and The Washington Post analysis suggests that this was still true in 2018.

Despite the well-deserved attention that the Goldin and Rouse analysis has received, the use of physical screens is not very representative of actual blinding practices in the marketplace. More typically, blinding is done by redacting information on a document or a computer screen. Most of these studies use the term "anonymity" rather than blinding, but I prefer the latter term, both for continuity and because "anonymity" can also refer to issues of privacy, confidentiality, or secrecy, where the goals and the context often differ.

Åslund and Skans (2012) report on a nonexperimental study of anonymous job applications in Gothenburg, Sweden, from 2004–2006. Using a differences-in-differences model, they found that anonymity increased the rate of interview callbacks for both women and ethnic minorities, but that women, not minorities, received an increase in job offers.

In a 2011 unpublished paper, Bøg and Kranendonk describe two experiments in a Dutch city from 2006–2007. Participation by municipal departments was voluntary. In the first experiment, seven departments were randomly assigned to use either standard or anonymous screening procedures for job applications during the test period. In the second experiment, these assignments were reversed. Note that because the logic of random assignment is based on the law of large numbers, this is a far weaker design than random assignment at the level of the individual application. The authors found that the majority-minority gap in interview callbacks was reduced in the experiment. But in fact, there were similar rates of interview invitations and job offers for minority candidates in the treatment and control conditions, and the reduced gap was produced by fewer callbacks for majority applicants in the anonymous condition.

Behaghel et al. (2015) report a study of anonymous application procedures in a French public employment service from 2010–2011. Private-sector firms who agreed to participate received either anonymous or standard applications from the employment service. The unit of randomization was the job vacancy rather than the firm (though not the job applicant), so this design clearly improves on Bøg and Kranendonk (unpublished). Unexpectedly, the authors found that "participating firms become less likely to interview and hire minority candidates when receiving anonymous résumés." The authors attribute this result to two factors. First, the decision to participate in the experiment may have screened out those firms most likely to discriminate. Second, among the participating firms, anonymization prevented them from providing more favorable treatment to minorities. Anonymization did help women applicants, but only to a limited extent, because for many vacancies applicants were either all male or all female.

Krause et al. (2012) studied five private and three public-sector German organizations. Like Behaghel et al. (2015), they found that anonymity had unintended consequences. Female applicants actually fared better than males under standard applications, and blinding removed this advantage. Applicants in a nonrandomized blinded sample were compared to two different comparison groups: standard applications from the previous year, or applications from the study cohort that were not blinded. Results were mixed; under some circumstances minorities fared better with blinded applications, but under other circumstances they fared somewhat worse. The authors conclude that "the introduction of anonymous job applications can lead to a reduction of discrimination—if discrimination is present in the initial situation. Anonymous job application can also have no effects if no discrimination is present initially, and they can stop measures such as affirmative action that may have been present before. In any case, the effects of anonymous job applications depend on the initial situation" (Krause et al. 2012:12).

Between 1993 and 2010, the U.S. military operated under a personnel policy that is clearly a form of deliberate ignorance and can be viewed as a form of blinding (with the onus of concealment placed on the employee rather than the employer). Under this "Don't ask, don't tell" policy, gay and lesbian service personnel were permitted to serve in the military provided that they concealed their sexual orientation from their peers. This kind of mandated self-concealment has serious limitations. Whereas other blinding approaches are temporary, "Don't ask, don't tell" required ongoing blinding for the course of service—something that proved very difficult to maintain for sexual orientation and impossible to achieve for visible attributes like gender or race. And, of course, the motivation for blinding was very different; whereas blinding in the application process is usually designed to protect the applicant, "Don't ask, don't tell" was essentially designed to protect the unit from the applicant. As I documented elsewhere as part of an assessment that contributed to the repeal of this policy (MacCoun 1993; MacCoun and Hix 2010), this logic was based on a false premise: the idea that knowledge of a unit member's gay or lesbian orientation would somehow impair the unit's ability to work together to accomplish its mission.


Concerns about Blinding

In this review, I have highlighted the potential benefits of blinding as a way of achieving deliberate ignorance when some kinds of knowledge would jeopardize the validity and/or fairness of judgments in science and the marketplace. But blinding also has some potential drawbacks and limitations. The following five issues constitute a research agenda for a comprehensive assessment of blinding.

Does Blinding Actually Blind?

In their study of orchestras, Goldin and Rouse (2000:722) note, but dismiss, the possibility that listeners can still infer gender from auditory cues. They suggest that because "the candidates play only predetermined and brief excerpts from the orchestral repertoire," there is "little or no room for individuality to be expressed and not much time for it to be detected." In the peer review literature (see Largent and Snodgrass 2016), a sizeable fraction of reviewers (25%–50%) believe they can identify the masked author; in some cases, they misidentify the author, which is arguably worse than no blinding at all. In our simulations of data blinding, we found that in some situations, adding noise or bias to data failed to obscure the true experimental outcome. I once participated in a professional meeting in which we tried and failed to identify a foolproof placebo for trials of LSD psychotherapy.

Does Blinding Do More Harm than Good?

As we have seen, blinded job screening can't eliminate discrimination that isn't there, and it can block the application of normatively prescribed biases like affirmative action. In the clinical trial context, Meinert (1998) argues that "[m]asking should not be imposed if it entails avoidable risks for patients. Masked monitoring denies the monitors the key information they need to perform in a competent fashion, and incompetent monitoring poses a risk to research subjects." This concern is hardly groundless, but it is surely an argument for smart blinding rather than no blinding. MacCoun and Perlmutter (2015:188) argue that "when safety is at stake, such as in some clinical trials, it often makes sense to set up an unblinded safety monitor while the rest of the analytical team is in the dark." According to the 2013 statement of the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) group,1 an international consortium of clinical trial experts:

To maintain the overall quality and legitimacy of the clinical trial, code breaks should occur only in exceptional circumstances when knowledge of the actual treatment is absolutely essential for further management of the patient.… Unblinding should not necessarily be a reason for study drug discontinuation.

1 https://www.spirit-statement.org/emergency-unblinding/, accessed on October 18, 2019.

Other dangers seem more remote. Cain et al. (2005) have found that disclosures of a conflict of interest can "morally license" agents to act in a more biased fashion, but blinding seems less likely to have this effect because it mechanically blocks the agent from acting on their biases. Various lines of research indicate that being anonymous can promote (or reveal) antisocial impulses and actions (Postmes and Spears 1998), but this seems unlikely in the domains examined here because anonymity is not being offered to active participants in the process where blinding occurs (data analysis, review of applications, etc.).

Will Biases "Find a Way"?

To adapt a line from Jurassic Park, another concern is that even if blinding works, somehow biases will find a way—that is, blocking a bias through blinding will just open a path to a different manifestation of bias. This could happen in several different ways. When a target attribute is inaccessible or difficult to cognitively process, individuals often substitute one cue for another, a process Brunswik called "vicarious functioning" (see Gigerenzer and Kurz 2001; Kahneman and Frederick 2002). For example, if reviewers do not know a performer's gender, they may put more weight on presumed proxy variables like the volume or dynamics of the music. If the substitute variable is actually a good cue (relative to some normative system), so much the better. But there are ways in which substitution could make things as bad as, or worse than, the original situation. In a strong case of taste discrimination, the judge may reject all candidates rather than risk the possibility of selecting a member of the disliked class. Or the judge may reject all candidates who have a proxy cue that the judge associates with the disliked class. There's a troubling real-world example. Based on evidence that prison records were making it difficult for many African American men to find jobs, many jurisdictions adopted "ban the box" policies that prohibited employers from including a "criminal history" checkbox on job application forms. Unfortunately, there is convincing observational (Holzer et al. 2006) and experimental (Agan and Starr 2018) evidence that this policy has the opposite effect—it significantly reduces the hiring of members of groups that employers associate with criminality. In essence, when blinding blocks employers from considering criminal justice information, they will often use race or ethnicity as a proxy, potentially replacing a smaller category (men with criminal records) with a larger one (men of color). Given a set of available variables, what correlational and causal structures are most conducive to effective, ineffective, or even pernicious applications of blinding? This is a topic that merits further theory and research.
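The substitution dynamic can be illustrated with a small omitted-variable simulation (a sketch under invented parameters, not a reanalysis of the ban-the-box studies). When the probative cue is visible, the group marker carries no predictive weight; when the cue is blinded, the marker absorbs that weight:

    # Minimal sketch of bias "finding a way": blinding a probative cue
    # (a record) shifts its predictive weight onto a correlated group
    # marker. All parameters are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 10_000
    group = rng.integers(0, 2, n)                     # visible proxy attribute
    record = rng.binomial(1, 0.1 + 0.2 * group)       # cue, more common in group 1
    outcome = 1.0 * record + rng.normal(0.0, 1.0, n)  # depends only on the cue

    def ols_coefs(predictors, y):
        X = np.column_stack([np.ones(len(y)), *predictors])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    visible = ols_coefs([group, record], outcome)
    blinded = ols_coefs([group], outcome)
    print("group weight, cue visible: %.3f" % visible[1])   # ~0.0
    print("group weight, cue blinded: %.3f" % blinded[1])   # ~0.2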


Does Blinding Crowd Out Better Solutions?

In psychology and sociology there has been a lively debate about the relative merits of "color blindness" versus "multiculturalism" as remedies for racial and ethnic discrimination. For example, Boddie (2018) complains that

The problem is that no one is colorblind, and acting as if we are makes us worse off, not better.… While whites may be conscious of others' race, they often are not conscious of their own because they do not have to be. Colorblindness, therefore, forces race underground. It turns people of color into tokens and entrenches whiteness as the default.

Plaut et al. (2018:204) offer a nuanced empirical review of the tradeoffs inherent in the choice between color blindness and multiculturalism, concluding:

Color blindness, while often heralded as a remedy for racism, can foster negative outcomes for people of color (e.g., interpersonal discrimination). Moreover, color blindness serves to reify the social order, as it allows Whites to see themselves as nonprejudiced, can be used to defend current racial hierarchies, and diminishes sensitivity to racism. Multiculturalism can provoke threat and prejudice in Whites, but multicultural practices can positively affect outcomes and participation of people of color in different institutional arenas. Yet it also has the potential to caricature and demotivate them and mask discrimination.

Does blinding crowd out other data collection and analysis strategies in science? Possibly. Many have argued that the conditions required to implement a proper double-blind randomized trial create unrepresentative—and hence misleading—circumstances. And I suppose blinded data collection could crowd out preregistration and other bias-control policies if we let it. In data analysis, a bigger concern is that blinding might blunt the possibility of making unanticipated discoveries in the data. Yet blind methods allow the analyst to supplement preregistered analyses with more exploratory analyses, while still minimizing the effects of wishful thinking on interpretation.

Blinding When Normative Systems Collide

The logic of blinding is relatively straightforward when there is a single normative system (e.g., "find the truth") for defining bad cues. In most domains, there are multiple normative systems making claims on our conduct—truthfulness, fairness, collegiality, and the like. I don't think anything in my review points to stark differences in how blinding can or should work for these different normative systems. But certainly, things get trickier when there are conflicting normative demands—for example, validity versus fairness. Then, a cue might be "good" with respect to one system but "bad" with respect to another. These issues have been explored in depth in the professional literature on the psychometrics of ability testing and assessment, but they have not been solved or resolved, and I suspect similar issues will arise in applications of blinding in other domains.

Deep Structure

5 The Deep Structure of Deliberate Ignorance: Mapping the Terrain

Barry Schwartz, Peter J. Richerson, Benjamin E. Berkman, Jens Frankenreiter, David Hagmann, Derek M. Isaacowitz, Thorsten Pachur, Lael J. Schooler, and Peter Wehling

Abstract

This chapter explores the "deep structure" of deliberate ignorance, defined as an individual's or collective's intentional choice to create a short- or long-term barrier to information for the individual or collective who made the choice. This definition is used to identify clear cases while acknowledging that the key terms of the definition (deliberate and ignorance) admit of ambiguity. It is argued that the frequency, forms, and functions of deliberate ignorance may vary across individuals as well as domains of information. Potential causal variables are suggested (e.g., the utility of the information, the nature of the information environment, the level of relevant parties who initiate and are affected by deliberate ignorance, and the legal, ethical, and social context within which deliberate ignorance occurs) and possible consequences are explored for the actors who engage in deliberate ignorance. Finally, the potential time course of deliberate ignorance is discussed within an episode of deliberate ignorance itself, across life-span development as well as cultural and biological evolutionary time.

Introduction

"It is always better to have more information than less." "The more information you have, the better the decision you will make." Statements like these seem uncontroversial, especially if information is easy to come by and potential benefits are high. It is the obviousness of the value of information that led Hertwig and Engel to describe a set of circumstances in which people deliberately avoid information that is easy to come by (see Hertwig and Engel, this volume, 2016). They call this set of phenomena "deliberate ignorance." For discussion of a related set of phenomena, which they refer to as "information avoidance," see Golman et al. (2017); for a case study of "intentional ignorance" in the medical domain, see Owens (2017). Hertwig and Engel define deliberate ignorance as "a conscious choice not to seek information or knowledge, especially where acquisition costs are small and potential benefits are large." Consider the following cases:

•	A person takes an HIV test, receives an envelope with the results, but does not open the envelope.
•	A person receives a quarterly statement from a retirement investment fund, opens the envelope but puts the document, unread, in a file folder.
•	A scholarly journal requires that all submitted manuscripts must be free of any information that identifies the authors.
•	A personnel department scrubs gender, race, and ethnicity from its job applications.

These are all examples in which information possibly, or even probably, has high signal value, yet it is avoided. What might explain this desire for less information? Building on Hertwig and Engel's attempts to characterize deliberate ignorance, describe various contexts in which it occurs, and delineate various functions it might serve, our goals here are as follows:

•	To define and delimit deliberate ignorance, distinguish it from other (perhaps related) phenomena, and begin to characterize mechanisms and cognitive functions that implement deliberate ignorance. Our aim is to identify the clear cases of deliberate ignorance and leave more ambiguous cases for future inquiry.
•	To explore the extent to which the phenomenon of deliberate ignorance, the functions it serves, and the factors that affect it are common across different actors as well as different domains of experience (e.g., personal medical information, personal financial information), and the extent to which there may be domain specificity to deliberate ignorance.
•	To discuss some of the psychological and cultural mechanisms that may be involved in the phenomenon of deliberate ignorance.
•	To identify potential key variables, both with respect to causal factors that influence and consequences that follow deliberate ignorance. We will emphasize consequences of deliberate ignorance that are, at present, least well studied and least well theorized.
•	To examine deliberate ignorance as it may operate at different time scales, both developmental and evolutionary.

Hertwig and Engel identied six different types of functions that deliberate ignorance might serve: 1. 2. 3. 4. 5. 6.

To heighten suspense and surprise (e.g., when we don’t want to know the ending of a thriller when we sit down to watch it, or the sex of an unborn child). To provide a strategic advantage in certain competitive bargaining situations, an idea that originated with the counterintuitive observations of Schelling (1956; see also Dana 2006). To enable people to manage and sustain their limited cognitive resources (e.g., Crawford 2015). To maintain impartiality and fairness (see MacCoun, this volume). To enhance performance (e.g., Kluger and DeNisi 1996). To regulate emotions, such as avoidance of worry or regret (e.g., Howell and Shepperd 2013; Karlsson et al. 2009; Yaniv et al. 2004).

In this chapter, we focus on a subset of these functions. We do not discuss surprise and suspense maintenance because we think that in such cases it is not the information itself but the anticipation of having it revealed in the future (temporary information avoidance) that seems crucial. We also do not discuss cognitive resource management, because this is already reasonably well studied and well theorized (see Sims 2003; for a paper that launched the study of the "economics of information," see Stigler 1961), and we do not discuss strategic ignorance, because it has been a topic of research for more than half a century in experimental game theory and other contexts. However, there are two qualifications to this last exclusion. First, the focus of research on strategic ignorance has largely been on interpersonal settings, like experimental games or negotiations. Less studied and theorized about is what might be called intrapersonal strategic situations, in which, for example, ignoring some information might make it easier for someone to fulfill a long-term goal or execute a plan (e.g., Carrillo and Mariotti 2000; Woolley and Risen 2018). Second, in Schelling's (1956) groundbreaking work on bargaining and negotiation, many of his examples involved strategic binding (i.e., deliberately limiting one's options) rather than strategic blinding (i.e., deliberately limiting one's information). The latter is a clear case of deliberate ignorance whereas the former is not.

What Counts as Deliberate Ignorance

In generating a definition of deliberate ignorance that helps us identify clear-cut examples, we do not claim that cases falling outside of this narrow definition should be excluded from future analyses. We merely wish to start with the cleanest cases to help us understand what is special about the concept. Each of the key terms in the definition, deliberate and ignorance, is associated with complexities and uncertainties. To begin, let us consider the definition of deliberate ignorance provided by Hertwig and Engel (p. 5, this volume):

[T]he conscious individual or collective choice not to seek or use information (or knowledge; we use the terms interchangeably). We are particularly interested in situations where the marginal acquisition costs are negligible and the potential benefits potentially large, such that—from the perspective of the economics of information (Stigler 1961)—acquiring information would seem to be rational (Martinelli 2006).

Expanding on this denition (our additions are italicized), an individual’s or collective’s intentional choice to create a short- or long-term barrier to information for the individual or collective who made the choice,

we will explain how our definition differs from that of Hertwig and Engel. First, we consider all decisions that create barriers to information as potential candidates for deliberate ignorance. This definition covers most decisions not to seek or use information as described by Hertwig and Engel, but it covers other situations as well. Most importantly, we consider situations in which an actor adopts measures that make it harder to access certain information in the future as falling under the definition of deliberate ignorance. For example, an investor might opt not to receive quarterly portfolio reports online. A patient might opt not to be sent a report of blood work results unless there is a problem. One particularly important dimension where our definition is potentially broader than that adopted by Hertwig and Engel lies in its treatment of decisions to make previously accessible information inaccessible in the future. Hertwig and Engel's definition seems to exclude such acts from the definition of deliberate ignorance. Our definition includes them. Our reason for this decision is that we think that the factors that motivate actors not to access information might also motivate them to destroy it. For instance, if not knowing certain information conveys a strategic advantage, it does not matter whether an actor does not access it in the first place or manages to effectively "forget" it (see Schooler, this volume). Such decisions could also raise questions that require special attention. We exclude from the definition those decisions that affect exclusively the ability of others to access information. It is possible to think of many cases in which it is in an actor's interest to withhold information from others or even actively deceive them. If successful, such actions might lead to ignorance in others. From the perspective of the targets of such actions, however, this ignorance cannot be considered "deliberate." By contrast, were an actor to employ other agents in an attempt to erect barriers to information, we would consider this an act of deliberate ignorance by the actor.


Perhaps the most important element in our definition of deliberate ignorance is the adoption of an intentionality requirement. We define intentional as comprising a voluntary element in addition to knowledge about the potential consequences of an action. In the context of our definition, intentional choices refer exclusively to choices whose goals include the creation of barriers to information. Most importantly, this requirement is not fulfilled if the creation of barriers to information is a side effect of some other action. For example, a decision to involve an agent will almost invariably create some barriers to information for the principal. However, these barriers to information are likely not the primary goal of the choice to involve an agent. Similarly, we understand intentionality to exclude most cases in which information is not accessed for reasons related to the cost of accessing and processing this information. This also implies that those cases on which Hertwig and Engel are not particularly focused (cases in which the marginal acquisition costs are nonnegligible or the potential benefits are small) are generally excluded from our definition of deliberate ignorance. Perhaps more accurately, while not excluded from the definition, they are not a focus of interest, being neither surprising nor puzzling. Finally, we use the word "intentional" rather than "conscious" (as in the original definition) to highlight that the goal of a decision is to erect barriers to information. One may be quite conscious of the decision to file away a quarterly investment report without reading it and yet do it without the intent of avoiding information. People may file away unread reports because they are too busy to attend to them, or think they don't know enough to act on the information, or plan to study the report on the weekend when they have more time. In none of these cases is it the person's intention to withhold the information from scrutiny. The intentionality requirement clearly narrows the scope of instances of deliberate ignorance, but we think it puts the focus on the most interesting and puzzling phenomena.

Complexities of "Deliberate"

"Deliberate" implies intentionality and usually consciousness. That said, we believe that there can be something of a continuum of deliberateness. For example, consider the possibility that what starts out as deliberate becomes habitual. After some reflection, a person decides not to read their first quarterly investment statement. The same decision is made more quickly the next time, and the next, and so on, until stuffing the statement in a folder becomes a mindless habit. In this sort of case, the ignorance is no longer deliberate, though it was at first. Similarly, imagine a professor who decides, as a matter of policy, not to consult student grades on previous assignments in grading the current one. This is a case of deliberate ignorance. Subsequently, the professor simply enacts the policy, without thinking. Just as buckling a seat belt can go from deliberate to automatic, so can decisions to ignore information.


As another type of example, consider deliberate ignorance in certain social situations. A person decides not to ask a friend how her troubled marriage is going. This may be quite deliberate—an adherence to norms of politeness and personal privacy. This example is not discontinuous with many examples in which people do not ask questions, so as not to pry or be impolite, without engaging in much reflection about the matter. One typically adheres to norms of politeness without giving the matter much thought. The social norm of "respect for privacy" may be quite powerful. We avert our gaze from a friend's open browser window. We avoid looking at our romantic partner's emails and texts. We may do so to maintain social ties or to avoid shame and embarrassment. As Elias (1978) argues in his classic work, The Civilizing Process: The History of Manners, the history of Western society from the Middle Ages to the nineteenth century represented a gradual transformation in people's ideas concerning manners and bodily propriety. Central to this transformation were decisive changes in feelings of shame, repugnance, and embarrassment that attended a wide range of human bodily functions such as eating, spitting, blowing one's nose, urinating, or defecating. These changes in manners and associated feelings may change what we deliberately do not want to know. We do not want to know, for instance, what other people do in their bathrooms or bedrooms, or things that trigger feelings of shame on someone else's behalf. Moreover, as social norms change, the tendency to pursue deliberate ignorance may change. At the same time that privacy regarding bodily functions is enhanced, the explosion of social media may already have transformed people's notions of privacy regarding other information about their daily lives.

Related to politeness, relations of trust are partly defined by one's unwillingness to check up on someone to make sure something has been done. This may be deliberate, in the sense that merely asking the question would be a violation of trust. Alternatively, it may be quite automatic, wrapped up in the very notion of trust. A slogan often heard in the domain of foreign relations—"trust, but verify"—is quite literally a contradiction.1 As with most social processes, behavior dictated by concerns about politeness or trust can change as social norms change. Consider the adage that "it takes a village to raise a child." This implies that one's fellow villagers are authorized to step into the parental role if action is needed. Whereas this intrusiveness may be socially acceptable in some places, and may have been socially acceptable at some times in other places, presently, many people in many places would regard adopting a parental role as a member of the "village" as a deep violation. Cuddihy (1974, 1978) has written extensively about how the boundaries between public and private, or between self and other, have differed historically among religious groups in the United States. From the perspective of our definition of deliberate ignorance, socially embedded practices like politeness and trust pose a problem in that they may not (always) be intentional. Nonetheless, we think they should be included in the definition because the range of phenomena they encompass may be vast.2

1	Trust, of course, is a very broad phenomenon. Trust does not necessarily equate to an unwillingness to check up on someone. Controlling other people is often costly from an economic point of view; that is, it requires time, effort, and potentially sophistication that may not be available to the principal. Thus, trust may be cost minimizing rather than norm preserving.
2	It should be noted that choosing not to ask a question may be due to a fear of the social consequences of asking. Effectively, the actor is choosing ignorance, but not for lack of wanting to know.

Complexities of "Ignorance"

The central distinction to be made here is between ignorance and error. Ignorance is not knowing. Error is false belief: "knowing" what isn't true (see Lewandowsky, this volume). Being wrong is not the same as being ignorant. It is unremarkable that people make errors. More remarkable is that people are ignorant, by choice. In this connection, consider the pervasive phenomenon of confirmation bias (e.g., Wason and Johnson-Laird 1972). The "bias" is that people overvalue evidence that can confirm a hypothesis and undervalue (or do not value at all) evidence that can falsify a hypothesis. By not valuing falsifying evidence, are people being deliberately ignorant? We think the right answer to this question is that it depends. Unmotivated confirmation bias reflects error; people erroneously evaluate the evidentiary usefulness of various pieces of information. They overvalue some information and undervalue other information. Motivated confirmation bias, by contrast, may be ignorance, not error, in that under these circumstances, it is at least possible that people appreciate the evidentiary usefulness of information but choose to ignore it nonetheless (see Dawson et al. 2002). We hasten to add in this context that although the word "bias" in "confirmation bias" implies that information seeking is nonoptimal, there may be some circumstances in which confirmation bias serves people well (see, e.g., Hahn and Harris 2014; Oaksford and Chater 2001). Imagine, for example, a researcher embarking on a new line of investigation of a phenomenon that is difficult to produce in the laboratory. A selective focus on successes may keep the researcher engaged in efforts to produce the phenomenon more reliably and robustly. Attention to disconfirmation might nip the research enterprise in the bud.

In summary, we think that deliberate ignorance is best thought of as a kind of "natural category" (Rosch 1973; Wittgenstein 1953). There are clear, prototypical examples: Should the HIV test result envelope be opened or the quarterly investment report read? There are also other examples whose membership in the category is graded, in large part, by how deliberate they are and how much they reflect ignorance rather than error. Natural categories possess no "necessary and sufficient features" in the way that artificial, scientific categories do (e.g., a geometric shape either is or is not a triangle). These categories possess instances that resemble each other just as members of a family resemble each other. Very good examples of a category are prototypical. Other examples become increasingly less good as they are less like the prototype. Wittgenstein famously identified "tools" and "games" as examples of natural categories. A prototypical tool might have been a hammer or a screwdriver when he made his observations. These days, it might be a computer, a cell phone, or an app. We think it is a wise research strategy to focus studies of deliberate ignorance on prototypical examples and then extend research outward to less-clear examples as the concept of deliberate ignorance becomes better understood.

Domain Specicity of Deliberate Ignorance As we have dened it, deliberate ignorance can be found across widely different domains of information, yet it is still possible for its incidence, character, causes, and consequences to be domain specic. People may be more likely to pass up information in domains where expertise has been acknowledged than in domains that are more matters of personal preference. For instance, people might defer to nancial advisors or doctors but pore over available information about restaurant options in a city they will be visiting. In addition, the degree to which individuals can understand information, or think they can understand information, may vary across domains. A geneticist, for example, is likely to have substantial background knowledge relevant to health but may not have any knowledge related to nances. A given piece of information (e.g., the results of a genetic test or the composition of an index fund) may therefore have different value to the geneticist than to an economist. Consequently, interest in receiving information may differ across the domains: the geneticist may be more willing to trust an expert and remain ignorant about nances, while taking an active interest in the detailed results of the genetic test. The general point here is that instances of deliberate ignorance may be a function of the difference between how much one already knows and how much one needs to know for the information to be effective. Since individuals’ depth of knowledge will vary across domains, as will the informational complexity of domains, domain specicity seems to be highly likely. Interestingly, the geneticist may still exhibit greater deliberate ignorance in the health domain than the nancial domain: deliberate ignorance requires awareness that the information exists in the rst place, thus the geneticist has more opportunities to be “deliberately” ignorant in the health domain. Information in some domains may, moreover, be inherently more informative independent of relevant knowledge. A genetic test for Huntington disease, for example, is a near perfect diagnostic of the underlying

The Deep Structure of Deliberate Ignorance

73

condition, leaving no room for doubt once results are obtained. Other information, particularly in cases that involve others’ judgments or evaluations, can be subject to substantial noise. Learning that a colleague is (stubbornly) unpersuaded by a new theory does not resolve whether the theory is indeed useful. Moreover, information in some domains may inherently be more informative in guiding decisions. Someone who learns of a treatable or curable medical condition, for instance, can take concrete actions to improve future outcomes. Opportunities to respond to information that one is unattractive, on the other hand, may be limited. While holding accurate information may still confer some advantages (e.g., better calibrating expectations on the dating market), these benets are more nebulous. People also may have (potentially motivated or biased) beliefs about how actionable or informative information is. For instance, people’s beliefs about how painful it is to talk to those with opposing political views may be exaggerated (Dorison et al. 2019). The desire to seek or avoid information may further differ across domains as a result of prevailing social norms. If friends are likely to discuss current political events, choosing not to read the latest news may impose a social cost. Not only may it preclude participation in discussion, but you may be judged adversely for failing to adhere to a “duty” to be informed. In other cases, the decision not to remain ignorant may violate social norms, a point we made above regarding privacy and trust. When we see an open browser window with our signicant other’s emails displayed, succumbing to the temptation to read the emails (and hence acquire potentially new and useful information) is likely to be judged unfavorably. These examples raise the question of whether the motivations and strategies for deliberate ignorance differ fundamentally across domains or whether there are indeed commonalities reective of “information preferences” that hold across domains. Previous work in psychology, economics, and other disciplines has found substantial avoidance across consequential domains: information about health (Oster et al. 2013) or nances (Sicherman et al. 2016), among others. There are several possible domain-specic inuences on deliberate ignorance. Information in some domains may have instrumental value, in others it may have hedonic value, and in still others it may have both. Deliberate ignorance may be a function of what kind of value the ignored information may have. Knowledge of one’s portfolio can have both instrumental and hedonic value; knowing you were well thought of by one of your college professors some years ago may have only hedonic value; knowing which route from Berkeley to Palo Alto has less traffic may have only instrumental value. In addition, there may be domain specicity in what one is expected to know. When someone says that some issue is “above my pay grade,” domain specicity regarding “who is in charge or who is the expert” may partly be at play.
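The idea that information matters only insofar as it can change what one does can be made precise with a standard expected-value-of-information calculation. The sketch below uses made-up probabilities and payoffs and assumes a perfectly diagnostic test; the hedonic costs of knowing discussed above fall outside this simple calculus:

    # Minimal sketch: expected value of (perfect) information.
    # All probabilities and payoffs are illustrative assumptions.
    p = 0.10                       # prior probability the condition is present
    payoff = {                     # utility of each (action, condition) pair
        ("treat", True): 10, ("treat", False): -2,
        ("ignore", True): -20, ("ignore", False): 0,
    }

    def expected(action, p):
        return p * payoff[(action, True)] + (1 - p) * payoff[(action, False)]

    # Best single action without the test result:
    ev_uninformed = max(expected(a, p) for a in ("treat", "ignore"))
    # With a perfect test, the decision maker acts optimally in each state:
    ev_informed = (p * max(payoff[("treat", True)], payoff[("ignore", True)])
                   + (1 - p) * max(payoff[("treat", False)], payoff[("ignore", False)]))
    print(ev_informed - ev_uninformed)   # 1.8: what knowing is worth here

When the best action is the same in every state, this difference is zero, which is one formal reading of "I don't need to know."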


Domain specicity may also vary across time and culture. Consider attitudes about whether people should trust or rely on experts.3 “Trust the expert” as guidance may change as a culture’s attitude about “expertise” changes. “Rely on the expert” (Wegwarth and Gigerenzer 2013) may change as a culture’s attitude about who bears responsibility for outcomes changes (for the distinction between “trusting” as a social practice of deliberate ignorance and “relying on” as a form of imposed and seemingly unavoidable ignorance, see Townley 2011). For instance, healthcare in the United States has become much more consumer driven over the last fty years. Prescription drugs are advertised to patients, who, of course, cannot go out and just purchase them. Doctors are admonished not to be paternalistic and to make certain that patients realize that the decision rests with them. This change in cultural attitude toward the role of expertise in decision making may have made patients less “deliberately ignorant” in 2019 than they were in 1969. Finally, domain specicity may play a role in whether an individual’s attitude toward information is “I don’t need to know,” “I can’t know (it’s too complicated),” or “I don’t want to know.” Which of these responses a particular piece of information provokes may affect the frequency of deliberate ignorance as well as its consequences. What is called the “illusion of explanatory depth” might be relevant to choosing not to know (e.g., Keil 2006; Sloman and Fernbach 2017). The illusion of explanatory depth refers to the fact that most people say yes when asked if they know how a toilet (a zipper or a bicycle) works. Yet when asked to explain how it works, most people’s knowledge is very shallow. Discovering this fact about their ignorance encourages people to change their view about how well they understand. If some deliberate ignorance stems from an “I don’t need to know” attitude, and some originates from an “I already know enough about that” attitude, then explanatory depth-type manipulations may reduce deliberate ignorance. The importance of domain specicity is, of course, unknown at this time. Also unknown are the dimensions along which we can most usefully characterize domains (e.g., by function, by complexity, or something else). Asking “Who is the expert here?” might help parse domains usefully. Other possible useful distinctions among domains are whether the information is actionable or not, and whether it is hedonically charged or not. Despite well-documented information-avoidance behavior, some people routinely get tested for sexually transmitted diseases, frequently (and maybe excessively) check the value of their portfolios, or expose themselves to political views contrary to their own. This suggests that information preferences may be an important source of individual differences, similar to time and risk preferences. In addition, whether the potential benet of information is small 3

The distinction between “trust” and “rely on” is meant to capture the possibility that we may rely on others in some domains whether or not we trust them and their expertise.

The Deep Structure of Deliberate Ignorance

75

or large may be answered quite differently by different social actors. Consider the example of the genetic test for Huntington disease: For some individuals “at risk,” there might be a huge benet in denitively knowing whether or not they carry the respective genetic variant. For others, this information might be a source of potential harm in that it will destroy uncertainty as an indispensable resource for leading a self-determined life. There are, in other words, no unequivocal and easily generalizable criteria for judging whether (and for whom) the benet of knowing something is small or large. We may learn about individual differences and domain specicity by measuring people’s willingness to remain ignorant in different domains. Is the decision to learn about one’s health predictive of the decision to learn about one’s attractiveness? If so, this suggests there is an underlying common factor. Relatedly, we may wonder whether ndings on deliberate ignorance are driven by a small subset of the population that wishes to remain ignorant about information very broadly, or whether a large fraction (or even a majority) of people deliberately choose to remain ignorant in at least some settings. One measure of information preferences is a scale developed by Ho et al. (2018): respondents are asked to imagine themselves in a series of hypothetical scenarios in which they can choose to obtain (or not obtain) information. The scenarios cover three domains that span many high-stakes decisions, and for which there exists empirical evidence of avoidance: personal health (e.g., the choice to obtain information about life expectancy); personal nance (e.g., the choice to learn about alternative investments that could have been pursued); and personal characteristics (e.g., one’s attractiveness). Ho et al. (2018) nd that items from each of the domains load onto domain-specic latent factors, and these latent variables load onto a general factor. Moreover, the general factor is predictive of consequential information acquisition in the three domains. Suggestive of general information preferences, the scale is also able to predict the decision to acquire information outside of these particular domains; namely, the decision to learn about the gender wage gap in one’s industry, the consequences of climate change to one’s local community, and unfavorable information about one’s political party. In addition, there are small to moderate correlations between information preference and dispositional factors, with some variation across domains. These ndings suggest that domain specicity may be superimposed on some person-level preferences related to domaingeneral deliberate ignorance.
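The hierarchical structure reported by Ho et al., with domain factors nested under a general factor, can be illustrated on simulated scale responses. The loadings, item counts, and sum scoring below are assumptions for illustration, not the published psychometrics:

    # Minimal sketch of a general information-preference factor.
    # Loadings and item counts are invented, not Ho et al.'s (2018) model.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 2000
    g = rng.normal(size=n)                        # general preference factor
    scores = []
    for _ in ("health", "finance", "characteristics"):
        d = 0.6 * g + 0.8 * rng.normal(size=n)    # domain factor loading on g
        items = d[:, None] + rng.normal(size=(n, 5))  # five noisy items
        scores.append(items.sum(axis=1))          # simple domain sum score

    # Positive cross-domain correlations reveal the shared general factor.
    print(np.corrcoef(np.column_stack(scores), rowvar=False).round(2))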

Psychological Mechanisms That Implement Deliberate Ignorance

Since deliberate ignorance can manifest itself in various ways and circumstances, there are also likely to be somewhat different types of mechanisms involved. On a general level, one important distinction is between (a) mechanisms for situations in which the agent knows about the existence of the relevant information but has not yet acquired it, so that the information is not yet internally represented (e.g., the HIV test result), and (b) mechanisms for situations in which the information has already previously been acquired and is internally represented (e.g., the example of the flute episode discussed by Schooler, this volume). Below, we discuss these two types of mechanisms and their cognitive requirements in turn.

In situations in which the information is not internally represented, ignoring it means not including it in the search rule one uses during information search, although it may very well be considered relevant in principle for the task at hand. Implementing deliberate ignorance in such a situation involves, in particular, executive functions of planning and selective attention. In addition, when the to-be-ignored information is encountered by chance during search, it has to be recognized as such, which requires constant monitoring and matching against the current task goal. Importantly, relative to purely exploratory information search, targeted information search (i.e., search that focuses on some specific kinds of information and explicitly excludes others) has been shown to entail cognitive costs. For instance, Fechner et al. (2018) developed a model that implemented various decision strategies that differed in their search rule within a given cognitive architecture. One decision strategy gathered all relevant attributes it could find in the environment, and its search rule did not mandate a particular order in which attributes had to be inspected. The other strategy processed attributes in a particular sequence and stopped information search as soon as a given attribute allowed it to discriminate between the options; all other attributes were ignored. Because the strategies were implemented in a common cognitive architecture, Fechner et al. could determine the cognitive costs (in terms of predicted response time) that the strategies produced for processes of information retrieval, action coordination, perception, and motor responses. It turned out that although the second strategy often ignored some of the attributes available in the environment, it produced higher cognitive costs than the strategy that considered all available information. These costs were produced, in particular, by the strategy's search rule, which mandated a focus on specific attributes and the exclusion of others. These results were subsequently confirmed in an empirical study.

Situations in which the to-be-ignored information is already represented in the cognitive system involve further complexities. Due to the architecture of the cognitive system, if a piece of information is generally relevant for a task at hand, its activation in memory will be enhanced. Ignoring this information therefore requires an active downregulation of the cognitive system, for instance, by processes of suppression and inhibitory control. These processes involve considerable mental effort (with pronounced individual differences). In addition, processes of inhibition decline in older age (Hasher and Zacks 1988; but see Rey-Mermet and Gade 2018).
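The contrast between the two search rules can be sketched as follows. The attribute values and validity order are invented, and the sketch counts only attribute inspections; by design it omits the ordering, monitoring, and stopping demands that made the selective rule costlier in Fechner et al.'s cognitive architecture:

    # Two search rules over option attributes, in the spirit of (but not
    # reproducing) Fechner et al. (2018). Attribute values are invented.
    OPTION_A = {"price": 3, "quality": 3, "brand": 1, "warranty": 2}
    OPTION_B = {"price": 3, "quality": 2, "brand": 3, "warranty": 1}
    ATTRIBUTE_ORDER = ["price", "quality", "brand", "warranty"]  # assumed order

    def exhaustive(a, b):
        """Inspect every attribute of both options, in no mandated order."""
        inspections = 2 * len(a)
        score = sum(1 if a[k] > b[k] else -1 if a[k] < b[k] else 0 for k in a)
        return ("A" if score >= 0 else "B"), inspections

    def lexicographic(a, b):
        """Inspect attributes in a fixed order; stop at the first one that
        discriminates between the options, ignoring all the rest."""
        inspections = 0
        for k in ATTRIBUTE_ORDER:
            inspections += 2
            if a[k] != b[k]:
                return ("A" if a[k] > b[k] else "B"), inspections
        return "A", inspections  # tie on every attribute

    print(exhaustive(OPTION_A, OPTION_B))     # ('A', 8)
    print(lexicographic(OPTION_A, OPTION_B))  # ('A', 4)

Counting lookups alone makes the selective rule look cheaper, which is precisely why Fechner et al.'s architectural finding, that the selective rule produced higher predicted response times, is informative.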


In summary, deliberate ignorance will often make specific demands on the cognitive system. Based on this requirement profile, one can make predictions regarding individual differences in engagement in deliberate ignorance, how it might change across the life span, and which situational variables (e.g., time pressure, dual-task load) will modulate engagement in deliberate ignorance.

Causal and Consequential Variables

In the consideration of possible variables that might influence deliberate ignorance, we divide our discussion into two major parts. First, we identify variables that may affect the likelihood of deliberate ignorance—potential causal variables. Then we identify possible effects of deliberate ignorance. This discussion is intended to be the first, not the last, word on how we might subject deliberate ignorance to further empirical investigation.

Potential Causal Variables

We identified four categories of causal factors: utility of the information; information environment; level of relevant parties; and legal, ethical, and social considerations. Virtually all of these variables have implications for normative assessment of deliberate ignorance.4

Utility of the Information

As indicated in our discussion of possible domain specificity, the type of information (e.g., health, financial, personal characteristics, political) that is potentially being ignored might be salient. We might make different normative or policy judgments about cases depending on this variable. Across domains, however, we suspect that judgments about the significance and appropriateness of deliberate ignorance will focus on the extent to which the ignored information is perceived to have net utility. This judgment will likely have both causal and normative impacts. There are a number of factors relevant to perceived net utility (see Table 5.1). Across these factors, the core task is to assess the possible benefits and harms of having the information. While it might be difficult to list comprehensively and measure precisely all the possible effects a piece of information could have, the goal is to try to determine at least whether or not knowledge of the information would create a net benefit or harm.

4	It should be noted that in the context of the Enlightenment-based attitude (where more information is always better than less), instances of deliberate ignorance tend to carry almost automatic negative normative judgments. That is, one must defend or justify decisions to remain ignorant, whereas decisions to acquire knowledge need no justification.


Table 5.1	Variables that may affect the utility of information.

Benefit/risk of knowing
•	Magnitude
•	Duration
•	Marginal value compared to baseline

Uncertainty
•	Quality of evidence
•	Applicability to an individual
•	Decisiveness of information

Timing of relevance
•	Temporal gap between risk and benefit manifestation
•	Life stage

Actionability
•	Magnitude
•	Direct medical or preventative action
•	Indirect action
•	Small marginal impact

Accessibility
•	Understandability
•	Held by experts

Relevant informational characteristics might include the magnitude and duration of the risk or benefit. It is also relevant to consider the marginal value of the information, particularly as it compares to the baseline of already acquired information. Information that makes someone newly aware of an issue will be more valuable than information that merely adds detail to an already established area of knowledge. The magnitude and direction of the information's utility will be extremely salient and will often serve as an initial threshold question when analyzing an instance of deliberate ignorance. When information has higher potential net benefit it will be easier to make a normative claim that deliberate ignorance is inappropriate; information with lower or negative value will not generally be associated with disapproval.

A related consideration is the uncertainty associated with a piece of information. We use "uncertainty" in a number of different ways:

1.	Uncertainty related to the quality or amount of evidence available (which raises questions about the certainty with which conclusions can be drawn): Even if a patient definitely has a particular genetic variant, there might be weaker (e.g., single case report) or stronger (e.g., population-level data) evidence about the link between that genotype and a particular pathogenic phenotype.
2.	Uncertainty about the applicability of information to a particular person: Well-characterized genetic variants are often still only partially penetrant; that is, only a subset of people with that variant actually manifest the disease. In any particular patient, there is uncertainty about whether or not having that variant will prove to be relevant to health.
3.	Uncertainty about the decisiveness of a particular piece of information: Particularly when information is associated with future consequences, it might be unclear how relevant the expected outcome might be for a person's future given the unpredictability of life. For example, knowing at age thirty that you are predisposed to develop cancer in your sixties would be irrelevant if you die of a different cause beforehand.
4.	Uncertainty about the importance of the gap between how much one knows already and how much one needs to know: If a person already feels well informed, that person may justifiably ignore new information when it is offered.

As with utility, uncertainty will be relevant to the moral judgments we make about deliberate ignorance. Highly uncertain information will be less useful and will be more open to morally justified deliberate ignorance.

This relates to a third variable: timing of relevance. Certain kinds of information will be immediately relevant, while others might have delayed relevance. This will create situations where the harms and benefits of the information might be distant in time from each other. For instance, learning that you will likely develop a disease carries an immediate psychological cost, but the medical intervention might not be possible until some distant future time. Conversely, a company might not rigorously investigate a possible safety issue to protect short-term share prices, leading to a more distant risk of liability. When thinking about timing of relevance, it is also important to consider the life stage of the person involved (e.g., childhood, reproductive years, retirement age) because harms and benefits can shift over time.

A fourth variable, actionability, is particularly relevant when potential benefits are assessed. It is a key factor when making moral judgments about deliberate ignorance. We will usually only condemn someone for choosing to remain ignorant when they have foregone the opportunity to take an important action that would have been prompted by knowledge of the information. Actionability can be classified in four different ways:

1. Specific actionability, where there is an intervention one can take that is directly related to the piece of information. An example of this would be a medical intervention taken as a result of the news that one has a particular diagnosis.
2. Indirect actionability, where one does something in response to information, but not directly related to the possibility of altering the condition revealed by that information. This would include a decision to spend a significant portion of your retirement fund to travel the world after learning that you will soon become debilitated by an illness.
3. Actionability related to socially relevant information (where an individual's action can only have a minuscule impact). A clear example of this is climate change; one can take a direct action in response to information about climate change, but that individual action has little meaning on its own. In cases like this, strong norms may arise to stigmatize such ignorance. When dilemmas of cooperation exist, it may be necessary for a large majority to act in the common good even if each individual's impact is negligible.
4. Actionability of personal interest (where the information is only relevant to satisfy a curiosity). There are potential things one can do with this information (e.g., tell a friend or relative about one's ancestry test results) but these actions have only limited consequences.

A fth variable relates to the accessibility of the information. If information is not readily accessible, people may feel like it is not worth having. For instance, people might not read the important nancial disclosure information that their mutual fund sends out each year because of a perception that they will not be able to understand it. Relatedly, if the information is being held by an expert (e.g., doctor, nancial planner), there might be a tendency to think that it is not one’s responsibility to acquire it. This again raises the distinction suggested above between trusting an expert and relying on an expert. Information Environment As our brief discussion of cognitive mechanisms involved in deliberate ignorance showed, both acquiring and ignoring information can be cognitively effortful and costly. Externalizing the information, with permanent storage and easy access, can lower some of these costs. Thus, the information medium (e.g., written, oral, digital), as well as the costs (metaphorical and nancial) of storage and retrieval, may affect the likelihood of deliberate ignorance. High permanence, easy access, and low cost may convey that a deliberate ignorance decision is easily reversed and thus increase the likelihood of deliberate ignorance. This implies, of course, that instances of deliberate ignorance will only increase as the digital sourcing and storage of information increases, suggesting the apparent paradox that the more information is made available to people for easy access, the less inclined they will be to avail themselves of it. Level of Relevant Parties The range of actors or parties who can decide to remain ignorant or who are impacted by a decision to remain ignorant is great (see Figure 5.1). Individuals can decide to remain ignorant in a way that only (or predominantly) impacts themselves. For example, an individual can choose not to have a suspicious mole looked at by a doctor, or choose not to examine the performance of his retirement account. Individuals can also join with close third parties (e.g., relatives) to remain ignorant in a way that directly impacts themselves as well as the third party. A couple, for instance, might decide not to investigate the cause of their infertility problems, or putative siblings could decide not to seek genetic testing to see if they are actually related. An organization can decide


Figure 5.1 Levels of action of deliberate ignorance. [Figure: a 4 x 4 matrix crossing the actor who decides to be ignorant (only individual; individual plus 3rd party, e.g., a family; organization; society) with the person directly kept ignorant (the same four levels). The diagonal cells list the examples from the text: individual (1. skin mole, 2. investment); individual plus 3rd party (1. cause of infertility, 2. genetic relatedness); organization (1. Facebook and Russian trolls, 2. "blind" auditions); society (1. Stasi files, 2. gun injuries).] In this matrix, only the cells on the major diagonal are instances of deliberate ignorance, strictly defined. However, deliberate ignorance will often have effects on others (externalities) so that the other matrix cells are relevant, especially with regard to normative considerations. Items in the cells along the diagonal refer to examples in the text.

An organization can decide to remain ignorant in a way that impacts itself as well as individuals in that organization or unrelated third parties. For example, Facebook could opt not to rigorously investigate Russian election interference on its platform, or a symphony orchestra could adopt a policy to have blind auditions. Similarly, a society (i.e., its political representatives and dominant social groups) might decide to remain ignorant in a way that impacts itself as well as the individuals and organizations in that society as well as unrelated third parties. For example, the United States currently prohibits funding of public health research related to gun injuries, or some transitional societies choose not to render accessible existing intelligence on citizens collected by past regimes (see Ellerbrock and Hertwig, this volume). It is important to note that our conception of deliberate ignorance means that the choice not to know something can only be made by the entity that will remain ignorant, as depicted in Figure 5.1. A choice to remain ignorant can certainly have a profound impact on other parties, but those other parties will not have engaged in deliberate ignorance.

Characteristics of the Actor

In some contexts, the characteristics of the actor (e.g., tolerance for ambiguity, need for closure, openness, neuroticism, and other personality variables) may also play a causal role in deliberate ignorance. So, also, do people's beliefs about the domain in question as well as people's hedonic needs and goals. Also relevant is whether an actor's decision to remain ignorant is an isolated instance or a repeated action. A single decision might simply be evidence of a temporary choice to remain ignorant, but serial decisions may provide evidence of stronger feelings. This variable could have interesting moral relevance, as we might more strongly condemn someone who repeatedly refuses to acquire important information (or we might more strongly praise someone who repeatedly refuses to acquire information with a net harm). In addition, repeated instances of deliberate ignorance may dramatically increase the magnitude of the consequences of that ignorance.

Legal, Ethical, and Cultural Considerations

As a final category, it is important to consider the legal, ethical, and cultural contexts of information. Certain kinds of information are socially valued or might have morally relevant social benefits such that one might be more expected to acquire the information. Social and cultural valuation of information and knowledge is, however, not always "rational" or "functional." It is not necessarily the most important and useful knowledge that is most appreciated. Rather, certain types of knowledge (e.g., of esoteric arts or haute cuisine) are often culturally valued because they express a position of social superiority or exclusivity. Similarly, certain types of information are socially disvalued or have morally relevant social costs such that one might be discouraged to seek or accept it. These factors can impact the cost and benefit both of acquiring and of having information (for discussion on divergent social and cultural evaluations of knowledge and ignorance, see Gross and McGoey 2015; High et al. 2012). Legal and ethical considerations can be a lever if we wish, as a matter of policy, to discourage (or, in rarer cases, encourage) deliberate ignorance. Relatedly, moral culpability for the consequences of actions flowing from having a piece of information can influence one's decision to know or not to know.

Consequences of Deliberate Ignorance

There is a wide variety of possible consequences that are worthy of study, on both scientific and policy/welfare grounds. We enumerate some potential consequences here. In some cases, the instrumental consequences of ignorance will be positive overall, as in the examples of trusting or of blinding in peer reviews or job applications. In others, the instrumental consequences will be mostly negative, as in failing to take or get the results of medical tests that might reveal conditions that are treatable. In these types of cases, the potentially positive hedonic consequences of ignorance might be more important to the actor than the negative instrumental consequences. This suggests that there is utility, or benefit, in holding some beliefs that new information might threaten (e.g., "I might not have cancer" or "I might be able to retire in material comfort") quite apart from what one does as a result of those beliefs. Golman et al. (2017, 2019) have modeled such belief utilities. There may also be utility in self-consistency, leading to deliberate ignorance of information posing a threat to the image that one is consistent.5 Under a different facet of consistency, ignorance may help one maintain consistency with the values and attributes of one's desired community. Community solidarity is often important to people, and discovering that you are less attuned to your group, or that your group is less attuned to you, can have a utility impact. Indeed, such information can damage social relationships.

5 Note that this presumed process of belief utility maintenance does not include the well-studied phenomenon of biased updating of beliefs since in that case, the information is not ignored; instead, only some of it is actually assimilated into the person's self-view.

We make this distinction between instrumental and hedonic consequences as if they are easily distinguished, but sometimes they are not. In many, if not most, cases, the consequences of deliberate ignorance may be both instrumental and hedonic. Consider the example of testing for Huntington disease: Not wanting to get tested does not simply have a "hedonic" value for people at risk ("I might not have the Huntington gene"). It is also "instrumental" in that it enables them to live a life with an (almost) open future. Likewise, knowing that one carries the Huntington gene has limited clinical instrumental value since, to date, there is no prevention, treatment, or cure for the condition.

In addition, we must pay attention to the possibility of externalities (i.e., of effects on third parties) and their normative implications. They may also have behavioral implications, modifying the likelihood that one will actually display deliberate ignorance.

Finally, it is possible that the same act of deliberate ignorance can have different consequences (even differently valenced consequences) at different points in time. We provide two illustrations. First, if one deliberately ignores information about possible options in a consumer decision, one could miss out on an option that is better than the one actually chosen (negative). On the other hand, one might enjoy and value the chosen option more, owing to reduced regret and/or lowered expectations (positive) (see Schwartz 2004). Thus, one makes a worse choice but feels better about it. Schwartz (2004) distinguishes between two different sorts of goals that may inform decision making, by inducing two different information search-stopping rules. When maximizing, one seeks the best, which requires exhaustive search of the options. When satisficing, one seeks good enough, which usually does not require exhaustive search of the options. In deciding to satisfice, a person is being deliberately ignorant about information regarding options not considered. Schwartz (2004) suggests that maximizing leads to better objective decisions but worse subjective ones, a result confirmed in a study of hundreds of college seniors looking for jobs. Maximizers got better jobs, but felt worse about the jobs they got. Thus, deliberate ignorance of options is a cost at the time of choice, but a benefit when experiencing the results of the choice (Iyengar et al. 2006).

Second, if a person does not check his/her portfolio frequently, opportunities will be missed to improve it. Frequently checking the portfolio, however, leads to more inspections in which the portfolio's value may decrease, which owing to loss aversion (e.g., Kahneman and Tversky 1979, 1984) will make a person feel worse about the portfolio. Looking at the portfolio infrequently enables day-to-day market fluctuations to smooth out, so that the historical tendency of equities to rise in value dominates what a person sees on inspection. Thus, by ignoring portfolio performance, a person might earn less money but be happier about it (Benartzi and Thaler 1995), though under some circumstances, ignoring the portfolio can equate to more money being earned (Sicherman et al. 2016). It is possible that there are actually many instances of this kind in which ignorance produces multiple heterogeneous effects.
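The portfolio example rests on a simple statistical fact that a short simulation can illustrate. The following Python sketch is purely illustrative: the drift and volatility figures are assumptions chosen for readability, not estimates from the cited studies. With a small positive daily drift, a daily checker observes a loss on almost half of all inspections, while a yearly checker observes one far less often.

```python
import random

random.seed(1)
DRIFT, VOL, DAYS, YEARS = 0.0003, 0.01, 250, 1000  # illustrative values

def year_of_returns():
    # One trading year of daily returns with a small positive drift.
    return [random.gauss(DRIFT, VOL) for _ in range(DAYS)]

years = [year_of_returns() for _ in range(YEARS)]
daily = sum(r < 0 for y in years for r in y) / (YEARS * DAYS)
yearly = sum(sum(y) < 0 for y in years) / YEARS

print(f"loss seen when checking daily:  {daily:.0%}")   # roughly 49%
print(f"loss seen when checking yearly: {yearly:.0%}")  # roughly 32%
```

Aggregating returns over a year lets the positive drift dominate the day-to-day noise, which is the mechanism behind the claim that infrequent inspection smooths out fluctuations.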

Deliberate Ignorance and Temporal Scale

There are at least three types of temporal scales that can be considered when evaluating the deep structure of deliberate ignorance. The first is the temporal space of the decision to remain deliberately ignorant itself. Within that space, at least three phases can be distinguished, borrowing from the Rubicon model (for a summary, see Heckhausen 2007):

• the deliberation phase, in which the actor considers whether or not to remain deliberately ignorant
• the implementation phase, in which that decision is enacted
• the outcome phase, in which the consequences of the decision unfold

Distinguishing among these three phases is important not only for differentiating among causes, symptoms, and consequences of deliberate ignorance; it also carries implications for the presumed functions of deliberate ignorance itself. Many of the functions involve affect regulation, such as trying to avoid being upset, worried, or regretful. As such, the decision to remain deliberately ignorant for affect regulation purposes involves a prediction—an affective forecast—about how the actor will feel having or not having some information. Ample research suggests that individuals may not be accurate in their predictions, or affective forecasting, so predictions about how particular states of knowledge may influence affective states may also not be terribly accurate (for reviews, see Schwartz and Sommers 2013; Wilson and Gilbert 2003). Affective forecasting errors include failure to anticipate hedonic adaptation. In the case of information, this would be a failure to appreciate that the hedonic impact of bad news will be reduced over time. Errors also include what is called "focalism" (Wilson and Gilbert 2003), the tendency to inflate the importance of bad news by focusing on the aspects of life that will be affected by it and ignoring the aspects of life (possibly many) that will be unaffected. Affective forecasting errors are especially relevant to the case of deliberate ignorance because deciding not to acquire some information out of concern that it will lead to negative affect may itself cause negative affect, even in the absence of the information itself (in the case of regret, see Schwartz 2004). The important implication of the possibility of misprediction is that the affective causes of a decision to be deliberately ignorant may not map perfectly onto the affective outcomes of being deliberately ignorant. This may be especially relevant in cases where the decision to remain deliberately ignorant must be repeated over time. In the example of maximizing versus satisficing discussed above (Schwartz 2004), a person might quite reasonably expect that a better objective decision will lead to a better subjective state and thus insist on very high standards for the decision. Such a person, adopting such a strategy, will often be disappointed—not once, but repeatedly.

Distinguishing among deliberation, implementation, and consequences as well as an acknowledgment that people often mispredict affective consequences raises what might be called the phenomenology of deliberate ignorance: What does it feel like to find out that relevant information is available? What does it feel like to choose not to examine that information? Having made that decision, what does it feel like to go forward without the information? Is a decision to ignore, once made, then forgotten? Or is it revisited again and again? For example, having decided not to review the quarterly statement of your retirement portfolio, do you cast the matter aside and forget about it, or do you continue to pay attention to the refusal to know? Do you impose recriminations on yourself later, when you discover that your portfolio has suffered a downturn? The phenomenology of deliberate ignorance is of interest in its own right, but it may also be of interest in determining whether ignorance actually achieves the affect regulation it is meant to provide.

A second, broader temporal scale involves the lifespan of the actor. There may be interesting developmental shifts involving deliberate ignorance. For example, children may be especially curious; evidence from Gigerenzer and Garcia-Retamero (2017) suggests older adults (in this case, age 51 yr and over) were more likely than younger adults to say they did not want to get information, such as when they would die. In addition, Hertwig, Woike, and Schupp (in preparation) observed that the strongest predictor of deliberate ignorance was chronological age (age 14 to > 80 yr): The older a person was, the more likely s/he exercised deliberate ignorance. Evidence of age differences in the components of deliberate ignorance, such as in affect regulation goals, might therefore lead to interesting age differences in the decision itself. To the extent that deliberate ignorance does vary by age, it raises interesting questions about what mechanisms (e.g., learning, cognitive changes, motivational shifts) may contribute to the change.

However, age differences in deliberate ignorance should not be presumed. There may be substantial age similarity in affect regulation processes (Livingstone and Isaacowitz 2019). That older individuals may not want to know when they will die may be taken as evidence that older age increases deliberate ignorance; however, a longer time window might reveal that the same individuals would have made the same decision on other health topics earlier in life as well.

A final, broader temporal scale involves evolution. Canonical examples of deliberate ignorance focus on individual decisions, but evolutionary considerations may be relevant for two reasons. First, biological and cultural evolution equip us with skills and dispositions that are relevant to individual examples of deliberate ignorance. For example, people often ignore otherwise costless information because knowing itself is aversive. There may be biological reasons for this. A simple, true item of information, if acquired, may set in motion a complex and effortful train of cognitive activity that is aversive. For example, learning that a spouse has been unfaithful may lead to a consideration of alternative actions in confronting (or not) the errant spouse, the spouse's reactions to the confrontation, and the reactions of friends, relatives, children, and legal authorities to the couple's joint actions. Some of these considerations will involve cultural norms. If a spousal transgression occurs in a culture-of-honor society, and if the spouse knows, the knower is under obligation to beat or kill the spouse or the person with whom they have been unfaithful. On the other hand, among the Tupi speakers of South America, for example, women are quite free to have affairs with men who are not their social husbands, and any expression of jealousy will expose the husband to censure (Walker et al. 2012). In the former case, knowledge of infidelity will set into motion complex calculations; in the latter, perhaps, not so much.

Second, evolution may lead to adaptations or maladaptations that are analogous to deliberate ignorance, in the same way that Darwin coined the term "natural selection" by analogy with artificial selection. Animal breeders favored genetic variants that made livestock tame and unwary of humans. In effect, livestock were bred to ignore the exploitative intentions of their owners. Perhaps, Darwin thought, "natural selection" worked in a similar fashion. What may be most interesting here is to look for cases in which natural selection, or analogous processes in cultural evolution, favored ignoring seemingly useful information. For example, Europeans react swiftly and decisively to the buzzing sound of a rattlesnake's rattle even though Europeans have only a very shallow evolutionary history of interaction with New World rattlesnakes. How could natural selection have favored Europeans reacting adaptively to rattlesnakes' rattling? They are not "deliberately" ignorant of the sound, though it seems they ought to be. As far as we are aware, this is an unsolved puzzle.

Cultural evolution generates cases that are reminiscent of deliberate ignorance. Lewandowsky (this volume) offers the example of a constructivist populism in which a group of people seem to deny plain material facts as if the world is actually built according to how they would like to have it. Anthropogenic climate change denial is one such example. Cultural evolutionists have considered two models that might be helpful in understanding these types of cases. Boyd and Richerson (1985) considered a model of prestige-based bias. People often use apparent success as a cue to determine whom they should imitate. Boyd and Richerson imagined a display trait, such as the size of yams that farmers bring to public celebrations on the island of Pohnpei in the Pacific. The farmer with the biggest yams might originally, and reasonably, have been judged to be the best farmer, and young farmers might have attempted to imitate his practices. This system, however, can run away maladaptively, as ever bigger yams become preferred. In fact, on Pohnpei, ceremonial yams evolved to weigh a hundred kilos and their growers were accorded much respect. The skills for growing them, however, became uncorrelated with ordinary farming success. Many exaggerated cultural practices evolve, such as elaborate, costly, nonfunctional rituals. The affected individuals seemingly remain ignorant of the costs of such practices. On the other hand, students of ritual often suggest that costly rituals actually do have functions in proportion to their costs (Rappaport 1979), much as apparent cases of deliberate ignorance at the individual level have, upon close examination, sensible functions.

Consider another example: symbolic markers of groups. Young children prefer to imitate adults who speak the same dialect as they do (Kinzler et al. 2009). McElreath et al. (2003) showed that neutral symbolic group markers, combined with a bias to imitate those like you on the symbolic marker, can evolve when groups differ in their norms for solving games of coordination. Other groups may be sources of useful information as well, but if imitating them is sufficiently likely to mis-coordinate you with your neighbors, it will be adaptive to remain ignorant of them. An interesting case is the one in which individuals choose to live in socially constructed realities that insulate them from learning adaptive facts, as in the current climate change denial situation. This could arise when a symbolic marker is not neutral. Climate change deniers refuse to accept actionable evidence about rising sea levels even when they own a valuable property on the seashore. Perhaps inadvertently, or due to malicious propaganda, denial has become a marker of belonging to an emotionally salient group, such as a resident of a noncoastal state in the contemporary United States. Whether by mistake or by imposition, this case is analogous to error, which is outside the narrowly defined cases of individual deliberate ignorance.

It should be emphasized that each of these evolutionary or cultural examples is only analogically related to deliberate ignorance as precisely defined at the beginning of this chapter. The examples either unambiguously lack deliberateness, or their deliberateness is somewhat speculative. Thus, our discussion in this section is meant to be evocative, not decisive.


Conclusion

Throughout our discussions, we aimed to delineate and clarify the phenomenon of deliberate ignorance. We have been restrictive in our definition of deliberate ignorance, to make clear cases salient and push others that merely resemble these cases to the background. In doing so, we have tried to focus attention on the class of phenomena most in need of additional empirical and theoretical investigation. In relation to other aspects of deliberate ignorance being investigated, we hope this clarifies what phenomena modelers need to model, which normative implications need to be evaluated, and which institutional policy concerns need to be addressed. We say this in full recognition that there may be a great deal that deliberate ignorance, strictly defined, has in common with the other members of its "Wittgensteinian" family. Going forward, as researchers continue to investigate deliberate ignorance and its effects, care must be given to distinguish the intended and expected consequences of deliberate ignorance from its actual consequences. If someone is deliberately ignorant for good reason but the effect backfires, we might treat this as an error in forecasting that needs correction, rather than a rebuke of deliberate ignorance per se. Institutionally, Enlightenment ideology notwithstanding, we think it unwise to design programs to reduce deliberate ignorance indiscriminately. Instead, we urge the development of guidelines that help individuals and institutions judge accurately when deliberate ignorance will enhance welfare (see Bierbrauer, this volume) and when it will reduce it.

6 How Forgetting Aids Homo Ignorans
Lael J. Schooler

Abstract

Can some functions of deliberate ignorance be achieved through processes that govern forgetting? This chapter expands on this question and considers how processes critical to encoding, retrieving, and forgetting information in memory might help to achieve some of the functions attributed to deliberate ignorance. Consideration is given to whether both deliberate ignorance and forgetting are devices that can help with "information management" (e.g., by helping with information overload). The ACT-R model of memory, which holds that human memory can be understood as an information-management system, is used to illustrate how forgetting can function as a "performance-enhancing device" (e.g., by showing how the recognition heuristic, a simple inference strategy, depends on forgetting to perform well). Constructive processes of memory, which include forgetting, are explored for their ability to regulate emotions (by putting aside or reshaping memories of past experiences) and to serve as "strategic devices" (to avoid responsibility and improve an individual's ability to deceive more generally). Although memory shares functions with deliberate ignorance, this chapter finds that the best strategy to stay ignorant of a piece of information is to never encode it in the first place.

Introduction

Dear Dr. McGaugh,
As I sit here trying to figure out where to begin explaining why I am writing you and your colleague (LC) I just hope somehow you can help me. I am thirty-four years old and since I was eleven I have had this unbelievable ability to recall my past…

This is how AJ starts her letter to James McGaugh (Parker et al. 2006:35), as she asks for help to manage her extraordinary autobiographical memory. But why has she sought help? In her words (Parker et al. 2006:35):

I think about the past all the time….It's like a running movie that never stops. It's like a split screen. I'll be talking to someone and seeing something else…Like we're sitting here talking and I'm talking to you and in my head I'm thinking about something that happened to me in December 1982, December 17, 1982, it was a Friday, I started to work at G's (a store)….

AJ spends much of her time lost in the past, unable to resist retrieving the next memory that has been automatically cued by the last one. Her extraordinary autobiographical memory, however, lets her down in other aspects of her life. She depends on post-it notes to organize her day and forgets which of her five keys fits which lock. Nor did she do exceptionally well at school. For example, she even had trouble memorizing important dates in her history classes, although she can accurately retrieve the dates of newsworthy events from her life. She writes that "most have called it [her memory] a gift but I call it a burden" (Parker et al. 2006:35).

Technological advances mean that soon we may all have to choose whether we want extraordinary memories akin to those of AJ. This choice is central to an episode of the Black Mirror series (a British science fiction anthology series for television), entitled "The Entire History of You." Set in a future where an implant behind the ear lets people replay video from their past on the back of their eyelids or on a screen for others to see, the show focuses on the problems that result from the protagonist having access to a perfect record of his life. He dwells on a parting comment made by a job interviewer and obsessively reviews scenes from a party, looking for evidence of his wife's infidelity. In the end, he goes through great pains to remove the implant, and in so doing he deliberately chooses a degree of ignorance of his past.

Hertwig and Engel (p. 3, this volume) define deliberate ignorance as

the conscious individual or collective choice not to seek or use information (or knowledge; we use the terms interchangeably). We are particularly interested in situations where the marginal acquisition costs are negligible and the potential benefits potentially large, such that—from the perspective of the economics of information (Stigler 1961)—acquiring information would seem to be rational (Martinelli 2006).

Hertwig and Engel ask whether some functions of deliberate ignorance can be achieved through processes that govern forgetting. Here, I expand on this question to consider how the processes critical to encoding, retrieving, and forgetting information in memory help to achieve some of the functions Hertwig and Engel attribute to deliberate ignorance. First, consideration is given to Hertwig and Engel's suggestion that both deliberate ignorance and forgetting are devices that can help with "information management" by helping with information overload. Thereafter, an outline is provided of the ACT-R model of memory, whose key tenet is that human memory can be understood as an information-management system (Anderson et al. 1997). Second, using the ACT-R model of memory, I illustrate how forgetting can function as a "performance-enhancing device" by showing how the recognition heuristic, a simple inference strategy, depends on forgetting to perform well. Third, I explore how the constructive processes of memory, which include forgetting, can help with "emotion regulation" by putting aside or reshaping memories of past experiences. Fourth, these same constructive processes are then shown to work as a "strategic device" to avoid responsibility and improve an individual's ability to deceive more generally. In conclusion, although memory shares functions with deliberate ignorance, our best strategy for staying ignorant of a piece of information is never to encode it in the first place.

Memory Functions as an Information-Management Device

Hertwig and Engel (this volume, 2016) suggest that both deliberate ignorance and forgetting help the cognitive system with "information management." How and why forgetting might be functional has been central to a proposal for understanding the adaptiveness of the memory system, called the rational analysis of memory (Anderson and Milson 1989; Schooler and Anderson 2017). This framework assumes that there is some cost, C, associated with retrieving a memory. This cost may reflect metabolic expenditure in maintaining and retrieving a memory as well as the time to search and to consider the memory. If the memory proves to be useful to the current purposes, there is some gain, G, in accessing the memory. The problem facing the memory system is to come up with a scheme that minimizes the retrieval costs while maximizing the gains. The rational analysis also proposes that the memory system can, in effect, assign some probability, P, to a memory being relevant in advance of retrieving it. Given these three quantities, an adaptive memory system would search memories in order of their expected utilities, PG – C, and stop considering memories when a memory with a probability P is about to be retrieved such that

PG < C.  (6.1)

This predicts that people will be able to retrieve most rapidly memories that are most likely to be relevant to their current needs and avoid recalling memories that are unlikely to be relevant. It may help to communicate the idea behind the rational analysis by noting that human memory solves a problem much like the one the Internet search engine Google is designed to accomplish. The essential analogy is that Google searches for websites germane to the search query and the memory system searches for memories relevant in the current context. Need probability is the probability that the information contained in a particular memory (or website) is needed in a particular context. In the case of Google, context would be the collection of search terms used to initiate search, perhaps along with the documents, emails, and other materials Google uses to tailor search results. In the case of the cognitive system, context would be salient elements of the environment or memories active in working memory. The memory system prioritizes the most relevant memories by making them available in working memory, by making others more or less available in long-term memory, and perhaps by forgetting others completely.

Guided by the constraints of the rational analysis, Anderson developed a memory model for the ACT-R theory of cognition, a unified framework for simulating human cognition and behavior (for an overview, see Anderson et al. 1997). Here I present a stripped-down version of this memory model, but one that retains enough of the spirit of the original to investigate the parallels between forgetting and deliberate ignorance. If i refers to a memory, then its activation, Ai, reflects the probability of the memory being needed in a given context. The key equation for calculating Ai, the activation of memory i, is

Ai = Bi + Σ_{q∈C} Wq Sqi,  (6.2)

where Bi is its base level activation, reflecting the past frequency and recency of needing the memory; Sqi is the strength of association between q (an element of the current context, C) and memory i; and Wq reflects the attention paid to element q in the context. Bi can be thought of as memory i's resting activation level:

Bi = L ln Fi − D ln Ri.  (6.3)

Essentially, a memory's resting activation is an increasing function of Fi, how frequently the memory has been resident in working memory, either through being encoded from the environment or retrieved from long-term memory, and a decreasing function of Ri, how long it has been since the memory was last resident in working memory. The parameters L and D can be thought of as learning and decay rates, respectively. Equation 6.3 shows that, all things being equal, a memory will be increasingly accessible the more time it has spent in working memory, and subsequently less accessible the longer it has been since it was last in working memory. The second part of Equation 6.2 corresponds to the association of various elements of the context to the memory:

Sqi = p(i|q) / p(i).  (6.4)

Association is captured by associative ratios, p(i|q)/p(i). The denominator of this ratio, p(i), is the base rate probability of needing memory i across all past contexts. The numerator is the conditional probability, p(i|q), of needing memory i in the presence of some cue, q. The larger the ratio, the better the indicator q is that memory i will be needed. The overall associative strength of the context to the memory is the sum of all the associative ratios of each of the individual cues in the context. The probability of retrieving a memory increases with increasing activation, whereas the time it takes to retrieve a memory decreases with increasing activation; that is, higher activation values correspond to faster retrieval times. In short, understanding forgetting as a form of "information management" was central to the development of the ACT-R model of memory. ACT-R lends quantitative precision to how this information management works for human memory.
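To make the moving parts concrete, the following minimal Python sketch implements the stripped-down model of Equations 6.1–6.3. It is an illustrative translation, not ACT-R's reference implementation; the parameter values, example numbers, and function names are assumptions chosen for readability.

```python
import math

# Minimal sketch of the stripped-down memory model of Equations 6.1-6.3.
# All parameter values here are illustrative assumptions.
L, D = 1.0, 0.2  # learning and decay rates

def base_level(frequency, recency):
    # Equation 6.3: Bi = L ln Fi - D ln Ri. Resting activation rises with
    # how often a memory has been needed and decays with time since last use.
    return L * math.log(frequency) - D * math.log(recency)

def activation(frequency, recency, context, associations, attention):
    # Equation 6.2: Ai = Bi + sum over cues q in C of Wq * Sqi, where the
    # associative strength Sqi is the ratio p(i|q)/p(i) for each cue q.
    spread = sum(attention[q] * associations.get(q, 0.0) for q in context)
    return base_level(frequency, recency) + spread

def worth_retrieving(p_needed, gain=1.0, cost=0.1):
    # Equation 6.1: consider memories in order of expected utility PG - C
    # and stop once PG < C for the next candidate memory.
    return p_needed * gain >= cost

# A frequently and recently needed memory in a supportive context:
a = activation(frequency=50, recency=2, context={"office"},
               associations={"office": 1.5}, attention={"office": 1.0})
print(a)  # base level ln 50 - .2 ln 2 = 3.77, plus 1.5 of spread = 5.27
```

The mapping from activation Ai to the need probability P is left abstract in this sketch; the chapter notes only that the two stand in a one-to-one, monotonic relation.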

Forgetting Functions as a Performance-Enhancing Device

A properly functioning memory achieves many of the functions that Hertwig and Engel attribute to deliberate ignorance (this volume, 2016). For example, Schooler and Hertwig (2005) have shown how ACT-R's model of memory can be a "performance-enhancing device." Schooler and Hertwig (2005) explored whether forgetting could aid a simple decision strategy known as the recognition heuristic (Goldstein and Gigerenzer 2002). In its simplest form, the recognition heuristic is a strategy to rank two alternatives according to some criterion of interest. The classic example is ranking two cities according to their population. Suppose you are asked to choose which city, Durban or Johannesburg, has more inhabitants. Personally, I could only guess. In contrast, if I were asked the same question for Cape Town or Tembisa, I would wager Cape Town has the greater population based on the observation that I recognize Cape Town but not Tembisa. This strategy of relying on recognition works to the extent that recognition is correlated with the criterion of interest, which in this case is population. Whether there is a strong correlation between the criterion and recognition depends on a chain of correlations: larger cities tend to be mentioned in the media and conversations more often than smaller cities, and how often a city is mentioned correlates with how likely it is to be recognized. To illustrate how this chain of correlations works, we can take the number of results that Google retrieves when queried with the name of a city as a proxy for how often it is mentioned. For these South African cities, there is a perfect correlation between population and the number of results retrieved: Cape Town (127 million results, 3.4 million inhabitants), Durban (90 million results, 3.1 million inhabitants), and Tembisa (4.2 million results, 500 thousand inhabitants).1

With the correlation of environmental frequency and population in place, let us turn to recognition. Equation 6.3 implies a positive correlation between activation and environmental frequency, but a negative correlation between activation and recency. We can model whether or not a city is recognized by whether or not the activation of a memory associated with the name of that city exceeds the retrieval threshold described in Equation 6.1. Suppose that our actual exposure to these city names in a given year is one millionth of the results that Google returns; that is, 127 mentions for Cape Town, 90 mentions for Durban, and 4 mentions for Tembisa. According to Equation 6.3, if we ignore forgetting (i.e., setting D to 0) and set the learning rate, L, to the convenient value of 1, the respective activations for Cape Town, Durban, and Tembisa would be 4.8 (= ln 127), 4.5 (= ln 90) and 1.4 (= ln 4). For convenience, assume a recognition threshold of 1.5, so that Cape Town and Durban would be recognized, because their activations are above 1.5, but Tembisa would not be, because its activation falls below 1.5. In this case, the recognition heuristic would correctly decide that Cape Town and Durban are larger than Tembisa, but would have to guess when deciding between Cape Town and Durban. Next, assume another year goes by with the same environmental frequencies, so that the total number of times the three cities are mentioned in the course of two years is 254, 180, and 8, raising their activations to 5.5, 5.2, and 2.1. Focusing on frequency alone, the recognition heuristic would have to guess when deciding among all the cities, because now Tembisa's activation of 2.1 would exceed the recognition threshold of 1.5. However, when forgetting is taken into account, as captured by the second part of Equation 6.3, the base level activation decays over time. Assuming a decay rate of .2 and a recency of 90 days since the last mention of Tembisa (i.e., 90 ≈ 365/4), the activation of Tembisa on the first day of the third year, 1.2 (= ln 8 − .2 × ln 90), would fall below the recognition threshold, enabling the recognition heuristic to be applied. The heuristic would now correctly choose which city has the most inhabitants when Tembisa is one of the options. In sum, forgetting helps maintain the correlations required for the recognition heuristic to be an effective decision strategy by reducing the chances that some memories (i.e., those associated with smaller cities) will be retrieved. As we will see next, reducing the chances of retrieving memories can help regulate emotions.

1 See https://www.geonames.org/ZA/largest-cities-in-south-africa.html (accessed January 27, 2019).
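The arithmetic of this example is compact enough to check in a few lines of Python. The sketch below simply replays the numbers above (yearly mention counts, L = 1, decay rate .2, recognition threshold 1.5); it is not a general model of recognition, and the function names are illustrative.

```python
import math

L, THRESHOLD = 1.0, 1.5
mentions_per_year = {"Cape Town": 127, "Durban": 90, "Tembisa": 4}

def recognized(city, years=1, decay=0.0, recency_days=1):
    # Equation 6.3 applied to recognition: a city name counts as
    # "recognized" when its activation exceeds the retrieval threshold.
    frequency = mentions_per_year[city] * years
    a = L * math.log(frequency) - decay * math.log(recency_days)
    return a > THRESHOLD

# Year 1, no forgetting: Cape Town (4.8) and Durban (4.5) are recognized,
# Tembisa (1.4) is not, so the heuristic ranks Tembisa below the others.
print([c for c in mentions_per_year if recognized(c)])

# Year 2, still no forgetting: Tembisa (ln 8 = 2.1) now clears the
# threshold too, and the heuristic must guess on every pair.
print(recognized("Tembisa", years=2))

# Year 2 with decay .2 and ~90 days since Tembisa was last mentioned:
# its activation falls to ln 8 - .2 ln 90 = 1.2, below threshold again.
print(recognized("Tembisa", years=2, decay=0.2, recency_days=90))
```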

Memory Functions as an Emotion-Regulation Device

With respect to affect regulation, Hertwig and Engel (this volume, 2016) point to examples of people deliberately wanting to be ignorant of medical test results, because knowing the results could force them to face emotionally taxing decisions. In another example, they suggest that people may not want to know the pay of their colleagues so that they can avoid envy. These external sources of emotionally disturbing information would at least require making an appointment to take a test or actively searching for a colleague's pay. In contrast, the "marginal acquisition costs" of information stored in memory are close to 0. How then can a person protect themselves from emotionally disturbing information lurking in their memory?

Within the ACT-R framework, an interaction between the strategic behaviors under conscious control (at least to some extent) and the mechanisms inherent to the architecture could help control the retrieval of distressing memories. Again, it helps to think about the process of memory retrieval as a Google search. The conscious mechanisms would be akin to making sure that searches do not include terms that are likely to return upsetting websites. For example, I might avoid searching with terms such as "Fox News" or "ISIS." Google may also have mechanisms, of which I am unaware, designed to protect me from upsetting information by forgetting the most disturbing information that the Internet has to offer.

To illustrate how ACT-R's memory processes could lead to forgetting that could help guard against the retrieval of emotionally disturbing memories, consider an anecdote from my past, including some specific details that we will return to later in the chapter. It is an event I would prefer to forget; it has some emotional content, but it is not all that embarrassing, at least given the passage of time. No doubt you can dredge up an embarrassing memory of your own. The situation involved my twelfth grade English class and its recitation of part of Chaucer's Canterbury Tales for a school assembly. Mr. McCown, our teacher, wanted to have musical accompaniment. Having for many years played the flute, but never in public, I volunteered. At the rehearsal, the first notes out of my flute sounded much like a child's first attempts to make a sound by blowing across the top of a Coke bottle. To the delight of my friends sitting high in the gymnasium bleachers, my performance went downhill from there, ending with Mr. McCown saying, "I'm not here to give music lessons." I took the sheet music with me, practiced it under the direction of my flute teacher, and was redeemed the day of the school assembly with a passable performance. Although now, knowing what I do about how memory works, I have my doubts that the assembly turned out as well as I remember that it did. I will refer to this as the flute memory, although within ACT-R it would be modeled by a complex of associated memories.

Let's first consider the implications of Equation 6.1, the stopping rule PG < C, where C is the cost associated with retrieving a memory, P is the probability of needing a memory, and G is the gain associated with retrieving the memory. Note that in the ACT-R framework there is a one-to-one mapping between a memory's activation and P. For most ACT-R analyses, G has been assumed to be fixed and the same for all memories. However, consider what happens when this assumption is relaxed. My flute memory is definitely tinged with the embarrassment of that day. These negative emotions could be seen as reducing the gains, G, associated with retrieving the memory. The reduction in G means that increased levels of activation would be needed to retrieve my flute memory in comparison to a more positive memory. In Norby's (2015) review of functional forgetting, he mentions that memories with negative emotions tend to be retrieved somewhat less well than those associated with positive emotions. Now, imagine a memory with strong negative emotional content, such as a memory of an assault. These negative memories could take G from being a net gain to a loss. In this case, the expected value of the memory could be negative, implying that the memory would be unlikely to be retrieved however low the retrieval cost, C, might be. Curiously, when G is negative, the higher the activation, the less likely it would be that the corresponding memory would be retrieved. The combination of a negative G and the standard ACT-R retrieval mechanisms would be one way to implement repression in the ACT-R framework.
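A few lines of Python make the point, under the hypothetical assumption that G can vary per memory: with the stopping rule of Equation 6.1, any memory whose gain is negative has expected value PG − C below zero, so it is never worth retrieving, and raising its activation (and hence P) only makes the expected value worse. The values below are purely illustrative.

```python
def retrieve(p_needed, gain, cost=0.1):
    # Stopping rule of Equation 6.1: retrieve only while PG >= C.
    return p_needed * gain >= cost

retrieve(0.3, gain=1.0)    # True: an ordinary memory is retrieved
retrieve(0.5, gain=-0.5)   # False: PG = -0.25 < C
retrieve(0.9, gain=-0.5)   # False: higher activation, even lower PG
```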

I want to emphasize that ACT-R does not make any hard predictions here. For example, the system could ascribe positive gains to a horrific memory, or the memory could be rehearsed to such an extent that even the smallest gains would put its activation above threshold. As this is, to the best of my knowledge, the first time that manipulating G has been proposed as a way to incorporate inhibitory mechanisms into ACT-R, it is an open question as to whether G should be under deliberate control.

Decidedly, deliberate strategies that could be used to avoid negative memories operate through the part of Equation 6.2 that handles the association of a memory to the current context. In ACT-R, my flute memory would be represented as a collection of memories, associated with each other and various retrieval cues, analogous to the key words used in a Google search. The cues might be Mr. McCown, a flute, gymnasium bleachers, Chaucer's Canterbury Tales, or feeling embarrassed. A deliberate strategy would be to avoid cues associated with the flute memory (e.g., by avoiding going back to my high school gymnasium). An interference-based strategy would be to reinforce associations between these cues and other memories. For example, when I think of Mr. McCown, I could try to remember all the positive memories I have of him as a teacher (he was among the best) or the burgundy shirt and jeans he routinely wore to class. In ACT-R, such an interference-based strategy would increase the associations between Mr. McCown and these other positive memories and simultaneously decrease the associations that connect Mr. McCown and the flute memory, lowering the activation of the memories that compose the flute memory and, in turn, lowering the probability that they would be retrieved. The base level strength would reinforce the effects of these deliberate strategies. Each time a memory is retrieved, its base level activation increases. Preventing these retrievals would allow the base level activation of these memories to be forgotten (i.e., decay away), perhaps even leading to completely forgetting the disturbing memory.

Another strategy to avoid disturbing memories depends on the observation that memories are as much constructed as they are retrieved. There is ample evidence that people often mistake an imagined event for a real one and, further, that these false memories may be facilitated by techniques encouraged in some forms of psychotherapy (Loftus 1997) and legal investigations (Ceci and Huffman 1997). In the flute example, I can think of two places where I suspect these constructive processes may be operating. The first is that when I initially recounted the memory for this chapter, I did not remember what Mr. McCown was wearing. Now, as I think back on the day, I clearly remember Mr. McCown in his burgundy outfit. Most likely, I filled in the missing information about what he was wearing with my general knowledge about what he tended to wear, and most likely he was dressed all in burgundy that day. As I mentioned earlier, I now have a clear memory of my performance at the assembly going okay. Does it really matter whether it did go well or that I just imagined that it did? The constructive nature of memory provides us with the opportunity to fill in forgotten information or replace remembered information with new information that is less emotionally charged. A related idea is being used to treat patients with posttraumatic stress disorder. The basic technique is to have patients remember their traumatic stay on an intensive care unit and then give them propranolol, which causes a reduction in stress hormones, so that when the memory is reconsolidated it becomes dissociated from the high levels of stress that were originally associated with it (Gardner and Griffiths 2014). A careful understanding of the constructive nature of memory could guide the development of a toolbox of techniques that may help us deliberately forget the mundane memories of a flubbed high school performance and even the traumatic memories of being in an intensive care unit. Next, we will see how some of these same constructive memory processes and techniques can help us strategically avoid responsibility by improving our ability to deceive others by deceiving ourselves.

Memory Functions as a Strategic Device to Avoid Responsibility and Enhance Deception

Hertwig and Engel (this volume, 2016) suggest that deliberate ignorance can be used as "a strategy for avoiding liability in a social or even legal sense." Forgetting can serve a similar function. For example, in his senate confirmation hearings to become the attorney general of the United States of America in 2017, Jeffrey Sessions answered "I don't recall" or something to that effect on 47 distinct topics (Lapowsky 2017). Whether he had truly forgotten the information or not, the answer reduces his accountability and risk of exposure by preventing subsequent questions on the topic. In the intelligence committee confirmation hearings, Sessions failed to recall a conversation with Sergey Kislyak, the Russian ambassador to the United States at the time. Undoubtedly, the committee would have wanted to learn more about the details of the conversation that could have shed light on connections between Donald Trump's close associates and the Russian government. We will never know for sure whether Sessions forgot his conversation with Kislyak or not. Even if Sessions one day reveals that he did remember the conversation, it could well be that he remembers a reconstructed memory of the event.

Whereas feigning forgetting may be a deliberate strategy used to avoid liability, true forgetting may serve the important evolutionary function of deception. Trivers (2000, 2011a) has argued on evolutionary grounds that deception is an important capacity that helps conspecifics acquire resources from each other. On one side is an individual trying to deceive and on the other an individual trying to detect the deception. The cycle of deception and detection leads to an evolutionary arms race. As argued by von Hippel and Trivers (2011), some of the cues used to detect deception result from the cognitively demanding compensations required to mask signs of nervousness. For example, when the deceiver attempts to control signs of nervousness in their face, the pitch of their voice may also rise. They argue that because many of the signs of deception are mitigated through self-deception, evolution would favor individuals who can deceive themselves. Self-deception reduces the cognitive load of deception, not only by reducing the cognitive demands associated with masking nervousness, but also by reducing the demands required to consciously maintain both the false narrative behind a deception and the true narrative. Although there were a mere 47 events that Sessions could not recall, there were 87 occasions on which he was asked about these 47 events. If he were pretending not to remember, he would be faced with the cognitively demanding task of keeping track of how he responded previously. One way to reduce cognitive load is to maintain a single narrative of past events, supported by memories consistent with the deception. According to von Hippel and Trivers (2011), the constructive nature of memory, as described above, is one of the primary mechanisms used to massage memories into a story that supports deception. Essentially, if the deceiver believes their story to be true, then they are less likely to produce the telltale signs associated with deception, such as those associated with nervousness and an inconsistent retelling of past events.

Conclusion

Memory processes, including forgetting, can achieve functions that have been ascribed to deliberate ignorance. Each of these functions—information management, performance enhancement, emotional regulation, strategic deception—illustrates the potential benefits of limiting the amount of information an individual considers. The chapter began with examples of people with near perfect memories, implicitly contrasting their memories with the run-of-the-mill memories of regular folks. We often complain about how much we forget, losing sight of how phenomenally good we are at retrieving the information we have stored in our memories. In point of fact, most of us are blessed and cursed with extraordinary memories. The extraordinary memories that most of us possess mean we need to carefully guard the information we allow ourselves to know about. Notwithstanding the various ways in which we can reconstruct some memories and forget others, once information is in memory there is a good chance that it will not be forgotten and that it will be retrieved effortlessly. Once we know, for instance, that we have tested positive for HIV or for the Tay–Sachs gene, we are unlikely to be able to forget the test results. Despite the fact that memory and forgetting share functions with
deliberate ignorance, our memories are so good that we need to use the tools of deliberate ignorance to shield ourselves from learning information we want to remain hidden.

7 Willful Construction of Ignorance: A Tale of Two Ontologies

Stephan Lewandowsky

Abstract

From Iraq's mythical weapons of mass destruction (WMD) to Donald Trump's record of more than ten daily false or misleading statements, deception and false claims have been an integral part of political discourse for quite some time. Nonetheless, Trump's blatant disregard for the truth has given rise to much concern about the dawn of a "post-truth" era. The author argues that there are striking differences between the tacit ontologies of truth underlying the WMD deception and Trump's false claims, respectively. Whereas the WMD campaign contested a single reality, Trump's false claims often repudiate the very idea of external truths that exist independently of anyone's opinion. The author considers this ontological shift from realism to extreme constructivism to be the most critical aspect of the current "post-truth" malaise. He notes that an extreme constructivist "truth" has formed an essential aspect of historical fascism and Nazism, as well as of contemporary populist movements, and that those conceptions are incompatible with liberal-democratic norms of truth-seeking. The author concludes by pointing toward potential solutions to the "post-truth" crisis.

Willful Construction of Ignorance: A Tale of Two Ontologies

Simply stated, there is no doubt that Saddam Hussein now has weapons of mass destruction.
—U.S. Vice-President Dick Cheney, August 26, 2002

Just remember, what you're seeing and what you're reading is not what's happening.
—U.S. President Donald Trump, July 24, 2018

Down with intelligence! Long live death!
—General José Millán Astray, October 12, 1936

On March 20, 2003, American troops and their allies invaded Iraq, having vowed to rid the country of its weapons of mass destruction (WMD) that were
threatening the world. Except there were none in Iraq at the time. This conclusion became official in September 2004 with publication of the Duelfer Report, which was based on a thorough search of the country by the U.S. Government's Iraq Survey Group. The Duelfer Report was met with bipartisan acceptance in Congress and by President Bush.1 In striking contrast to the absence of actual WMDs on the ground, many Americans continued to believe in their existence for at least a decade. In survey after survey, up to 50% of respondents expressed the belief that WMDs had been found in Iraq after the invasion. This pattern was observed from late 2003 (Kull et al. 2003; Kull et al. 2006; Lewandowsky et al. 2005), through 2004 (Kull et al. 2006), and again in 2006, 2007, and 2008 (Jacobson 2010), and finally again in 2014.2 Those findings are remarkable for at least two reasons: First, they illustrate the resilience of misconceptions to correction (Lewandowsky et al. 2012). Throughout this period, there was no shortage of information about the absence of WMDs, and yet that abundance of information did not appear to make a dent in the public's misconception. There is even evidence that attempts to correct misconceptions ironically increased people's belief in WMDs (Nyhan and Reifler 2010),3 or entrenched other misconceptions surrounding the invasion of Iraq (Prasad et al. 2009). Second, those widespread mistaken beliefs did not arise from some cognitive accident but were carefully constructed by the U.S. and U.K. governments through a deceptive campaign to mobilize public opinion for the invasion. There are now multiple peer-reviewed analyses of the pre-invasion deception and propaganda efforts by the governments of the United States (Altheide and Grimes 2005; Arsenault and Castells 2006; Kaufmann 2004; Seagren and Henderson 2018) and the United Kingdom (Herring and Robinson 2014a, b; Robinson 2017; Thomas 2017). This deliberate campaign successfully constructed lasting public ignorance about the ground truth in Iraq, albeit at a political cost. After accepting the Prime Minister's actions on Iraq for a considerable time (Baum and Groeling 2010; Kriner and Wilson 2016), the British public ultimately turned on Tony Blair, who is now the least popular among all living former or current Prime Ministers (Curtis 2018). The chimerical Iraqi WMDs have turned into a poster boy for the effectiveness of "organized persuasive communication" (Bakir et al. 2018), which seeks to convince the public of a reality that is, in fact, nonexistent. Other examples of the carefully curated and deliberate creation of ignorance include the efforts of the tobacco industry to undermine the public's recognition of the health risks from smoking (Proctor 2011) as well as similar efforts by an

1 Transcript: Bush Responds to WMD Report, FDCH E-Media, Thursday, October 7, 2004; 2:02 PM, https://wapo.st/2V2WRVr (accessed Feb. 14, 2020).
2 http://publicmind.fdu.edu/2015/false/ (accessed Feb. 15, 2020).
3 This effect depends on details of the wording of the question, and with different questions this ironic backfire effect is not observed (Wood and Porter 2018).
array of vested interests and ideological operators to deny the fact that greenhouse gas emissions are altering the Earth's climate (Dunlap and McCright 2010; Oreskes and Conway 2010). At first glance, this deliberate construction of ignorance in others and without their consent, also known as agnotology (Proctor 2008), seems to have little connection to the deliberate ignorance exercised by a person or with their consent (Hertwig and Engel, this volume, 2016). For example, it is common practice to perform musical auditions blindly, with the candidate performing behind a curtain (Goldin and Rouse 2000) to minimize bias of the selection committee. As I will show, however, the social construction of deliberate public ignorance is intimately related to more personal forms of deliberate ignorance. Indeed, the latter may be part of the solution to the former. Fast forward from WMDs to November 9, 2016, the day after Trump was elected president of the United States. The election result was a shock and surprise to many around the world. The U.K. tabloid, The Sun, tweeted4 that "the Simpsons' most absurd prediction in its 27-year history has come true," with the headline simply proclaiming "D'OH!" One reason for the widespread shock was Trump's record of dishonesty during the campaign: PolitiFact identified 70% of his statements as "mostly false," "false," or "pants on fire" lies. The opposing candidate, Hillary Clinton, came in a distant second, with barely more than 25% of her statements falling into the same categories. Nonetheless, a few days before the election, a Washington Post-ABC poll found that Trump had opened an 8% lead over Clinton in terms of honesty and trustworthiness. Perhaps unsurprisingly, Oxford Dictionaries declared "post-truth" to be the "international word of the year" in 2016, reflecting the 2,000% increase in its usage during that year (McDermott 2019). As of May 2019, the Washington Post had catalogued more than 10,000 untruths uttered by Trump during his subsequent presidency, with a daily average of more than 15 erroneous claims during 2018, compared to only around 5 daily untruths in 2017 (Kessler 2018b). President Trump has responded to those fact checks by accusing the media of being "enemies of the people" who spread "fake news." By contrast, Trump has remained largely silent on rumors and conspiracy theories that are actually fake but target his political opponents. For example, nearly 50% of Trump voters entertained the possibility that Hillary Clinton was connected to a child sex ring being run out of the basement of a pizzeria in Washington, D.C. (Kafka 2016). This conspiracy theory originated with a tweet by a white supremacist and then entered the mainstream through Facebook, ultimately prompting a man to fire a semiautomatic assault rifle inside the restaurant (Kafka 2016). There is no record of Trump disavowing those rumors; in fact, members of his transition team even helped to promote the "pizzagate" conspiracy (Bender and Hanna 2016).

4 https://twitter.com/TheSun/status/796479369048899586 (accessed Jan. 27, 2020).

In apparent contrast to Trump's record of falsehoods as campaigner and president, his polling data have remained remarkably steady during his presidency. As reported by FiveThirtyEight,5 Trump's domestic net approval ratings after 716 days in office were only modestly lower than those of some of his predecessors (Reagan and Carter) at the same point in their presidencies. Moreover, an Ipsos poll from August 8, 2018,6 revealed that 29% of the public agreed with Trump's assertion that the news media are the "enemy of the American people," and this rose to a plurality of 48% among Republicans. At first glance, Trump's record of inaccuracy and the curated deceptions surrounding Iraqi WMDs have much in common: both involve dishonesty and the widespread dissemination of misinformation that successfully renders part of the public ignorant, or at least confused, about reality. There are, however, some important differences. Here I focus on the role that is, at least tacitly, assigned to reality in the two cases. The tacit ontology of misinformation was explored by McCright and Dunlap (2017), and I adopt one of their proposed dimensions of classification. In the case of Iraqi WMDs, the false information about their existence was curated by the U.S. and U.K. governments. The U.K. government painstakingly compiled it into "dossiers" (Herring and Robinson 2014a) that were based on government "intelligence" (Thomas 2017). We now know that those dossiers were deceptive (Herring and Robinson 2014a), but perhaps surprisingly, there is relatively little evidence of outright fabrication by U.K. officials, although some intelligence sources were clearly prone to fabrication (Thomas 2017). The U.S. and U.K. governments, therefore, displayed an ontological commitment to a form of realism. They accepted that there was a ground truth and relied on empirical notions, such as "evidence" or "intelligence," to contest the state of that ground truth in Iraq. The fact that Iraqi reality turned out to be different from that which was constructed by the U.S. and U.K. governments does not negate the further fact that the WMD campaign was about a single, albeit contested, reality. Now compare that to the ontology employed by Trump and his entourage and acolytes.7 Trump's false statements cover an extremely broad range of issues, from lying about hush money payments to a pornographic actress (Kessler 2018a) to the invention of six nonexistent new steel plants (Kessler 2017) to blaming a newspaper for "fake news" about himself, when in fact he was never mentioned in the article in question (Cerabino 2018). One notable attribute of many of these false statements is that, unlike the more nuanced

5 https://projects.fivethirtyeight.com/trump-approval-ratings/ (accessed Jan. 27, 2020).
6 https://www.ipsos.com/en-us/news-polls/americans-views-media-2018-08-07 (accessed Jan. 27, 2020).
7 Although many analyses have justifiably focused on the behavior of the U.S. president, it is important not to lose sight of the fact that Trump is underpinned by an infrastructure of media outlets, websites, conspiracy theorists, and pundits that shares and supports his ontology (Giroux 2018). Similarly, populist movements that eschew conventional notions of truth are active in many other western countries.
WMD claims based on government intelligence, they are readily and rapidly shown to be false. Indeed, some of the claims (e.g., that people went out in their boats to watch Hurricane Harvey; Selby 2018) have an almost operatic quality and are not readily explainable by political expediency.8 The obvious falsehoods of some of those statements have been interpreted as showing Trump's "complete disinterest even in old-fashioned lying" (Waisbord 2018a:29). This type of misinformation is not carefully curated but is showered onto the public as a blizzard of confusing and often contradictory statements. McCright and Dunlap (2017) used the label "shock and chaos" to describe this type of misinformation. Shock and chaos are closely aligned with the notion of "bullshit" explored by Frankfurt (2005). When Trump's falsehoods are challenged, the responses provide insights into the underlying ontology. First, Trump rarely, if ever, apologizes for his utterances.9 Second, his spokespersons have repeatedly sidestepped accountability by postulating a seemingly constructivist view of the world, which quite explicitly repudiates the idea of external truths that exist independently of anyone's opinion. Thus, Trump's counselor Kellyanne Conway famously declared that she was in possession of "alternative facts" when defending claims that Trump's inauguration crowd was the largest ever: it was not. Similarly, when Trump attorney Rudolph W. Giuliani sought to explain in August 2018 why the White House had been delaying an interview between the president and special counsel Robert Mueller, he proclaimed that "truth isn't truth." Those deflections are not isolated occurrences but arguably form a pervasive pattern that has been labeled "ontological gerrymandering" (McVittie and McKinlay 2018). Ontological gerrymandering is not confined to the United States. When a British right-wing personality's claim that a car accident had been a terrorist incident was challenged, she dismissed the correction as "blatant state propaganda" and added (Charman 2017):

I have no belief in fact. Fact is an antiquated expression. All reporting is biased and subjective. There is no such thing as fact any more.…There is no truth, only the truth of the interpretation of truth that you see.

This apparent ontological shift from realism to an unbounded constructivism has been noted by several scholars (e.g., McCright and Dunlap 2017; Waisbord 2018a). I consider this shift to be the most critical aspect of the current "post-truth" malaise. This conclusion is accompanied by an important qualifier. There is widespread agreement in the social science literature that much of knowledge is socially constructed and that it is the objective of the social sciences

8 The operatic aspect of Trump's rhetoric may be more than a coincidental wrinkle. The 2016 presidential campaign has been likened to a continuous spectacle (Mihailidis and Viotty 2017).
9 A notable exception is an apology during Brett Kavanaugh's swearing-in ceremony, when the president apologized to the new Supreme Court justice "for the terrible pain and suffering" that he and his family endured during his confirmation hearings (Arnold 2018), which were dominated by allegations of sexual assault against the nominee.
to understand this constructive process (e.g., Berger and Luckmann 1991). I accept the idea that knowledge, including scientific knowledge, is socially constructed and that this process is subject to critical and scholarly examination. However, unlike some strong critics of constructivism (e.g., Boghossian 2006), I do not accept that constructivism in its academic and philosophical sense inevitably entails an "anything goes" relativism. Raskin and Debany (2018:348) articulate strong reasons why epistemological constructivism does not imply ontological relativism: "If a rock is hurtling toward us, we will construe it ontologically as real, hard, and potentially dangerous. To do otherwise would be foolish." Thus, from here on, when I refer to constructivism, I refer specifically to an unbounded overextension of this concept entailing an "anything goes" ontology of truth. Below, I will place this unbounded and overextended constructivist conception of truth into its political and historical context and then examine its psychological fallout and technological foundation. This analysis yields some tentative paths toward a solution.

Constructivist Conceptions of Truth: Political and Historical Context

Explicitly constructivist approaches to truth were at the core of the ideology of Italian fascism and German Nazism. Both rejected positivist thought, or the idea that absolute answers could be obtained by consideration of evidence (e.g., Varshizky 2012). Instead, Nazi writers such as Alfred Rosenberg proclaimed the existence of an "organic truth," whereby "only that is true which promotes the existence of the organically closed, inner-worldly national community" (Voegelin 2000:62). On that view, truth is a personal experience, based entirely on intuition, which "can only be revealed through inner reflection and acknowledgement of the mythic experience of the soul" (Varshizky 2012:326). Knowledge, evidence, and science can be true only "if they serve the purpose of the racially bound nationhood (Volkstums)" (Voegelin 2000:62). One's personal constructed truth is thus inseparable from the existence of an overarching myth, created to "bind the masses emotionally and to arouse in them the politically effective expectation of salvation" (Voegelin 2000:62). Although, in principle, there are no constraints on the nature of the myth, it is typically palingenetic (promising a "rebirth") and ultranationalist (Colasacco 2018). It is this adherence to a myth in preference to individual choices or evidentiary considerations that identifies fascism and other totalitarian ideologies, such as Communism, as political religions (Voegelin 2000). Of course, the existence of a myth is by itself insufficient for people to have the "mythic experience" required to absorb a constructed truth. What is needed in addition, therefore, is a propaganda apparatus that brings the populace into the fold (Colasacco 2018; Voegelin 2000). There could be no fascism without persuasive propaganda (Arendt 1951; Eatwell 1996).

The affinity between fascism and constructivist views of truth is firmly established. However, it does not follow that Trump and other populist politicians can legitimately be considered fascists. There is little appetite among scholars to label Trump a fascist (e.g., Colasacco 2018; cf. Giroux 2018; Peters 2018). In particular, there is no evidence that Trump explicitly seeks to overturn the institutions of constitutional government and replace them with a totalitarian "new order" in a revolutionary national "rebirth" (Colasacco 2018). Trumpist politics are also not accompanied by an identifiable myth, other than the rather diffuse—though potentially palingenetic—appeal to "Make America Great Again." Trumpism, and similar movements in other countries, is thus best understood as a form of radical right-wing populism. Nonetheless, the similarities between fascist conceptions of truth and the ontology employed by Trump and his ideological allies must not be overlooked. I suggest that the "post-truth" discourse is inseparably tied to a constructivist ontology, which in turn is theoretically and empirically inseparable from populist politics and its psychological underpinnings and fallout.

Constructivist "Truth," Populism, and Its Psychological Fallout

A defining attribute of populism is its Manichaean view of the world as a binary conflict between "the people" and its enemies (Waisbord 2018a). Those enemies may be the "elites" or other out-groups such as immigrants (or both). A corollary of this binary view is the affirmation of "commonsense" truths against "elite" lies. In consequence, facts can never penetrate the unfalsifiable premise of populism that there is an eternal conflict between "the people" and "the elites." As Waisbord states (2018a:10):

Critics can never offer facts that question, challenge, or complement populist assertions. Populism's view of good people and bad elites is immune to factual corrections and nuances.

Instead, populists negate the possibility of truth-seeking as a shared goal of a society (Waisbord 2018b). The disregard for facts exhibited by Trump and other populist politicians must therefore be understood as a necessary consequence, rather than an incidental by-product, of their ideology. Populist conceptions of truth are incompatible with liberal-democratic norms of truth-seeking. An ontological analysis, however, can only go so far: It can identify the nature of populist rhetoric and contrast it to democratic norms. It can explain the supply of constructivist rhetoric. It does not, however, explain why there is demand for misinformation. Why do large numbers of people tolerate a political leader who is spouting 15 identifiable falsehoods every single day? To answer this question, we must first understand what has replaced evidence-based truth-seeking in populist discourse. If Washington Post fact
checks nd no traction with segments of the public, what does? In line with the fascist ontology of truth, Trump and other populist politicians’ appeal to their intuitive authenticity to project an (largely imaginary) image of honesty (Theye and Melling 2018). Authenticity is a potentially multifaceted construct (Kernis and Goldman 2006). Here we are concerned with one dimension of authenticity; namely, the relationship between a person’s behavior and his/her claims. It is this consistency between an actor’s “front door” and “backstage” that can project an image of authenticity to others (Hahl et al. 2017). There is much evidence that Trump is considered authentic by his followers. In one survey during the primary season (December 2015), 76% of Republican voters considered Trump to be authentic (Sargent 2015. In November 2018, a Quinnipiac University poll found that 77% of Republicans (but only 6% of Democrats) considered him to be honest. Conversely, 92% of Democrats found him to be dishonest, compared to only 18% of Republicans.10 Several markers of authenticity can be identied in Trump’s rhetoric. For example, Enli (2017) found that more than one third of his tweets expressed political incorrectness, name-calling, and insults. Altogether, Trump has insulted 487 things, places, and people on Twitter (Lee and Quealy 2018). Those clear norm violations are signals of authenticity because while they place Trump outside the conventional sphere of politics (Theye and Melling 2018), they also signal his willingness to speak his mind without artice. Similarly, continual violations of the “establishment” norm of truth-telling, or departure from conventional cognition (Lewandowsky et al. 2018), enable Trump to project himself as an authentic champion of “the people.” Those signals of authenticity are, however, not universally accepted; among Democrats, after all, Trump is nearly uniformly considered dishonest and is the subject of considerable derision. What determines acceptance of authenticity as a replacement or surrogate of honesty? Research has identied trait variables as well as contextual factors that drive endorsement of authenticity. In terms of trait variables, there is some evidence that American conservatives are more susceptible to fake news than liberals. For example, Pennycook and Rand (2019) found that the ability to differentiate between real and fake news was lower among participants who supported Trump than among Clinton supporters. Similarly, endorsement of pseudo-profound bullshit statements (e.g., “consciousness is the growth of coherence, and of us”; Dalton 2016) has repeatedly been shown to be higher among conservatives than liberals (Pfattheicher and Schindler 2016; Sterling et al. 2016). A recent big data analysis has found that conservatives were more likely than others to share fake news during the 2016 U.S. presidential election (Grinberg et al. 2019; Guess et al. 2019). Although these ndings are based on a small number of studies, and are thus best considered suggestive rather than conclusive, the strong underlying 10

10 https://bit.ly/2FoZhqd (accessed Jan. 28, 2020).
correlation between intuitive thinking and susceptibility to fake news observed by Pennycook and Rand (2019) meshes well with populist ontology. Turning to contextual variables, Hahl et al. (2018) experimentally identified some of the conditions under which a "lying demagogue" might be considered authentic. They presented participants with a fictional election in which an incumbent was cast in two different lights. When the incumbent was described as having taken advantage of his position and shown disregard for an out-group, thus compromising institutional legitimacy, a challenger from that out-group who uttered an overt lie accompanied by a misogynistic statement was considered to be more authentic than an honest challenger. By contrast, if the incumbent was not compromised, then a lying challenger was judged less authentic than an honest challenger. In a nutshell, lying may be considered authentic if a person is an outsider who is disadvantaged by an institution that is perceived to be in a crisis of legitimacy. These conditions identified by Hahl et al. (2018) mesh well with Trump's projected image as an "anti-establishment" candidate who shares his supporters' anguish at a presumably oppressive "political correctness" (Theye and Melling 2018). Just as with the ontology of fascism, however, those conditions must be combined with effective propaganda before large segments of the population come to endorse populist demagogues. I address the specific propagandistic circumstances of the current "post-truth" malaise below. The endorsement of an authentic demagogue has cognitive flow-on consequences. In one study (Swire et al. 2017), participants were presented with claims made by Trump on the campaign trail. Participants were asked to provide belief ratings before false claims were corrected and true claims affirmed. Subsequent belief ratings were sensitive to this intervention. People adjusted their beliefs in the appropriate direction (i.e., increased belief for true claims and decreased belief for false claims). However, the feelings and voting intentions of Trump supporters were unaffected by those corrections; that is, the extent of belief shift was uncorrelated with voting intentions. One interpretation of those findings is that the accuracy of Trump's statements is of no concern to his supporters. In another study, partisans were found to engage in a form of "fake news" themselves. Schaffner and Luks (2018) presented participants with two photos of presidential inaugurations side by side and asked them to indicate which one had more people in it. One of the photos was from President Obama's inauguration, the other from President Trump's inauguration. Both photos were taken under identical conditions at the same time, and there is no ambiguity about the fact that far more people attended Obama's inauguration than Trump's. However, the "alternative facts" surrounding this event that had been invoked by Trump's counselor, Conway, were sufficient to convince a sizable share of Trump voters to identify the wrong photo. It is particularly noteworthy that highly educated Trump supporters showed greater inaccuracy (26%) than their less-educated counterparts (11%). Schaffner and Luks (2018:136) interpreted their results to reveal expressive responding, "whereby individuals
intentionally provide misinformation to survey researchers as a way of showing support for their political viewpoint." The preference for authenticity over truth seems to be widely shared by supporters of Trump. We therefore confront a state of Western democracies in which multiple ontologies of truth are in irreconcilable competition. How did we get here and how can we restore the pursuit of evidence-based truth as a consensual feature of democratic societies? Answers to those questions can be found mainly in the political domain. The "post-truth" world arose from socioeconomic and political factors, and ultimately the solution will therefore require socioeconomic and political measures. Those political analyses are beyond the present scope.11 Here, I focus instead on selected psychological aspects of the technology and communication strategies that have enabled a populist ontology of truth to find traction in Western societies.

The Road to "Post-Truth"

We live in an era of "cultural chaos" (McNair 2017): new communication technologies have "made public access to potentially destabilizing information easier, and elite control of unwanted information harder" (McNair 2017:504). WikiLeaks or Ed Snowden, for example, have subverted conventional authority and, in concert with social media, have arguably contributed to protests and democratization efforts in some instances (Jost et al. 2018). But the same cultural chaos, in which information is no longer distantiated so that violence in Burkina Faso can unsettle people's sense of security in Castrop-Rauxel, has given rise to a heightened sense of crisis (McNair 2017). A sense of crisis is a necessary condition for "authentic lying demagogues" to find traction (Hahl et al. 2018). And once demagogues find traction, their ontology of truth creates the necessary conditions for "shock and chaos" misinformation (McCright and Dunlap 2017), which further amplifies the cultural chaos in a never-ending feedback loop. I take shock and chaos misinformation to refer to falsehoods that are dispersed not with the intent to persuade consumers of a particular state of the world, but to disrupt, undermine, and cast into doubt targeted information. Here I focus on three principles of shock and chaos misinformation: incoherence and conspiracism; diversion and deflection; and flooding and trolling. Much existing analysis has explored those principles with respect to Russian state-sponsored efforts (Jamieson 2018; Paul and Matthews 2016). I accept that focus; however, I consider it to be for computational convenience only.

11 To provide brief pointers, some societal mega trends that may be particularly relevant to the emergence of the post-truth society are listed in Lewandowsky et al. (2017). Funke et al. (2016) provided a quantitative model of the temporal linkage between financial crises and subsequent outbursts of populism.
Analyses of shock and chaos misinformation apply equally regardless of the particular source.

Incoherence and Conspiracism

Conventional wisdom holds that persuasive campaigns should avoid contradiction (Paul and Matthews 2016). In stark contrast, Russian sources routinely issue highly contradictory accounts. For example, after the downing of Malaysia Airlines MH17 by a Russian-made missile in 2014, Sputnik, RT (formerly Russia Today), and other pro-Kremlin websites first denied the involvement of a Russian missile. Then the same sources blamed the downing on a Ukrainian attack. Then they said the pilot had deliberately crashed and that the plane had been full of dead bodies before impact. Finally, these sources argued that the incident was part of a conspiracy to besmirch Russia (Lewandowsky and Lynam 2018). Similar incoherent narratives were provided by Vladimir Putin during the crisis that led to the annexation of the Crimea by Russia (Paul and Matthews 2016; White 2016). Although contradictory messages can enhance persuasiveness under certain circumstances (Reich and Tormala 2013), those circumstances (high trust in the source, strong arguments) do not typically apply to Russian sources with a Western audience. The incoherence of shock and chaos may thus serve a different purpose. Incoherence is a known attribute of conspiracy theories (Lewandowsky et al. 2016; Wood et al. 2012), and the mere exposure to conspiracy theories, in turn, is known to reduce trust in official information (Einstein and Glick 2015). Similarly, when people are asked to construct a narrative from a set of available information, the presence of extreme conspiratorial statements reduces reliance on official information (Raab et al. 2013). The (sometimes) preposterous claims by Russian sources (Richey 2018) or Trump (Lewandowsky 2019) may thus fail to persuade, but they do succeed in creating doubt about official information.

Diversion and Deflection

On November 19, 2016, President-elect Trump unleashed a Twitter fusillade against a Broadway play in New York City, claiming that the cast of Hamilton had "harassed" Vice-President-elect Pence, who attended the performance. Ostensibly, this tirade was triggered by the cast reading an open letter at the end of a show, pleading for respect for a "diverse America." Curiously, Trump's tirade coincided with the revelation that he had agreed to a $25 million settlement (including a $1 million penalty to the State of New York) of lawsuits targeting his (now defunct) Trump University. This timing may have been entirely coincidental, but the confected Twitter outrage may also have been a targeted distraction, as some observers suggested at the time (Bulman
2016). Diversion has been nominated as one rhetorical category in a taxonomy of Trump's tweets (Lakoff 2017). The possibility that the Hamilton tirade was a strategic diversion finds indirect support in an analysis of Google Trends (Lewandowsky et al. 2017), but an overall quantitative analysis of this particular strategy is lacking to date. Closely related to diversion is the strategy of deflection (Lakoff 2017), whereby another party is accused of dishonesty or "fake news" while the deflective message is itself false. Trump has been shown to use the accusation of "fake news" to spread his own mis- and disinformation, using the accusation to frame his own messages as truth (Ross and Rivers 2018). The idea that Trump might deliberately engage in such diversionary tactics is consonant with the observation that similar methods are employed by corporate actors in their attempts to "greenwash" their actions (Siano et al. 2017).

Flooding and Trolling

One overarching attribute of shock and chaos misinformation is its sheer volume (Paul and Matthews 2016; Richey 2018). Volume matters because the signal-to-noise ratio on the Internet is diluted every time another falsehood is published. A vast number of false stories also prevents rebuttals from being issued because fact-checking is necessarily more painstaking than inventing the claim that the Pope had endorsed Trump or that Clinton sold weapons to ISIS (Hallin 2018). One particular flooding technique involves "trolling," a disruptive online bullying behavior that involves "posting inflammatory malicious messages in online comment sections to deliberately provoke, disrupt, and upset others" (Craker and March 2016:79). Trolls create a rhetorical environment in which any substantive and serious response would only elicit further abuse, thereby shutting down the possibility of civil conversation. Moreover, the mere presence of uncivil comments is sufficient to cause attitude polarization (Anderson et al. 2013). Individuals engage in trolling in pursuit of satisfaction; the personality traits psychopathy and sadism are strong predictors of trolling behavior (Craker and March 2016). However, trolling has also been weaponized by the Russian government. Using "troll farms" of professionals who flood the Internet with disruptive content, weaponized trolling can shut down legitimate conversation (Kurowska and Reshetnikov 2018). Russian trolls have demonstrably created discord around events in the United States and Germany (Prier 2017). It is now also clear that trolls interfered in the U.S. presidential election and the Brexit referendum in the United Kingdom in 2016, and the French presidential election in 2017 (Prier 2017). Although Russian trolls favor the extremist right wing overall (thus supporting Trump, Brexit, and Le Pen), they also frequently stoke both sides of an issue to maximize discord and division (Romano 2018).
For example, Russian trolls have been found to engage on both sides of the vaccination issue, amplifying both scientific content and anti-vaccination disinformation (Broniatowski et al. 2018). Given the crucial role of the perceived scientific consensus in determining the public's attitude toward issues such as vaccinations and climate change (Lewandowsky et al. 2013; van der Linden et al. 2015), the amplification of both sides of the issue serves to create a false equivalency that is likely to erode the perceived scientific and public consensus (Broniatowski et al. 2018).

The Remainder of the Iceberg and Implications for Common Knowledge

This discussion has omitted numerous other variables that determine the efficacy of shock and chaos misinformation. Foremost among those are computational propaganda tools such as social-media "bots," micro-targeted messaging, and avatars (e.g., Howard et al. 2018) as well as more basic adverse attributes of social media, such as the misogyny it supports (Eckert 2017) and the simplicity, impulsivity, and incivility it fosters (Ott 2017). I also omitted discussion of security issues, such as state-sponsored cyberattacks to obtain confidential information (e.g., Farrell and Schneier 2018; Inkster 2016), and skirted the implications of big data analyses of news consumption (e.g., Allcott and Gentzkow 2017; Schmidt et al. 2017). A full understanding of shock and chaos misinformation requires examination of all of these variables. Even the present selective discussion should, however, suffice to establish the risks of shock and chaos misinformation. Democracy requires a body of common political knowledge (Farrell and Schneier 2018). This common knowledge provides the stabilizing expectations that enable societal coordination (e.g., knowledge and confidence that the voting system is fair and that an election defeat does not prevent future wins). Prolonged exposure to shock and chaos misinformation may pollute the information environment sufficiently to compromise this tacit, but crucial, shared knowledge (Farrell and Schneier 2018; White 2016).

Exiting the "Post-Truth" World

The antidotes to the populist ontology of truth, and the shock and chaos misinformation it entails, follow naturally from the preceding analysis. This analysis identified two main elements of the "post-truth" world. First, authoritarians, autocrats, and populists avoid deliberation and actively seek to suppress or subvert reasoned discourse (Hinck et al. 2018). Second, "post-truth" propaganda does not necessarily seek to persuade but to divert, distract, deflect, and undermine common knowledge (Farrell et al. 2018; Paul and Matthews 2016).

Deliberative Democracy

An antidote to the first element involves deliberative forms of democracy, such as citizens' assemblies and other deliberative fora in which randomly chosen citizens consider issues in depth and with expert input (Chambers 2018). Under the right circumstances (e.g., expert facilitation), deliberations by groups of citizens can be inclusive, depolarizing, and constructive (Chambers 2018; Curato et al. 2017). The constructive role of deliberative bodies can be illustrated with respect to several recent referenda. In the United Kingdom, the Brexit referendum was marred by a surplus of misinformation, much of it disseminated by tabloid media, that has been characterized as "systematic epistemic rights violations" (Watson 2018). The referendum has engendered a crass majoritarianism, with growing and toxic polarization. In striking contrast, Ireland was able to conduct two referenda on highly emotive issues (marriage equality and abortion) without experiencing a comparable toxicity. One ingredient of Ireland's success was its citizens' assemblies, which were convened by the Irish government and informed the subsequent popular vote, based on extensive expert interrogation (Farrell et al. 2018). The key role of deliberation is further supported by the fact that a citizens' assembly constituted in the United Kingdom after the referendum (for research purposes) yielded recommendations for Brexit that were far more nuanced and pragmatic than the rhetoric during the campaign (Renwick et al. 2018). It is of particular interest that the assembly favored a continuation of free movement (i.e., free immigration from EU countries) even though removal of that right was a centerpiece of the campaign to leave the EU (and is now presented as an achievement by the U.K. government). Deliberative assemblies cannot be a panacea to guard against populism. However, their design ensures resilience against the processes of demagoguery and propaganda reviewed earlier. This resilience has been repeatedly confirmed in real settings involving citizens' assemblies. Notably, Ireland has been largely spared the populist tendencies observed in other comparable countries (Culloty and Suiter 2018; Suiter et al. 2018).

Journalistic Norms and New Narratives

The antidote to shock and chaos misinformation cannot consist only of point-by-point rebuttal (Richey 2018). This is nearly impossible in light of the sheer volume of misinformation, and, in the end, debunking it is also often distressingly ineffective (Lewandowsky et al. 2012). Instead, putting aside regulatory (Wood and Ravel 2018) and technological (Lewandowsky et al. 2017) countermeasures for now, shock and chaos can only be met by proactive messaging and pursuit of an alternative narrative (cf. Lewandowsky et al. 2012). Several alternative narratives have been tabled (e.g., Hellman and Wagnsson 2017). One intriguing option is to avoid covering shock and chaos
misinformation. This was practiced by the French press during the French presidential election in 2017. When the campaign of President Macron was hacked and emails leaked, the press did not cover the content of those emails. Instead, the media focused on the hacking and the influence operation behind the hack, refusing to give credibility to the leaked information (Prier 2017). Similarly, the Irish media have successfully served as gatekeepers against populism (Culloty and Suiter 2018; Suiter et al. 2018). In striking contrast, the American media appeared more concerned with the content of Democrats' hacked emails than the fact that they were obtained illegally. Only after the election of Trump did the New York Times concede that it had become "a de facto instrument of Russian intelligence" by publishing multiple stories that cited hacked content (Lipton et al. 2016). Recommendations by legal scholars not to publish hacked content have followed (Zelinsky 2017), and the New York Times has urged the media to ignore Trump's "Twitter expectorations" (Bruni 2019). Ironically, this is a recommendation for the deliberate creation of public ignorance, ostensibly for the public good. A more proactive narrative seeks to counterbalance the sense of cultural chaos with a message of order and structure (Hellman and Wagnsson 2017). The details of such narratives are beyond the present scope of this discussion. However, given the widespread discontent that provides the breeding ground for populism, it is crucial for a counternarrative to be built on a message of inclusivity and solidarity that can withstand populism (Stacey 2018).

New Norms of Citizenship

Finally, even positive narratives require a receptive audience. There has been a decline in trust in traditional media, at least among some groups. For example, in Germany the mainstream media are routinely besmirched as Lügenpresse (lying press) by populists (Quandt 2012). For journalistic norms and new narratives to be an effective solution to exiting the post-truth era, people must learn to dismiss false information and fake news. There is some evidence that this skill can be acquired (Lewandowsky 2019). Specific recommendations of how this skill can be exercised have been provided by Lewandowsky and Lynam (2018). It is encouraging that "fake news" finds much less traction among young "digital natives" than among the elderly. In a big data analysis, Guess et al. (2019) found that people over age 65 share articles from fake news domains seven times more frequently than the youngest age group. Behr (2017) describes a broader educational context that would be required to achieve a restoration of democratic spaces for deliberation and diversity. Finally, and somewhat ironically, deliberately choosing not to know may be another arrow in the quiver against the deliberate construction of social ignorance by demagogues. Deliberate ignorance may be a strategy to shield oneself from misinformation, Trump's barrage of falsehoods, and conspiracy theories. This is a nontrivial task, as lies, "fake news," and conspiracy theories
are, by design, made to be more interesting and novel than simple facts and truths. It may take considerable cognitive effort to mentally quarantine the claim that Hillary Clinton is a shape-shifting reptile or that she sold arms to ISIS. Accordingly, deliberate ignorance can be understood as a smart strategy to protect oneself against the deliberate construction of public ignorance. Evidence on this issue is ambivalent. As noted earlier, partisans are willing to shift their belief in specific statements made by Trump (e.g., disbelieving falsehoods after they have been corrected), but those changes do not affect their support or feelings for Trump (Swire-Thompson et al. 2019; Swire et al. 2017). In the present context, one could interpret those results to imply that people remain deliberately ignorant of the truth value of Trump's statements (unless they are corrected in an experimental intervention), and that this ignorance shields people from having to update their core beliefs about Trump. This interpretation is supported by another aspect of the findings by Swire and colleagues, namely that Trump supporters expressed nearly the same extent of belief in both true and false statements made by Trump before the experimental intervention. However, this interpretation must necessarily remain tentative for now. Other relevant evidence can be adduced from a recent study by Lewandowsky, Jetter, and Ecker (submitted), which related Trump's Twitter vocabulary to media coverage of issues that were politically threatening to the president. They found that increased coverage in the New York Times or ABC Evening News of the Mueller investigation into Russian influence during the 2016 election triggered increased Twitter activity by Trump on unrelated topics that represented his political strengths (e.g., job creation). That increased Twitter activity, in turn, reduced subsequent coverage of the Mueller investigation by the media. In the present context, one can interpret this as a failure of strategic deliberate ignorance on the part of the media. Instead of ignoring irrelevant tweets, they were successfully diverted from the Russia-Mueller coverage by Trump.

Concluding Remarks

Democracy is a never-ending quest (Przeworski 2016). Democracy requires pluralism and the recognition that citizens are irreducibly diverse (Galston 2018). Populism's binary view of the "people" versus the "elite" cannot accommodate that diversity and will therefore inevitably continue to search for new enemies that, once identified, will need to be combated. Ultimately, the logic of populism will threaten the rights of minorities and enable a creep toward autocracy (Galston 2018). Democracy also requires common political knowledge that is accepted by all actors (Farrell and Schneier 2018). The blizzard of shock and chaos misinformation that is propelled by the logic of populist ontology is undermining that common knowledge. Turning to the global level, a populist vision is incompatible with a multilateral international order that is governed by law and compromise (Kasner
2017). It is at this level of abstraction that the carefully curated deceptions involving Iraqi WMDs share further common ground with the shock and chaos fake news that is entwined with a constructivist view of truth. As Walter Benjamin, a German philosopher who fled from the Nazis, noted from exile in 1935 (Benjamin 2004:1239): "Fascist ideology culminate[s] in one thing: war." The cost of failure to reclaim a realist ontology of truth may therefore be high indeed.

Acknowledgment

I thank Kent Peacock, James Ladyman, Ralph Hertwig, Gordon Brown, and the participants of the Strüngmann Forum for comments on an earlier draft.

Models

8 Models of Deliberate Ignorance in Individual Choice

Gordon D. A. Brown and Lukasz Walasek

Abstract

This chapter reviews models of deliberate ignorance and argues that models developed in both psychology and economics may be useful in understanding different aspects of deliberate ignorance. Such models must specify what quantity is increased at the expense of the potential benefits of the ignored information. A model classification is developed based on the quantity that different models assume to be so increased. Three broad classes of relevant models are identified: (a) models that assume that utility associated with the content of beliefs may be increased by deliberate ignorance, (b) models that assume that the consistency of beliefs with each other or with a sense of identity may be increased by deliberate ignorance, and (c) models that assume that the quality of decision making may be increased by deliberate ignorance. Gaps in the literature are identified. In particular, it is suggested that insufficient attention has been given to the distinction between the effects on an agent's utility of acquiring information (a one-off change) and of possessing information (being in a steady state of changed beliefs). Ultimately, models of deliberate ignorance will need to address the relationship between people's (often partial and contradictory) knowledge about the world and their reasoning about that world.

Introduction

This chapter reviews, with a broad brush, disparate computational and mathematical models that we believe may be useful in understanding deliberate ignorance. We limit our discussion to quantitative and mathematical models rather than descriptive or verbal ones and avoid literature that is primarily empirical. We develop a classification of models and attempt to integrate and evaluate the strengths and weaknesses of different approaches to the modeling of deliberate ignorance. For our discussion, we adopt a working definition as follows: deliberate ignorance is the conscious individual or collective choice not to seek or use
information in situations where the marginal acquisition costs are negligible and the (individual or social) benefits are potentially large. We confine our discussion to decision making at the level of the individual rather than the group, as strategic considerations are the focus of other authors in this volume. Moreover, we do not restrict ourselves to cases where a decision is "deliberate" in the strict sense of "conscious; the result of deliberative thought." Such a restriction would exclude many relevant models in both psychology and economics as well as possibly marginalizing some of the classic examples of deliberate ignorance. It seems far from clear, for example, that avoidance of medical tests always reflects deliberate and conscious processing. More specifically, much research in social and cognitive psychology supposes that we are often unaware of (and may also be mistaken about) the reasons for our actions and inactions; deliberation often seems to follow, rather than precede, a decision. To the extent that this supposition is correct, models of conscious processes alone may miss crucial insights pertinent to deliberate ignorance. Another reason for interpreting "deliberate" loosely is that many of the models most relevant to deliberate ignorance are to be found in the economics literature, and such models are typically interpreted as "as-if" accounts. As-if models do not claim to characterize the deliberative and conscious psychological processes that underpin decision making; instead, they are couched at a higher (e.g., algebraic) level of description and typically aim to make sense of behavior by identifying inferred preferences that explain a person's behavior as being consistent. Thus, there is no assumption of conscious deliberative processing in such models. Here, however, we argue that as-if models nevertheless offer key insights into deliberate ignorance. The working definition of deliberate ignorance involves the choice not to seek or use information in situations where possession of the information would confer "large potential benefits." However, in seeking to understand why people engage in deliberate ignorance, theorists are necessarily looking for some quantity that is optimized or at least increased as a result of the choice not to look for, or use, additional information (while acknowledging that there may also be many costs). Any model of deliberate ignorance must assume that the ignorance is in some respect beneficial for the agent who seeks it, and the task of the modeler is to identify what the quantity being optimized is, as well as perhaps to specify the relevant psychological mechanisms, either analytically or through simulation. Indeed, the question of "what is being maximized when deliberate ignorance occurs" is central to the classification of models that we develop below. To put it in the language of economic models, people can be understood as having preferences, and the job of models is not to say whether those preferences are right or wrong (for nothing can be said about that; "people want what they want") but simply to identify what those preferences are. The task for an economic model, therefore, is to identify the preferences that deliberate ignorance is helping to satisfy. This approach contrasts with evolutionary or functional
perspectives, according to which preferences can be explained in terms of contributions to fitness. Such preferences must act in opposition to preferences for information. It is well established that under many circumstances people place a value on gaining information per se, even if the resulting information is unlikely to inform their future actions. Thus, people will pay to discover the secret of a magic trick or to discover what would have happened if they had made different choices in an experiment. People's choices of information in logical reasoning tasks can be better explained if it is assumed that they choose items that will maximize information gain rather than apply logical rules (Oaksford and Chater 1994). In George Miller's terms, we are "informavores" (see also Loewenstein 1994). Demonstrations of deliberate ignorance seem to show that there is some factor deriving from the expected content or consistency of belief, rather than the amount of information gained, which can override this general psychological preference for information seeking. So, what are the preferences that might be satisfied by deliberate ignorance?
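
Before turning to that question, it may help to make the informational benchmark concrete. The expected information gain that drives accounts such as Oaksford and Chater's can be written as the expected reduction in Shannon entropy over a set of candidate hypotheses; the following is a generic sketch of that idea rather than their exact formulation:

H(P(\mathcal{H})) = -\sum_i P(h_i) \log_2 P(h_i)

\mathrm{EIG}(q) = \sum_d P(d \mid q)\,\bigl[\,H(P(\mathcal{H})) - H(P(\mathcal{H} \mid d, q))\,\bigr]

Here the h_i are hypotheses with prior probabilities P(h_i), q is a candidate query (e.g., a card to turn over in a selection task), and d ranges over the data the query could return, with the posteriors P(h_i | d, q) obtained by Bayes' rule. An informavore, on this benchmark, prefers the query with the highest expected information gain; a model of deliberate ignorance must explain why an agent sometimes prefers a query with lower expected gain, or no query at all, even when acquisition costs are negligible.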

A Preliminary Classication Figure 8.1 illustrates the sources of deliberate ignorance effects assumed by the various models that we review below. Current beliefs and preferences (i.e., those that hold at the time of decision making) are illustrated on the left side of the gure. Current beliefs may be associated with utility to the extent that they are consistent with an individual’s preferences; holding a belief that I carry the gene for Huntington disease may not sit well with my preference for a state of the world in which I have a long and healthy life, and hence changing, suppressing, or simply not thinking about that belief may increase my well-being. The left-hand side also shows the importance for utility of having beliefs that are consistent with one another and/or consistent with one’s sense of identity and/or ego; many economic models (reviewed below) accord a central role to identity, consistency, and the possession of positive views about the self. To identify additional possible sources of deliberate ignorance, Figure 8.1 also represents anticipated future beliefs and preferences (right side of the gure) together with the temporal trajectory linking present and future beliefs. These may also inuence decision making at the time it occurs. First, deliberate ignorance may be predicted by a focus on the utility anticipated to be associated with potential future beliefs. As with current beliefs, anticipated possible future beliefs may be associated with the loss of utility to the extent that they are inconsistent with anticipated future preferences, each other, or an anticipated future sense of identity. Although deliberate ignorance is normally assumed to reect some anticipated difference between current and future states, only some potential accounts of deliberate ignorance take explicit account of the amount of time for

[Figure 8.1 appears here. It maps current beliefs (steady state) and their ties to preferences and to identity, ego, and consistency at the time of decision making; information acquisition (process); the passage of time (delaying resolution of uncertainty; keeping options open); and future beliefs (steady state) with their own ties to preferences and to identity, ego, and consistency in the anticipated future.]

Figure 8.1 Schematic to illustrate the various sources of deliberate ignorance effects that have been assumed by models.

Although deliberate ignorance is normally assumed to reflect some anticipated difference between current and future states, only some potential accounts of deliberate ignorance take explicit account of the amount of time for

which a state of ignorance is maintained. Anticipatory emotions, such as hope, may lead to deliberate ignorance if that ignorance increases the amount of time before hope is likely to be dashed (e.g., one might postpone finding out the outcome of a low-probability but high-win gamble, such as a lottery entry). Thus, we view a choice to delay the acquisition of information as a form of deliberate ignorance, because such a choice maintains a state of relative ignorance for longer than necessary. Delay-related deliberate ignorance can also result from models that assume preferences over the timing of uncertainty resolution, or Kreps-Porteus preferences (Kreps and Porteus 1978). We therefore include "passage of time" in Figure 8.1.

In addition, by including "information acquisition" as a separate component, the schematic reflects a distinction between effects on anticipated utility that are due to the acquisition of new knowledge and effects due to the possession of it. Imagine, for instance, deciding whether to go to the doctor to receive results of a gene test for Huntington disease. It is intuitively plausible that your feelings about going to the doctor would be strongly influenced by a vivid mental picture of yourself sitting in the chair in the doctor's surgery receiving the news, and less influenced by considerations of how you might feel in the longer term as a person in possession of the unfavorable diagnosis, but having had time to adapt to it. Although many models of deliberate ignorance fail to note this distinction, it is important psychologically and is captured in models that assume that necessarily transient emotions, such as surprise, may motivate deliberate ignorance.

Finally, although Figure 8.1 focuses on preferences and beliefs rather than the choices and decisions that would result from them, deliberate ignorance may also be motivated by expectations about the quality of decision making and choice conditional on the amount of information available to the decision maker. Although these expectations are not represented, there are many cases where ecologically


optimal decision making or future prediction may be improved by discarding or ignoring information, and we review such models below. We can now identify three broad categories of models, with the categories differentiated by the quantity that is assumed to be amenable to increase through deliberate ignorance:

1. People may choose to ignore information likely to support beliefs that in some way threaten their preferences (here broadly interpreted to include desires and attitudes). These models assume that utility associated with the content of beliefs is maximized.
2. People may ignore information to improve the consistency of their beliefs with each other or with their identity; there might be a cognitive cost to inconsistency per se or to believing, for example, that one has low ability if such a belief is inconsistent with the belief that one has high ability. Relatedly, there may be a cognitive cost to changing one's mind and it may be this cost that is minimized by deliberate ignorance.
3. People might be maximizing the quality of their decision making. For example, they might base decisions on smaller-than-available samples of information, leading to superior identifications of contingencies in the world albeit at some cost of false positives; ignore information to prevent known cognitive biases contaminating their decision making; and discard information to prevent "overfitting" of predictive models of the environment.

Overall, our classication of computational models is based on their underlying assumptions of how deliberate ignorance may emerge due to interactions involving people’s preferences, beliefs, and the time course of resolving uncertainty. This framework is, therefore, distinct from existing taxonomies of deliberate ignorance (Gigerenzer and Garcia-Retamero 2017; Hertwig and Engel 2016). In these other taxonomies, the primary goal is to delineate various causes of deliberate ignorance, but this does not require reference to the mathematical and formal aspects of the underlying processes. How does the modelbased taxonomy developed here map onto those developed by others? We see a close correspondence between our “belief-content” models and two of Hertwig and Engel’s subcategories of functions of deliberate ignorance (emotion regulation and regret avoidance; suspense and surprise maximization). In addition, models based on enhancing decision-making quality fall neatly within their performance-enhancing subcategory. Our model-derived category of models involving consistency and identity, in contrast, does not t well although it has some overlap with Hertwig and Engel’s “strategic” category as applied to individuals (e.g., self-disciplining). We now review models within each of these three categories. Perhaps surprisingly, the majority of relevant mathematical models can be found in the economics literature. Indeed, the idea that beliefs can, in themselves, carry implications for current and predicted well-being has received more attention,


at least as far as provision of specic models is concerned, in economics than in psychology.

Deliberate Ignorance and Models of the Content of Beliefs

Several economic models assume that people derive utility from their beliefs about states of the world (Brunnermeier et al. 2017; Brunnermeier and Parker 2005; Caplin and Leahy 2001, 2004; Ely et al. 2015; Epstein 2008; Golman and Loewenstein 2016; Golman et al. 2016, 2017; Köszegi 2003, 2006, 2010; Köszegi and Rabin 2009; Loewenstein 1987). The notion of belief-dependent utility represents a strong departure from the standard approach, according to which beliefs and preferences are independent. In the classic view, a person should only choose to obtain new information for its instrumental value. In utility-from-beliefs models, beliefs about future outcomes or the present state of the world can be a source of positive utility in themselves. Models that explicitly accord a role to belief-related utility necessarily open up the possibility that utility might be increased through deliberate ignorance. Some theorists explicitly explore such implications, while others do not. Below, we distinguish between content-based models that do and do not allow utility to be influenced by the amount of time that an individual is in a particular belief state ("duration-dependent" and "duration-independent" models). The passage of time could be relevant to deliberate ignorance either because positive emotions (like suspense) might be more valuable if they obtain for longer or because it might be preferable to experience negative emotions, perhaps related to uncertainty, for as short a time as possible or in the future rather than the present.

Duration-Independent Content-Based Models

A number of economic models quantify the utility loss that may be experienced when preference-relevant beliefs change as a result of new information, and hence can explain deliberate ignorance for such information. Such models typically do not assign a major role to the amount of time that passes between the decision to engage in deliberate ignorance and (potential) information acquisition. For example, Köszegi (2003) invoked utility defined over beliefs to explain why patients may rationally choose to avoid new information about their health condition. According to his model, a patient who learns new information can choose appropriate treatment, which increases anticipatory utility since the patient will expect their health to improve. This increase, however, may be offset by the negative impact of learning that the state of health is poor. The patient must trade off the utility loss associated with receiving bad news against the benefit of a better knowledge of their health, and a decision to avoid visiting the doctor may reflect this trade-off


(see also Schweizer and Szech 2018). Köszegi (2010) discusses at length the role of disappointment aversion in informational preferences (which can include deliberate ignorance). Imagine that you possess an instant-lottery ticket with a 50% chance of winning £50 and a 50% chance of winning £100, and the choice of resolving the lottery immediately, or waiting to do so. Under ignorance, you will likely be either surprised or disappointed by the outcome, and the degree of surprise or disappointment will depend on the reference point given by your expectations. The state of ignorance may carry higher or lower utility depending on your degree of disappointment aversion (and possible trade-offs with optimism). A number of models address the relationship between investment behavior and disappointment aversion. For example, in the Andries and Haddad (2017) model, increases in the subjective probabilities of disappointing outcomes may lead people to prefer infrequent "bundles" of information over more frequent small amounts of information (e.g., checking the performance of investments frequently); for an account based on "news utility," see Pagel (2018).

In a series of papers, Golman and Loewenstein have proposed a unified utility-based theoretical framework that captures preferences for acquisition and avoidance of information (see Golman and Loewenstein 2016; Golman et al. 2017, 2019). The key deliberate ignorance-relevant assumption of the model is that people prefer to avoid attending to unpleasant anticipated outcomes. Despite a preference for seeking information for its instrumental value, people may engage in deliberate ignorance if the new information would be psychologically painful or unpleasant. In a number of other models discussed below, preference for information emerges from people's underlying attitudes toward risk, time, and uncertainty. In the Golman and Loewenstein model, however, preference for information is the source of (and can therefore influence) preferences for risk and uncertainty.

Unlike many traditional utility-based models in which utility is assigned to material outcomes, Golman and Loewenstein's model assigns utility to cognitive states, which include (a) strength of attention paid to unanswered questions and (b) subjective judgments about the probabilities of possible answers to such questions. More formally, consider a question set Q = {Q_1, …, Q_m} with corresponding attentional weights, w = {w_1, …, w_m}. For each question Q_i an individual holds subjective beliefs over potential answers, \mathcal{A}_i = \{A_i^1, A_i^2, \ldots\}. The space of answers and prizes (X) is then given by \mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2 \times \ldots \times \mathcal{A}_m \times X. The cognitive state is then given by a subjective probability (\pi) defined over possible answers to a given question and the vector of attention weights (w). In the model, acquisition of new information is treated as a decision to accept a lottery over possible cognitive states. The utility function in the model is defined over cognitive states that may result from actions s \in S, which may lead to a discovery of new information and revision of one's prior beliefs:

U(\pi, w \mid S) = \max_{s \in S} u(\pi^s, w^s),  (8.1)


where u is the utility associated with a particular action and U is the utility resulting from choosing the action s associated with the maximum u. The desire to acquire new information is therefore given by the difference in the expected utility before and after receiving new information:

D_i = \sum_{A_i \in \mathcal{A}_i} \pi_i^0(A_i) \, U(\pi^{A_i}, w^{A_i} \mid S) - U(\pi^0, w^0 \mid S),  (8.2)

where (\pi^0, w^0) specifies the initial cognitive state. Thus, for example, a decision to find out about the results of a medical test depends on the utility associated with the potential cognitive states. If the anticipated result is negative, an individual will avoid the answer to the relevant question Q_i. This utility function instantiates three mechanisms that guide the desire to acquire or avoid information:

• Information has instrumental value: its availability can increase the utility of subsequent actions.
• Individuals gain utility from acquiring new information and closing information gaps. A "curiosity motive" is thus conveyed by utility gained from new information absent its instrumental value.
• Motivated attention (and therefore deliberate ignorance) can emerge if the new information influences attention weights.

Individuals may actively avoid information that increases how much they think about unpleasant outcomes, regardless of the instrumental value of new knowledge, and potentially overcome the curiosity motive. Golman and Loewenstein (2016) describe conditions under which motivated cognition can result from the influence of attention weights on utility. Full specification of the model, its assumptions and consequences, can be found in the original papers (Golman and Loewenstein 2016; Golman et al. 2017, 2019). Here we note that the key determinant of deliberate ignorance in their model is the valence of the surprising information. Surprising information (causing large revision in prior beliefs) is associated with a shift in attention weights upon acquiring new knowledge. This will amplify people's reluctance to gain information when the information is associated with negative valence.
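The following stripped-down numerical illustration of Equation 8.2 may help fix ideas. The utility function over cognitive states is a toy of our own choosing (attention weight times the expected valence of the believed answer); the full specification is given in the original papers.

    # One binary question ("Do I carry the gene?") with prior belief pi0
    # that the answer is "yes." Toy utility of a cognitive state:
    # attention weight times the expected valence of the believed answer.

    def state_utility(p_yes, attention, valence_yes=-10.0, valence_no=0.0):
        return attention * (p_yes * valence_yes + (1 - p_yes) * valence_no)

    def desire_for_information(pi0, w_ignorant, w_informed):
        # Equation 8.2: expected utility over post-learning cognitive
        # states, minus the utility of the current (ignorant) state.
        u_after = (pi0 * state_utility(1.0, w_informed)
                   + (1 - pi0) * state_utility(0.0, w_informed))
        return u_after - state_utility(pi0, w_ignorant)

    # Learning raises attention to a feared answer (w: 0.2 -> 1.0), so the
    # desire for information is negative: ignorance is preferred.
    print(desire_for_information(pi0=0.2, w_ignorant=0.2, w_informed=1.0))  # -1.6

In this toy case, finding out shifts attention to the question (w rises from 0.2 to 1.0), so even though beliefs become accurate, expected utility falls and D_i is negative: exactly the motivated-attention route to deliberate ignorance described above.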

Duration-Dependent Content-Based Models

A number of other models assign a more central role to the amount of time that passes before uncertainty is resolved. Thus, a preference for deliberate ignorance could result from attempts to maximize positive anticipation or minimize dread, and it seems reasonable to assume that the utility gains of a month of eager anticipation (e.g., of winning the lottery) are greater than the gains from just a week of experiencing the same anticipation. Alternatively, but relatedly, deliberate ignorance could result from a general preference for delaying resolution of uncertainty.

Loewenstein (1987) developed a model in which utility could be gained by delaying consumption and presented illustrative data (e.g., people are willing to pay more for a kiss that is delayed by three days than for an immediate one). People may also sacrifice currently preferred consumption options to keep future options open, thus taking account of the fact that their preferences may change in the future (Kreps 1979). Preferences for delaying consumption or commitment (choice) do not in themselves lead to deliberate ignorance. However, models of preferences for delay can be extended to deliberate ignorance when the future outcome is uncertain, because a choice to delay the resolution of uncertainty (i.e., delaying acquisition of the knowledge about an outcome) is a case for preferring ignorance, at least for some period of time. An individual may thus choose to avoid information, such as learning the outcome of a lottery, to maximize the utility derived from suspense or excitement. Chew and Ho (1994) present evidence that hope (which they define as enjoyed maintenance of a state of uncertainty, often regarding a potential gain) is stronger for a low-probability gain. Formal utility-based models have been developed to capture the role of anticipatory emotions (Caplin and Leahy 2001; see also Dillenberger 2010; Köszegi and Rabin 2009).

Of particular relevance to deliberate ignorance are models in economics that build on the axiomatic approach to studying dynamic resolution of uncertainty proposed by Kreps and Porteus (1978). This approach brings preferences for the time at which uncertainty is resolved within the traditional economic utility-based framework. Extensions of this recursive expected utility model have led to multiple belief-based models in economics. For example, the framework has been extended to capture the role of anticipatory anxiety (Caplin and Leahy 2001) as well as the value of suspense and surprise (Ely et al. 2015). Note that the difference between "suspense" and "surprise" mirrors the distinction, raised above, between being in a state of ignorance over time (suspense) and the resolution, at a single time point, of uncertainty (potential surprise, depending on expectations). Ely et al. (2015) use their modeling approach to study the conditions under which noninstrumental information may be sought to maximize surprise and suspense. In their model, an individual may choose a particular information policy to achieve optimal suspense and surprise as beliefs evolve over time. Suspense arises when the uncertainty in an outcome is higher in a future period than in the current period. Surprise is defined as the extent to which beliefs change. This modeling framework captures an important determinant of deliberate ignorance. In many cases people may choose to delay resolving uncertainty to minimize negative and maximize positive emotions. Thus, their model can simultaneously capture cases in which people seek irrelevant (noninstrumental) and avoid relevant (instrumental) information.
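To illustrate the distinction, the following sketch (our own construction, in the spirit of Ely et al. 2015 rather than their exact definitions) measures surprise as the realized movement of beliefs and suspense as the variance of next-period beliefs:

    # beliefs[t] is the subjective probability of a binary state after
    # period t; surprise is how much the belief moved, suspense is the
    # expected squared movement of next period's belief.

    def surprise(beliefs):
        # realized belief change from one period to the next
        return [abs(b1 - b0) for b0, b1 in zip(beliefs, beliefs[1:])]

    def suspense(belief_now, next_beliefs_and_probs):
        # variance of next-period belief, given the current belief;
        # next_beliefs_and_probs lists (posterior, probability) pairs
        return sum(q * (b - belief_now) ** 2 for b, q in next_beliefs_and_probs)

    # Resolving all uncertainty now (posteriors 0 or 1) maximizes suspense
    # for a 50:50 prior ...
    print(suspense(0.5, [(1.0, 0.5), (0.0, 0.5)]))   # 0.25
    # ... whereas an uninformative period carries no suspense:
    print(suspense(0.5, [(0.5, 1.0)]))               # 0.0
    print(surprise([0.5, 0.7, 0.2, 1.0]))            # [0.2, 0.5, 0.8]

An information policy in this spirit chooses when beliefs are allowed to move: a suspense-loving agent may deliberately stay ignorant now so that larger belief movements remain possible later.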

Several other models also assume that there is a benefit to optimism and the anticipation of positive future events, such as passing an examination or becoming rich due to the success of one's investments. In Brunnermeier and

Parker (2005), individuals are assumed to hold incorrect and yet optimal (in the sense of being happiness-maximizing) overoptimistic beliefs. Over a long enough time period, the negative cost (for decision making) of being too optimistic may be outweighed by the positive utility of holding an erroneous belief. A decision not to seek information can therefore be motivated by the individual's desire to maintain a positive outlook on the future (cf. Köszegi 2010). The idea that "living with risk" can be associated with anxiety or hope is also explored in a model developed by Epstein (2008), who shows that such a model can predict information avoidance when an unfavorable outcome is very likely together with information seeking when a favorable outcome is anticipated. A related account was developed by Bénabou (2013), who describes a model of groupthink and shows that denial of negative information can either be contagious or self-limiting, depending on how harmful to others it is. More specifically, if the other members of an agent's group engage in deliberate ignorance (of bad news), they may act in a way that is either good for the agent (thus reducing the agent's own incentive to increase anticipatory utility by engaging in deliberate ignorance) or bad for the agent (increasing the agent's own incentive for deliberate ignorance). Individual incentives for deliberate ignorance, therefore, depend both on the accuracy of others' beliefs and on the probabilities of good and bad outcomes to collective action.

Anticipated regret may be a key affective consideration that underpins choosing not to know. Gigerenzer and Garcia-Retamero (2017) proposed a modeling framework to explain why people may choose to avoid information about both positive and negative outcomes. In their regret theory of deliberate ignorance, individuals are assumed to avoid the maximum possible anticipated regret (minimax regret criterion). In the model, the emotions associated with the possible outcomes of acquiring knowledge may encourage a person to prefer ignorance. For example, a person may choose not to know when they will die if one of the possible answers is that they will die very soon, thus causing high anticipated regret. The model can be adapted to account for positive emotions, as when the anticipated regret is based on the loss of suspense and surprise. An individual may choose to ignore new information to maintain these positive emotions.
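In sketch form, the minimax regret criterion can be written in a few lines; the payoff numbers below are invented for illustration and are not taken from Gigerenzer and Garcia-Retamero (2017):

    # Acts and their anticipated subjective payoffs in two states of the
    # world: finding out one will die soon is dreadful, finding out one
    # will die late is mildly valuable.
    acts = {
        #            (die soon, die late)
        "know":      (-100.0, 10.0),
        "not know":  (0.0, 0.0),
    }

    def minimax_regret(acts):
        # Regret of an act in a state = shortfall relative to the best act
        # in that state; choose the act whose worst-case regret is smallest.
        n_states = len(next(iter(acts.values())))
        best = [max(p[s] for p in acts.values()) for s in range(n_states)]
        regret = {act: max(best[s] - p[s] for s in range(n_states))
                  for act, p in acts.items()}
        return min(regret, key=regret.get), regret

    choice, regrets = minimax_regret(acts)
    print(choice, regrets)  # 'not know' {'know': 100.0, 'not know': 10.0}

With these numbers, knowing risks a regret of 100 (having learned the worst) while not knowing risks at most 10 (the foregone benefit), so the minimax-regret agent chooses ignorance.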

Deliberate Ignorance and Models of the Consistency of Beliefs and Identity Maintenance

In the previous section, we discussed how deliberate ignorance can be understood in terms of maximizing positive emotions (such as anticipation of favorable outcomes) and minimizing inconsistency between individuals' beliefs and their preferences and desires. However, deliberate ignorance may emerge from a preference for belief consistency, irrespective of the content of those beliefs. Closely related models assume that utility is gained by maintaining a


consistent or a positive identity. Here, we briefly review this class of model and show how consistency preference and ego protection models may shed light on deliberate ignorance. We note that a preference for consistency is different from assuming loss aversion in preferences over changes in beliefs, as is assumed in the model of Köszegi and Rabin (2009).

The idea that consistency matters has a long history in social and clinical psychology. For example, Heider's Balance Theory maintains that unbalanced structures of cognitions produce negative affect (Heider 1958). Cognitive dissonance is assumed to arise when attitudes, social norms, and behavior are not aligned (Festinger 1954), and an influential line of research argues that, rather than having direct access to our preferences, we infer them from our own behavior (Bem 1967; Wilson 2002). Effects of cognitive dissonance have also been noted in the economics literature (Mullainathan and Washington 2009), with Golman et al. (2016) providing a comprehensive review of the evidence for preferences for "belief consonance" across, rather than within, individuals. A concern for consistency could motivate deliberate ignorance in cases where the ignored information might threaten an existing worldview.

The phenomenon of confirmation bias can be seen as a form of deliberate ignorance. Confirmation bias occurs when people pay selective attention to evidence that is consistent with their existing attitudes or beliefs at the expense of ignoring information inconsistent with those attitudes and beliefs. Confirmation bias can thus be seen as an intermediate point on a continuum with complete attention at one extreme and deliberate ignorance at the other.

Although there are few formal models within the social and clinical literatures that have given rise to concepts such as cognitive dissonance, some relevant models exist in both economics and cognitive psychology. Many of these models have not been directly applied to deliberate ignorance, so we review them briefly as they provide a framework within which deliberate ignorance could be accommodated. Falk and Zimmermann (2017) describe a model in which individuals have a preference to behave consistently, and hence for having consistent beliefs (on the assumption that consistent beliefs are more likely to lead to consistent behavior). The preference is assumed to reflect the adaptive value of signaling strength to others and is captured in a utility term that represents observers' certainty about the relevant agent's beliefs. Deliberate ignorance offers one, although not the only, way to maintain consistency of one's beliefs. It may, in turn, be easier to behave consistently if one's beliefs are consistent. This approach adds to a number of economic models of cognitive dissonance (e.g., Konow 2000; Rabin 1994) to suggest that people can, at least to some extent, control their beliefs. They are consistent with the idea that one way in which dissonance may be reduced is through deliberate ignorance (e.g., Akerlof and Dickens 1982).

In Yarif's (2002) model, an individual's utility function is assumed to capture a trade-off between (a) utility gained from making good decisions, as in the standard model, and (b) utility gained from having consistent beliefs. Thus,


agents can "choose what beliefs to hold." Yarif shows that agents may prefer to avoid new information if the cost (to consistency) is greater than the benefit (to improved decision making).

Bénabou and Tirole (2011) present one of the many related economic models which accord an important role to identity. In such models, "identity" typically refers to a person's self-image as well as their feelings about themselves, and utility can be gained or lost by making choices that are or are not identity consistent. In the Bénabou and Tirole model, people infer their values from their past choices, and invest in and protect their identity. One way of doing so is by avoiding markets (or even thoughts about markets) that involve prices being placed on goods (e.g., sex, votes, bodily organs) which, according to the given self-identity, should not be bought and sold. These types of models thus suggest that deliberate ignorance may serve a role in value preservation and/or avoiding feelings such as guilt, disgust, and repugnance (Roth 2007).

Along related lines, deliberate ignorance may help people preserve their sense of moral worth. A large body of findings suggests that deliberate ignorance of the effects of one's actions on others can allow one to maintain one's self-image without sacrificing one's own payoffs. This relates to the idea of "moral wiggle room." Grossman and van der Weele (2017b) describe a model of the interplay between altruistic preferences, selfish tendencies, and selfish preferences. They use the model to determine the conditions under which there can be an "ignorance equilibrium," whereby ignorance limits availability of information to a person that can be detrimental to their self-image (see also Serra-Garcia and Szech 2018).

According to some models, deliberate ignorance may also arise as a result of a preference for believing that one has high ability, or "ego utility" (Köszegi 2006). Köszegi's model shows how a decision maker who is happy with the ego-related beliefs that they currently hold has an incentive to avoid receiving information that might threaten those beliefs: the "self-image protection motive" (see also Johnson and Fowler 2011). Other identity-related models show how deliberate ignorance may be used as a device for self-control. For example, Carrillo (2005) shows that an individual with time-inconsistent preferences may under some circumstances choose not to obtain information about the future consequences of actions (e.g., the adverse health consequences of smoking, or the pleasure derived from a certain type of consumption) because they fear that new information might cause them to behave in a less healthy way in the future (see also Carrillo and Mariotti 2000). For example, learning that cocaine consumption is enjoyable might lead to a present preference for immediate consumption but future abstinence. However, due to temporal discounting, longer-term overconsumption might be anticipated and hence avoiding knowledge about the present value of cocaine consumption may be beneficial.

Another approach to understanding deliberate ignorance can be found in parallel constraint satisfaction (PCS) modeling. PCS models are a class of connectionist models in which cognitive processes are represented by spreading


activation in a network of interconnected nodes. In the context of motivated cognition, nodes are taken to represent goals, actions, or beliefs, and the connections between these nodes reflect the strength of association or level of compatibility. PCS models involve feedback relations which allow for satisfaction of multiple simultaneous constraints imposed on a network (Read et al. 1997). The PCS process attempts to find the highest level of organization (or lowest energy) in activation of the nodes, given specific relations between the nodes (imposed by the researcher). The level of organization or harmony in the network has been interpreted as a measure of cognitive consistency (e.g., coherence of beliefs).

How could a PCS approach shed light on deliberate ignorance? Thagard (1989) describes a model in which the relation between propositions and observations contributes to the overall coherence of an entire system of beliefs (see also Thagard 2006). Using a neural network implementation, any two propositions can be mutually excitatory if they are coherent with each other (if A and B cohere, then B and A cohere) or mutually inhibitory if they are not coherent (if A contradicts B, then A and B do not cohere). Simulating multiple propositions and observations, the model has been used to explain phenomena such as the acceptability of Copernicus's theory of the solar system (Nowak and Thagard 1992). With an extension of the model, it is possible to account for seemingly irrational beliefs. For example, the emotional value of the links between propositions can lead to a situation in which individuals engage in self-deception (Sahdra and Thagard 2003). In another extension, coherent propositions about greenhouse gases and their role in global climate change can be rejected if they conflict with one's values, such as the importance of a small government. Thus, a coherence model provides a natural perspective on cases of deliberate ignorance, such as when an individual actively chooses to avoid information that could disturb the coherence of their existing beliefs. If we assume that coherence and self-deception are important determinants of subjective well-being (Sahdra and Thagard 2003), then it may be psychologically beneficial to avoid certain sources of information to avoid the risk of challenging the existing belief system.

A final type of model that we consider under this discussion of consistency and identity focuses on the tension that may arise when people's attitudes and attitude-related beliefs are in conflict with a social norm. Under such conditions, people may choose to ignore sources of evidence (e.g., newspaper articles or people with uncongenial opinions) to reduce the tension between expressing attitudes that are consistent with their own beliefs and expressing attitudes that are consistent with the social norm. In social sampling theory (Brown et al. 2019), individuals are assumed to have their own authentic private attitudes (e.g., political attitudes) which are not visible to others. Individuals are also assumed to be sensitive to the distribution of attitudes held by a social group. The attitude people decide to express publicly represents a trade-off between two opposing forces: an authenticity preference which motivates an individual to


act in a way that is in line with their underlying beliefs and attitudes, and social extremeness aversion which discourages them from endorsing attitudes that are too different from those of others in a social context. Since both forces are a source of utility loss, an individual will express utility-maximizing attitudes that will reflect a compromise between personal and social beliefs or attitudes. In such a model, utility-maximizing agents have an incentive to seek information that is consistent with their privately held attitude and to avoid information that challenges it (e.g., by preferring the company of similarly minded individuals). In this way they can better satisfy their authenticity preference without being socially extreme.
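As a rough illustration, the expressed attitude can be computed as the minimizer of a combined loss. The quadratic form below is a simplification of our own; the published social sampling model is rank-based rather than quadratic.

    import numpy as np

    def expressed_attitude(private, others, a=1.0, b=1.0):
        # Loss = a * authenticity violation + b * social extremeness;
        # the expressed attitude minimizes the combined loss over a grid.
        grid = np.linspace(-1.0, 1.0, 2001)
        authenticity = a * (grid - private) ** 2
        extremeness = b * np.mean((grid[:, None] - np.asarray(others)) ** 2, axis=1)
        return grid[np.argmin(authenticity + extremeness)]

    # A privately extreme individual (+0.9) in a moderate group publicly
    # shades toward the group:
    print(expressed_attitude(0.9, others=[0.0, 0.1, -0.2]))  # ~0.43

Because the social term depends on whose attitudes are sampled, an agent who deliberately avoids uncongenial others (or the evidence they supply) shifts the compromise back toward their private attitude.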

Maximizing Quality of Decision Making

The working definition of deliberate ignorance that we have used refers to not seeking or using information when it might be beneficial to do so. As noted in the introduction, however, what counts as "beneficial" is relative to a combination of an agent's goals and the environment in which they live. In particular, the costs and benefits of a strategy may motivate self-perception bias, which in turn can motivate deliberate ignorance. For example, Johnson and Fowler (2011) describe a model in which it is advantageous for an individual to overestimate their own ability. This can occur when (a) an agent's decision whether or not to enter a contest for resources is made on the basis of the agent's beliefs about their own ability, relative to the ability of the other potential combatant, and (b) the positive payoff from winning a contest is much greater (in absolute terms) than the negative payoff that would result from losing the contest. The suggestion that an asymmetrical payoff matrix may motivate self-delusion is distinct from the idea that particular patterns of behavior, such as consistency, may serve a useful role in signaling abilities to others (Falk and Zimmermann 2017). The point here is that payoff contingencies in the environment may lead an adaptively rational agent to perceive the world in a non-veridical way. One way of achieving this is through a form of deliberate ignorance that involves making estimates of some quantity or association using fewer data than are available, even if there is negligible cost to obtaining the additional data.

"Quality of decision making" could be quantified in terms of minimizing processing cost or decision time, minimizing false positive errors, or minimizing false negative errors. Several studies from the judgment and decision-making literature focus on avoiding false negatives, to which we now turn.

One example concerns the use of small samples in estimating payouts, which could reflect a deliberate strategy. It is well established in the psychology of judgment and decision making that many judgments of social and economic quantities are based on small samples retrieved from one's memory or the immediate environment (e.g., Fiedler and Juslin 2006). The "small samples" assumption has been used to account for a variety of phenomena,


including stereotype formation, illusory correlations, confirmation bias, polarization, and overconfidence. The use of small samples may reflect the increasing cost of expanding sample size, combined with diminishing returns for accuracy of judgment. However, basing estimates on small samples may be adaptive even in the absence of such costs, and hence be an instance of deliberate ignorance. For example, statistical modeling has shown that small samples may be better for detecting small associations in the environment, albeit at the expense of false positives or less accurate assessment of the strength of the association than would be obtained with a larger sample (e.g., Fiedler and Kareev 2006). Thinking in ecological terms, it is easy to envisage circumstances in which it is more important to become aware, at the earliest possible stage, of possible contingences in the world than it is to avoid developing beliefs in relationships that do not in fact exist. Deliberate ignorance (in the form of choosing smaller samples than available) could reflect this need, although the fact that small samples may be better for some purposes is not in itself an instance of deliberate ignorance. (We note that, as mentioned in the introduction, we are including cases where ignorance may reflect adaptive considerations even in the absence of conscious deliberation of benefits of information.)

The "decisions from experience" paradigm provides another case where simulations have shown that deliberate ignorance might help. People seem to choose between payoff distributions on the basis of small (ca. seven) samples from each, even when the cost of obtaining larger samples is relatively small. Hertwig and Pleskac (2010) show that small samples help in choosing between payoffs, because small samples amplify the difference between the earnings associated with uncertain payoff distributions. Their model points to another case where ignorance (in this case of a wider sample) could make choices easier by making options more distinct, even if additional information could be acquired at little or no cost.

Other models of decision making that illustrate how deliberate ignorance can lead to improved performance can be found in Simple Heuristics That Make Us Smart by Gigerenzer et al. (2000). Researchers working within this tradition have found that decision rules are, under many circumstances, more successful when they ignore part of the information. In particular, strategies such as tallying (i.e., counting up the number of cues that favor an option, without weighting them) may work well even though part of the information (i.e., cue weighting) is being ignored. The use of heuristics such as tallying may seem far removed from traditional cases of deliberate ignorance, as they do not seem to involve deliberative and active decisions to ignore certain information. However, as we noted in our introductory remarks, there is no clear dividing line to be made between models which do and do not assume conscious deliberative processing in ignoring information. Therefore, we include them here as an example of how discarding available information can improve judgment and decision making.


For an example of how forgetting may aid heuristic inference, see Trimmer et al. (this volume).
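The amplification effect reported by Hertwig and Pleskac (2010) is straightforward to reproduce by simulation. The sketch below uses an invented pair of gambles (it is not the authors' code): it draws n outcomes from a risky option and compares the experienced sample mean with a safe option's value.

    import random

    def experienced_difference(n_draws, trials=20000, seed=1):
        # Gamble A pays 4 with probability 0.8 (else 0), expected value 3.2;
        # the safe option B pays 3 for sure. Returns the mean absolute
        # difference between A's sample mean and B across simulated samples.
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            mean_a = sum(4 if rng.random() < 0.8 else 0
                         for _ in range(n_draws)) / n_draws
            total += abs(mean_a - 3.0)
        return total / trials

    for n in (2, 7, 50):
        print(n, round(experienced_difference(n), 2))
    # The printed difference shrinks as n grows: small samples make the
    # options look further apart than they are (true difference: 0.2).

With only a few draws the options appear much more distinct than the true difference of 0.2 between their expected values, which is what makes the choice easier for a small-sample decision maker.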

Conclusion

In this chapter, we have reviewed a number of different approaches to modeling deliberate ignorance from both economics and psychology. The models reviewed identify a number of different mechanisms through which deliberate ignorance may occur: deliberate ignorance may reflect a desire to maintain beliefs that are consistent with preferences, a desire to maintain a consistent identity or pattern of behavior, a desire to maintain a positive self-image, or a desire to maximize the quality of judgment and decision making relative to a particular set of goals and within an environment of a given structure.

Because the definition of deliberate ignorance restricts it to cases where the cost of acquiring information is negligible, we have omitted from this discussion a broad class of models, mainly developed within economics, on "rational inattention" (e.g., Sims 2003). These models typically assume that there is a cost to acquiring information, and hence account for cases where decision-relevant information is ignored by an optimal decision maker because the costs of acquiring the information exceed the expected benefits of having it.

What, if any, general conclusions can be drawn? It is evident that many models may shed light on the phenomenon of deliberate ignorance. Indeed, the various models that we have described above cover, between them, most of the types of deliberate ignorance included in the taxonomy developed by Hertwig and Engel (this volume, 2016). One of the points that they make is that psychology needs to pay more attention to deliberate ignorance, and in this context, it is noteworthy that most of the models we have identified are to be found within the economic, rather than the psychological, literature. There is clearly a need for psychologists interested in deliberate ignorance to pay more attention to a number of these papers in the economics tradition. Indeed, a casual examination of the articles that cite economic models of evidently psychological phenomena (e.g., cognitive dissonance, identity, optimism, confirmation bias) reveals a striking absence of articles within mainstream psychology journals. We also note that existing psychology-based taxonomies tend to place more emphasis on fairness motivations than has been seen in models, while the reverse is the case for considerations surrounding identity maintenance. More generally, there is something of a mismatch between emphases in the modeling and non-modeling literature: the former devotes relatively more attention to maintenance of identity and consistency whereas the latter attends more to issues of ensuring fairness.

An open question is the extent to which a unified account of deliberate ignorance can or should be sought. We have tried to show that there are some common features underlying the classes of models. Some emphasize utility


that relates to the content of beliefs, whereas others place the emphasis on consistency of beliefs with either each other or with a desired self-image. We have treated these models (those concerned with identity and those concerned with belief consistency) as a single category because beliefs about one's own identity can be thought of simply as another set of beliefs that must be consistent with other beliefs. We note, however, that beliefs about self and identity seem likely to be ones where the content is particularly important, and hence models that refer to utility gained from the content of beliefs may be linked to models that focus on consistency of beliefs per se. Still other models focus on the length of time over which emotions, such as fear or optimism, are present. At the present stage of theoretical understanding, these appear to be very different and individually plausible sources of deliberate ignorance, and hence the existence of a variety of models adds support to the idea that there is no single determinant of deliberate ignorance (or at least no determinant that is not so general as to be vacuous). Moreover, there may be domain specificity. It is an open question as to whether work on choice under uncertainty in the laboratory will generalize to feelings about health states. However, there is clearly much scope for further competition between and unification of models within each of the classes that we have described, and for the models to be brought into contact with a rich variety of the empirical and theoretical psychological literature.

There is an intuitive distinction between the psychology involved in acquiring information and the psychology of being in the state of having that information (with the latter bringing the need to consider the time course of adaptation). The acquisition–consumption distinction has already been emphasized in the literature of consumer choice (Hsee et al. 2009); we suggest that models will need to pay more attention to this distinction as it relates to deliberate ignorance than has hitherto been the case. We also note a potential link with the exploration–exploitation dilemma; we can imagine that an organism might be deliberately ignorant to remain in an exploitation state.

Finally, one further avenue for future research is the importance of prior beliefs. In many cases, people must have some idea about the likely valence of information before deciding whether or not to reduce uncertainty by accessing the relevant information. Prior expectations are incorporated in some, but by no means all, of the models we have discussed above.

Acknowledgments

This study was supported by the Economic and Social Research Council (U.K.) [grant number ES/P008976/1], the Leverhulme Trust [grant number RP2012-V-022], and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program [grant agreement No 788826]. We thank numerous members of the Forum, in particular the editors, for their detailed comments on earlier versions of this manuscript.

9

The Evolution of Deliberate Ignorance in Strategic Interaction

Christian Hilbe and Laura Schmid

Abstract

Optimal decision making requires individuals to know their available options and to anticipate correctly what consequences these options have. In many social interactions, however, we refrain from gathering all relevant information, even if this information would help us make better decisions and is costless to obtain. This chapter examines several examples of "deliberate ignorance." Two simple models are proposed to illustrate how ignorance can evolve among self-interested and payoff-maximizing individuals, and open problems are highlighted that lie ahead for future research to explore.

Introduction

Information is a precious resource. Common sense suggests that we make better decisions the more we understand our options and their consequences. Yet people sometimes deliberately choose not to gather relevant information, even if this information is readily available and effective to make better choices (see Hertwig and Engel, this volume, 2016). For instance, people give considerable amounts to charities, but they rarely consider how efficient a charity is, although there are websites that allow for easy comparison (Hoffman et al. 2016). Managers often avoid arguments that run counter to their previous decisions, although such arguments would help them to abandon projects with a low probability of success in a timely fashion (Deshpandé and Kohli 1989). When asked for a costly favor, subjects in laboratory experiments sometimes avoid retrieving information about the exact cost, especially if information retrieval is observable by outside parties (Jordan et al. 2016). These and other instances of ignorance can be the result of different cognitive processes. Individuals may physically avoid learning a piece of evidence, they may stop paying attention, they may deliberately misconstrue evidence, or they may try


selectively to forget information that is considered unpleasant (Golman et al. 2017). Here we wish to sketch how mathematical models can illuminate such paradoxical behaviors and discuss why some of these behaviors seem more puzzling than others.

In the following, when we refer to "ignorance," we mean that an individual does not know of a certain fact which, if known, could affect some of the individual's future decisions. We say this ignorance is "deliberate" if the individual had a chance to learn this fact but chose not to. In particular, deliberate ignorance requires individuals to be aware of gaps in their knowledge, and they must have the means to resolve these gaps. However, we do not require deliberate ignorance to be the result of a calculated process in which individuals consciously weigh the advantages and disadvantages of further information. Instead, we wish to emphasize that deliberate ignorance can endogenously evolve among individuals who repeatedly encounter similar decision problems, and who adapt their strategies based on simple heuristics.

Some cases of deliberate ignorance are amenable to a straightforward economic explanation. In many cases, individuals simply avoid information because it seems irrelevant or because its expected benefits do not warrant the search costs (Stigler 1961). For example, many of us will not know how to react properly if we are attacked by a bear, presumably because such an event appears too unlikely to justify even one minute of Internet search.1 Other instances of deliberate ignorance, in which information is essentially costless, are subtler to fathom. In addressing these more interesting cases, we find it useful to distinguish between strategic and nonstrategic ignorance, which is a somewhat coarser distinction than the one proposed by Hertwig and Engel (this volume, 2016).

In models of strategic ignorance, information is typically taken as a means to an end. It has no intrinsic value, but it allows individuals to evaluate their options better. Instances of strategic ignorance include cases in which individuals avoid information in order to commit themselves credibly to a certain path of action (Schelling 1960), when they want to avoid leaking information or biasing themselves in a negotiation (Auster and Dana, this volume), or when they exploit a moral wiggle room when making morally ambiguous decisions (Dana et al. 2007). In models of nonstrategic ignorance, the mere possession of information (or the way by which it was obtained) may affect a person's well-being. For example, people might avoid information due to regret aversion or dissonance avoidance, even if the information itself would allow them to take actions that would improve their future material payoffs.

In the following two sections, we introduce two simple models that illustrate some of the issues that arise when modeling the evolution of strategic ignorance. In the final section, we briefly discuss models of nonstrategic ignorance, which are somewhat more difficult to grasp from an evolutionary perspective.

1 It turns out that the proper reaction depends on the species of bear. The U.S. National Park Service advises to play dead when being attacked by a grizzly, yet to escape to a secure place when being attacked by a black bear; see https://www.nps.gov/subjects/bears/safety.htm (accessed May 8, 2019).

Ignorance as a Commitment Device

As popularized by Schelling (1960), players can use self-commitment as a powerful tool to improve their strategic position. The idea is that by eliminating some of their strategic options, players can enhance the credibility of pledges that would otherwise be viewed simply as cheap talk. Self-commitment can take various forms, ranging from the proverbial burning of bridges to disabling one's steering wheel in the game of chicken. As noted by Schelling, avoiding certain kinds of information can act as a commitment device as well, such as when second movers deliberately ignore the action of the first mover.

To illustrate the value of deliberate ignorance as a commitment device, let us consider the "envelope game" (Bear and Rand 2019; Hilbe et al. 2015; Hoffman et al. 2015). The envelope game is a stylized model used to illustrate the tensions that arise when players cooperate for opportunistic reasons (when cooperation happens to be to their own advantage) or out of principle (regardless of what the current incentives for cooperation are). The game involves two players and has four consecutive stages (Figure 9.1). In the first stage, a chance move by nature (N) determines whether the players face an environment in which cooperation carries a high (H) or a low (L) cost. The outcome of this chance move cannot be observed directly; the players only know that on average they face a high-cost environment with probability p. In the second stage, player 1 has the option to learn the current state of the environment at no cost. Based on player 1's decision, player 2 can choose whether to accept or reject the pairwise interaction in the third stage. If the interaction is rejected, the game is over and both players receive a default payoff of zero. Otherwise, if the interaction is accepted, the game enters a fourth stage in which player 1 chooses whether or not to cooperate with player 2. If player 1 cooperates, each player i receives a benefit b_i > 0. However, player 1 also needs to bear a cost c_s. This cost depends on the current state, s ∈ {L, H}, of the environment, with c_L < c_H. If player 1 defects, each player i receives a payoff of d_i. For the game to be interesting, we assume that payoffs satisfy the following two conditions. First, player 2 prefers cooperative interactions to no interaction, but strongly opposes interactions with a defector:

b_2 > 0 > d_2.  (9.1)

This condition can be seen as a definition of what it means to "cooperate": taking an action that is to the co-player's advantage even if it may be individually costly. Second, P1 prefers to cooperate only in a low-cost environment:

b_1 - c_L > d_1 > b_1 - c_H.  (9.2)

[Figure 9.1 appears here: the game tree of the envelope game.]

Figure 9.1 The envelope game is an asymmetric game with incomplete information between two players, P1 and P2, and four stages: (1) Nature (N) determines randomly whether players are in a high-cost (H) or low-cost environment (L). Neither of the two players knows the state of the environment (as illustrated by the closed envelope). (2) P1 decides whether or not to look into the envelope to learn the state of the environment. (3) Based on whether the envelope has been opened, P2 chooses whether or not to accept P1 as an interaction partner. If P2 rejects P1, the game is over and both players receive no payoff. (4) If accepted, P1 decides whether to cooperate or defect. If P1 has opened the envelope in stage 2, this decision may be contingent on the realized cooperation cost. Payoffs are such that P2 always prefers P1 to cooperate. However, P1 only has an incentive to do so in a low-cost environment. Dashed lines represent information sets, connecting nodes that the respective players cannot distinguish, given the information they have.

This condition guarantees that the information in the second stage is useful in subsequent stages. Alone, player 1 would prefer to learn the state of the environment and to cooperate conditionally. This envelope game can be solved by backward induction (see Appendix 9.1 for details). Depending on the probability p of a high-cost environment, there are three possible outcomes. First, if p is comparably small, the players are predicted to settle at an "opportunism equilibrium." In this equilibrium, player 1 learns the state of the environment, player 2 accepts the interaction, and player 1 cooperates whenever his own interest is best served (i.e., only if the cost is low). The rationale for this equilibrium is straightforward: As long as high-cost environments are rare, player 2 accepts co-players who occasionally find it worthwhile to defect.
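The case distinction can be compressed into a short backward-induction routine. The sketch below is our own illustration with invented parameter values (the formal analysis is in Appendix 9.1); it anticipates the remaining two equilibria discussed next.

    def solve_envelope_game(p, b1, b2, d1, d2, cL, cH):
        # Backward induction for the envelope game (a minimal sketch).
        # Assumes the payoff conditions in the text: b2 > 0 > d2 and
        # b1 - cL > d1 > b1 - cH.

        # Stage 4: a looker cooperates only when the cost is low; a
        # non-looker cooperates iff expected cooperation payoff beats d1.
        nonlooker_cooperates = b1 - (p * cH + (1 - p) * cL) >= d1

        # Stage 3: player 2 accepts if her expected payoff is non-negative.
        accept_looker = (1 - p) * b2 + p * d2 >= 0          # opportunist
        accept_nonlooker = b2 >= 0 if nonlooker_cooperates else d2 >= 0

        # Stage 2: player 1 compares the two continuation payoffs.
        look_payoff = ((1 - p) * (b1 - cL) + p * d1) if accept_looker else 0.0
        if accept_nonlooker:
            no_look_payoff = (b1 - (p * cH + (1 - p) * cL)
                              if nonlooker_cooperates else d1)
        else:
            no_look_payoff = 0.0

        if look_payoff == no_look_payoff == 0.0:
            return "no interaction equilibrium"
        if look_payoff >= no_look_payoff:
            return "opportunism equilibrium"
        return "ignorance equilibrium"

    for p in (0.3, 0.5, 0.8):  # parameter values invented for illustration
        print(p, solve_envelope_game(p, b1=4, b2=4, d1=1, d2=-6, cL=1, cH=5))

With these invented parameters, the routine returns the opportunism, ignorance, and no interaction equilibrium for p = 0.3, 0.5, and 0.8, respectively, mirroring the three regimes derived below.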


Second, if the high-cost probability p is very large, player 2 will always reject the interaction, independent of whether or not player 1 chose to learn the current environment. Again, this "no interaction equilibrium" is straightforward to rationalize. If the cost of cooperation is typically high, player 2 either expects player 1 to defect by default (if player 1 does not know the current environment) or expects player 1 to be sufficiently likely to defect (if player 1 learned the environmental state in stage 2). In between these two extremes,

b_2 / (b_2 - d_2) < p < (b_1 - d_1 - c_L) / (c_H - c_L),  (9.3)

there is an "ignorance equilibrium." In this equilibrium, player 1 deliberately ignores the current state of the environment in the second stage. This leads player 2 to accept the interaction in the third stage, and player 1 cooperates in the final stage. The second inequality in Equation 9.3 ensures that high-cost environments are sufficiently rare such that player 1 cooperates by default. At the same time, the first inequality in Equation 9.3 ensures that high-cost environments are too common (or too harmful) for player 2 to accept purely opportunistic co-players.

For these predictions to be sensible, we do not need to require that players derive their strategies through rational calculation. Instead, it suffices that individuals adapt their strategies over time based on the past success that they have had. To illustrate this point, Figure 9.2 shows the dynamics of the envelope game in populations of players who adopt new strategies by imitating peers with a higher payoff (Traulsen and Hauert 2009; see Appendix 9.1 for the exact setup). These simulations recover the previously predicted equilibrium outcomes. In particular, for intermediate values of the probability p, we observe that subjects in the role of player 1 learn to ignore the costs of cooperation and they tend to cooperate unconditionally. In the end, players act as if they performed Bayesian updating and backward induction although they never make the respective calculations.

[Figure 9.2 appears here. Panels (a)–(c) plot the abundance of looking and the abundance of cooperation over time (0 to 30,000) for p = 0.3, p = 0.5, and p = 0.8; panel (d) plots the abundance of looking against the probability of the high state, p.]

Figure 9.2 Simulating the evolution of ignorance in the envelope game, where there are two distinct populations. Members of population 1 are randomly matched with members of population 2 to interact in the envelope game (members of population i act in the role of player i). Each player is equipped with a strategy. The strategies tell the players what to do in each stage in which they need to make a decision. After these interactions, strategies with a high payoff reproduce within the respective population, either because successful players have more offspring (genetic inheritance) or because they are imitated more often (cultural inheritance). Panels (a)–(c) show three representative trajectories of this evolutionary process. If high-cost environments are sufficiently rare (i.e., if p is sufficiently small), players in population 1 tend to open the envelope and cooperate if the cost is low. If there is an intermediate risk of a high-cost environment, player 1 cooperates without looking. Finally, if costs are likely to be high, player 2 typically rejects all interactions no matter whether player 1 looked. In panel (d), black dots show time averages of simulation runs for different probabilities of a high-risk environment. The dashed vertical lines in panel (d) indicate the three different equilibrium outcomes according to backward induction.

These results allow several observations:

When ignorance pays. According to our results, deliberate ignorance is most likely to emerge when p is intermediate; that is, when the hidden information would actually be most valuable to player 1. This finding underlines the function of ignorance as a commitment device. It allows player 1 to persuade others to engage in an interaction that they otherwise would be reluctant to accept. In addition, the inequalities (Equation 9.3) suggest that strategic ignorance is most likely to emerge if defection is very costly to player 2 (i.e., if the value of d_2 is small) and if cooperation is relatively cheap for player 1 even in a high-cost environment (i.e., if b_1 - c_H and d_1 are of similar magnitude).

Ignorance and altruism. In the ignorance equilibrium, player 1 will sometimes cooperate although his immediate material incentives happen to make cooperation unprofitable. To an outside observer who only observes the final stage of the game and the resulting payoffs, these instances of cooperation will appear as if player 1 acts altruistically. Altruistic cooperation has been observed in numerous experiments on behavior in social dilemmas. Subjects sometimes cooperate even if interactions are one-shot and fully anonymous (Dawes et al. 2007; Fehr and Fischbacher 2003). To explain these apparently irrational instances of cooperation, researchers typically argue that subjects have social preferences (Bolton and Ockenfels 2000; Fehr and Schmidt 1999), or that subjects make their decisions based on simple heuristics (Bear and Rand 2016; Delton et al. 2011; Jagau and van Veelen 2017). These heuristics are thought to be generally adaptive in the players' natural environment even if they may misfire in exceptional cases (Fawcett et al. 2014; Gigerenzer and Goldstein 1996; Hertwig et al. 2013). The above model gives an alternative interpretation


If intuitive cooperators are considered more reliable, people may, in turn, have an incentive to avoid learning the details of a strategic interaction and to cooperate instinctively (Hoffman et al. 2016). In line with this argument, laboratory experiments suggest that subjects tend to be more cooperative when they need to make quick decisions (Rand et al. 2012), and uncalculating cooperators are considered more trustworthy (Jordan et al. 2016). Conversely, Ma et al. (2018) find that trustees in a trust game tend to be more generous toward those investors who were less inquisitive about the trustee’s past history.

Observable ignorance. In the above model, we have assumed that player 2 observes whether or not player 1 decided to learn the state of the environment. This observability is crucial for our results (in fact, for all self-commitment models). If there were an option to secretly learn the current state, it would be a weakly dominant action for player 1 to do so. In equilibrium, player 2 would expect player 1 to know the environmental state and, as a consequence, would reject all interactions (provided the first inequality in Equation 9.3 holds). In particular, although people can commit themselves by publicly refusing to learn some relevant information, they cannot do so by engaging in similar acts of internal self-commitment, such as forgetting relevant information or privately misconstruing it.

Ignorance in the presence of communication. For the standard version of the envelope game depicted in Figure 9.1, we have assumed that the players cannot directly communicate with each other. In particular, we have assumed that player 1 is unable to inform player 2 about the present state of the environment after opening the envelope. If such communication were possible, the equilibrium predictions may change, depending on the defection payoff d1 of player 1. If d1 < 0, player 1 prefers not to interact with player 2 rather than to defect. Thus, when player 1 finds out that the present costs of cooperation are high, it is in his own interest to communicate this fact truthfully to his co-player, such that player 2 can abort the interaction. If communication is possible, there is thus only one equilibrium when d1 < 0. In this equilibrium, player 1 always looks, communicates the result to player 2, and player 2 reacts accordingly. In contrast, communication has no effect when d1 > 0. In that case, player 1 always prefers to interact. As a result, even when players are in a high-cost environment, player 1 has an incentive to pretend the cooperation cost is low. Thus, any message of player 1 represents cheap talk. Knowing this, player 2 does best by ignoring all communication. Consequently, we obtain the same equilibria as in the no-communication case.

Ignorance versus deliberate ignorance. For cooperation to evolve under the conditions of Equation 9.3, it is actually not necessary that player 1 actively decides to avoid information. Instead, we obtain the same cooperation rates if player 1 were not even given the option to learn the environmental state (i.e., if stage 2 were removed from the game altogether). That is, for intermediate values of p, the evolution of cooperation only requires ignorance, not deliberate


ignorance. In the above model, deliberate ignorance only emerges as a byproduct, as a means to ensure that player 1 remains uninformed. In contrast, in the following we present a variation of the envelope game in which the active choice not to know is crucial.

Ignorance as a Costly Signal

The following model variation is based on the idea that different types of player 1 may have different incentives to act opportunistically (Pérez-Escudero et al. 2016). In that case, the decision of player 1 to ignore relevant information may communicate not only the player’s commitment but also the player’s type. To incorporate this idea, we introduce an additional stage to the envelope game. In this stage 0, nature randomly determines the type of player 1. We assume that with probability q, player 1 is “unfavorable” (U), whereas with probability 1 − q, player 1 is “favorable” (F). Player 1 always knows his own type, but player 2 only knows the general probability q. The two types differ in their respective likelihood of facing a high-cost environment and in their incentives to cooperate. Specifically, we assume that favorable players encounter a high-cost environment in stage 1 with probability pF, whereas the respective probability for unfavorable players is pU > pF. The subsequent stages of the envelope game remain unchanged: in the second stage, player 1 decides whether or not to learn the state of the environment (this decision may now depend on player 1’s type); in the third stage, player 2 decides whether to engage in an interaction (depending on whether or not player 1 looked at the state); and in case of an interaction there is a fourth stage in which player 1 decides whether to cooperate. The payoffs of the players may now also depend on player 1’s type; they are b_i^t after cooperation and d_i^t after defection, with i ∈ {1, 2} and t ∈ {F, U}.

Again, we can analyze this model by characterizing the possible equilibria and by performing evolutionary simulations. The respective results are illustrated in Figure 9.3. There we find four different regimes, three of which correspond to the cases observed in the commitment model:

• An opportunism equilibrium, in which both types of player 1 learn the environmental state, are accepted, and cooperate only if the cost is low
• A no interaction equilibrium, in which player 2 rejects all co-players, irrespective of whether or not they learned the state of the environment
• An ignorance equilibrium, in which both types of player 1 commit themselves by not learning the environmental state and by cooperating by default
• A “partial ignorance equilibrium,” in which only a favorable player 1 decides not to learn the state of the environment, is accepted, and cooperates by default; the unfavorable player learns the current environmental state and is rejected by player 2

Figure 9.3 Evolution of ignorance in a game with uncertainty about the player types. There are two types of player 1: favorable (F) and unfavorable (U). Favorable players are more likely to encounter a low-cost environment. We consider the evolutionary dynamics that arise in two distinct populations engaged in the envelope game. Population 1 consists of two subpopulations of fixed size, corresponding to the favorable and unfavorable players. Each player 1 knows its own type, but players in population 2 only know the relative abundance of the two types. As before, players are randomly matched, and strategies that yield a higher payoff are more likely to spread within the respective (sub)population (see Appendix 9.1). Depending on the parameter values (the unfavorable player’s probability pU on the horizontal axis, the favorable player’s probability pF on the vertical axis, and the color scale showing the proportion of games that end with cooperation), we observe that evolution leads to one of four possible equilibria. In the “partial ignorance equilibrium,” only favorable players avoid looking into the envelope, whereas unfavorable players look. Player 2 accepts non-looking co-players and rejects all others.

In contrast to the pure commitment model considered previously, the active choice of a player to avoid information can now be crucial. Only if the players themselves can choose whether or not to ignore their environment can they differentiate themselves from others in the partial ignorance equilibrium. Whereas favorable players can afford to ignore relevant information, as they are likely to end up in a state in which cooperation is mutually beneficial anyway, unfavorable players cannot.


As in the baseline model, however, we note that strategic ignorance can only be used as a signal if it is observable. If there were secret ways to learn the true state of the environment, the signal of publicly ignoring information would no longer be costly, and thus no longer reliable. Thus, favorable players can only sustain a partial ignorance equilibrium if they are able to make their ignorance verifiable.

Discussion

Different Dimensions of Strategic Ignorance

The above models illustrate two different mechanisms for how strategic ignorance can emerge: as a means of self-commitment and as a signal. There are, however, further mechanisms. The model of Dubey and Wu (2001), for instance, explores how work performance depends on the intensity of monitoring when a reward is promised to the most productive worker. Workers differ in their baseline productivity. Moreover, their output is subject to random shocks. In such a scenario, employers benefit from showing minimal scrutiny. If, instead, employers collect too much data, workers with a low baseline productivity no longer have an incentive to exert effort: due to the law of large numbers, their chance of getting the reward approaches zero. Kareev and Avrahami (2007) ran a set of experiments that confirms that subjects perform better under minimal, rather than full, scrutiny. In other words, employers are better off not finding out as much as they theoretically could. As another example in which deliberate ignorance occurs, Dana et al. (2007) describe situations in which subjects decide not to learn how their actions affect others. By removing transparency, subjects create a moral wiggle room that allows them to be more selfish. Despite these examples, however, we seem to lack a general theoretical framework that delineates when strategic ignorance pays, and which kind of information is most profitable to ignore.

Deciding Not to Learn versus Deciding Not to Convey

There is an interesting flipside to the problem of deliberate ignorance. Thus far we have considered scenarios in which a focal player decides to remain uncertain about specific aspects of a strategic interaction, although a naive understanding would suggest that resolving the uncertainty should be in the focal player’s interest. There are, however, other scenarios in which a focal player deliberately decides to leave others uncertain about certain aspects, even if it seems in the focal player’s interest to let them know. As an example, many donors give substantial amounts to charities while deliberately withholding their name. According to the Chronicle of Philanthropy, in 2017 there were at least 36 anonymous donations of at least $5 million each in the United States alone.


If people donate to gain reputation benefits, why would they decide to leave others ignorant about their good deeds? One possible explanation is that by donating anonymously, donors avoid being harassed by other charities. However, this argument alone does not explain why anonymous donations are often considered more virtuous. To account for such behavior, Bénabou and Tirole (2006b) propose a signaling model in which players may have three different motives for choosing between their actions. The players’ decisions may depend on the intrinsic value they attribute to an action, on any extrinsic incentives (such as subsidies), and on the action’s reputational value. When players differ in the relative weight they attribute to these three motives, good actions might be suspected of being driven by appearances only. In some cases, players may thus prefer their good actions to be unknown. Similarly, in the signaling model of Hoffman et al. (2018), donors may sometimes have an incentive to deliberately “bury” their positive signals. If such buried signals are eventually revealed, observers learn not only of the donor’s good deeds but also that the donor was not interested in public appraisal. These observations suggest that deliberate ignorance may be just one aspect of a more general class of social quirks, revolving around how we strategically acquire (or ignore) and transmit (or withhold) beneficial information.

Nonstrategic Ignorance

Above, we have given an evolutionary account for why individuals may engage in strategic instances of deliberate ignorance. In the ignorance equilibria, individuals prefer not to know because their ignorance eventually helped them to secure higher material benefits. However, there are also various examples in which individuals avoid information although their ignorance may come at a substantial cost to their long-run welfare, such as when they avoid learning the results of a medical diagnosis (for further examples, see Ellerbrock and Hertwig, Auster and Dana, as well as MacCoun, this volume). Existing models that account for such behaviors typically assume that subjects have nonstandard preferences (see also Trimmer et al. as well as Brown and Walasek, this volume). Individuals value not only their material payoffs, but also the information they have and how they obtain it. For example, the model by Golman and Loewenstein (2018) can account for many psychologically intuitive behaviors, like natural curiosity and the ostrich effect, by assuming that individual utility depends not only on material payoffs, but also on beliefs and the attention devoted to them.

However, while models based on nonstandard preferences give very reasonable proximate explanations for the psychological mechanisms at work, they typically do not address how subjects have evolved these preferences in the first place. If evolutionary forces have shaped our preferences, it remains unclear why our preferences seem to fail to maximize our material payoffs.


To resolve this puzzle, it may be necessary to analyze the preferences we have in light of the ecological context in which they evolved (Fawcett et al. 2014). For instance, a preference to avoid potentially negative information (e.g., about disease) could have emerged as a means of self-deception, which in turn may be used to deceive others (Trivers 2011a). In line with this argument, the simulations of Johnson and Fowler (2011) suggest that individuals benefit from a certain degree of overconfidence in evolutionary competitions. Seen from this evolutionary perspective, even nonstrategic instances of deliberate ignorance suddenly have a strategic component.

Appendix 9.1

Static Equilibrium Analysis

For the two models considered in this chapter, one can derive the following equilibrium predictions by backward induction (for the commitment model) or by solving for the perfect Bayesian Nash equilibria (for the signaling model) (Fudenberg and Tirole 1998). In the commitment model, the sequential game illustrated in Figure 9.1 allows for three generic outcomes:

1. Opportunism equilibrium: If p < b2/(b2 − d2), the game has a unique equilibrium according to which player 1 looks at the environmental state in the second stage, player 2 accepts looking in the third stage, and player 1 cooperates in the fourth stage if and only if the costs are low.
2. Ignorance equilibrium: If b2/(b2 − d2) < p < (b1 − d1 − cL)/(cH − cL), there is a unique equilibrium according to which player 1 refuses to look at the environmental state in the second stage, player 2 only accepts those co-players in the third stage who refuse to look, and player 1 unconditionally cooperates in the fourth stage.
3. No interaction equilibrium: If p > max{b2/(b2 − d2), (b1 − d1 − cL)/(cH − cL)}, then in any equilibrium, player 2 rejects player 1 in the third stage irrespective of player 1’s decision in the second stage.
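To see how these three conditions partition the parameter space, the following Python sketch (our own illustration, not part of the original chapter) classifies the predicted outcome directly from the thresholds above; the demo uses the payoff values reported for the simulations below.

```python
def classify_equilibrium(p, b1, b2, d1, d2, cL, cH):
    """Classify the commitment-model equilibrium from the conditions above.

    p       probability of a high-cost environment
    b1, b2  players' payoffs when player 1 cooperates
    d1, d2  players' payoffs when player 1 defects
    cL, cH  low and high cost of cooperation for player 1
    """
    opportunism_bound = b2 / (b2 - d2)            # below this, player 2 tolerates opportunists
    ignorance_bound = (b1 - d1 - cL) / (cH - cL)  # below this, player 1 cooperates by default
    if p < opportunism_bound:
        return "opportunism equilibrium"
    if p < ignorance_bound:
        return "ignorance equilibrium"
    return "no interaction equilibrium"

# With the simulation payoffs (b1 = b2 = 6, d1 = 1, d2 = -12, cH = 7, cL = 1),
# the two bounds are 1/3 and 2/3, matching the three regimes of Figure 9.2:
for p in (0.3, 0.5, 0.8):
    print(p, classify_equilibrium(p, b1=6, b2=6, d1=1, d2=-12, cL=1, cH=7))
```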

For the signaling model, a full description of possible equilibria is more elaborate. Here, we only describe the equilibria that can be observed for the parameters used in Figure 9.3.

1. Opportunism equilibrium: There is a pooling equilibrium in which both types of player 1 look at the environmental state and are accepted if pU ≤ 11/15 − 6/5 pF. This condition ensures that player 2 finds it, on average, beneficial to interact with fully opportunistic co-players.
2. Ignorance equilibrium: There is a pooling equilibrium in which both types of player 1 refuse to look at the state, are accepted, and cooperate unconditionally, if 1/3 ≤ pU ≤ 2/3. The first inequality ensures that player 2 has an incentive to reject an interaction with a co-player who looks; the second inequality ensures that both types of player 1 cooperate by default.
3. No interaction equilibrium: If pF ≥ 2/3, then in any equilibrium of the game, player 2 rejects her co-player in the third stage.
4. Partial ignorance equilibrium: If pF ≤ 2/3 ≤ pU, there is a separating equilibrium in which only the unfavorable players look at the state, only those players who do not look are accepted, and accepted players cooperate unconditionally.

Evolutionary Analysis

The results in Figure 9.2 and Figure 9.3 are based on simulations of a “pairwise comparison process” (Traulsen and Hauert 2009). In the following we describe the process for the signaling model; the process for the commitment model follows by setting q = 1 (such that effectively only one type of player 1 is present). There are two populations: population 1 and population 2 of size N1 and N2, respectively. Population 1 consists of two subpopulations: a subpopulation of size qN1 of unfavorable players and a subpopulation of size (1 − q)N1 of favorable players. In each time step, players of population 1 are matched with players in population 2 to play the envelope game. To this end, each player in population 1 is equipped with a strategy represented by a 4-tuple (x, y0, yH, yL) ∈ {0,1}^4. The first entry x is the player’s probability to learn the state of the environment in stage 2. The other entries give the player’s cooperation probability in the fourth stage, given that the costs are unknown (y0), known to be high (yH), or known to be low (yL). Similarly, strategies of players in population 2 are represented by a 2-tuple (zl, zn) ∈ {0,1}^2. The two entries give the player’s probability to accept the co-player, depending on whether or not the co-player looked at the state of the environment. Given each player’s strategy in either of the two populations, we can calculate each player’s expected payoff.

After interacting in the envelope game, we assume that one player is randomly chosen from either of the two populations. This player is then given the chance to revise his or her strategy. With probability μ (akin to a mutation rate) the player simply picks a new strategy at random. With probability 1 − μ, the player compares his or her own payoff with the payoff of a randomly chosen role model from the same (sub)population. If the focal player’s payoff is π and the role model’s payoff is π′, the focal player adopts the role model’s strategy with probability ρ = (1 + exp[−β(π′ − π)])^−1. The parameter β > 0 is called the strength of selection. In the limiting case β → 0 the imitation probability simplifies to ρ = 1/2, independent of the players’ payoffs. In that case, imitation occurs essentially at random. For higher values of β, imitation events are increasingly biased in favor of strategies that yield a higher payoff.


This basic process consisting of mutation and imitation is then repeated over many time steps. Figure 9.2a–c shows for each time step of the simulation how often players in population 1 look at the state of the environment on average, and how often the game ends by player 1 cooperating. Figure 9.2d and Figure 9.3 show the respective time averages. The simulations for the commitment model are run with the following parameter values: b1 = b2 = 6, d1 = 1, d2 = −12, cH = 7, cL = 1, N1 = N2 = 100, β = 1. For the signaling model we set q = 0.5, b1F = b2F = 6, b1U = b2U = 5, d1F = 1, d2F = −12, d1U = –1, d2U = –10, cH = 7, cL = 1. Other parameter values and alternative evolutionary processes yield similar results.
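As a concrete illustration of this process, here is a minimal Python sketch of the pairwise comparison dynamics for the commitment model (a single type of player 1). It is our own illustrative implementation, not the authors’ code: we assume that a rejected interaction yields a payoff of 0 for both players (consistent with the equilibrium thresholds above) and a mutation rate μ = 0.01, which the chapter does not report.

```python
import numpy as np

rng = np.random.default_rng(1)

# Payoffs from Appendix 9.1 (commitment model)
b1, b2, d1, d2, cH, cL = 6, 6, 1, -12, 7, 1
N1 = N2 = 100
beta = 1.0   # strength of selection
mu = 0.01    # mutation rate (our assumption; not reported in the chapter)
p = 0.5      # probability of a high-cost environment (intermediate: ignorance region)

def play(s1, s2):
    """Expected payoffs (player 1, player 2) of one envelope-game interaction.
    s1 = (x, y0, yH, yL): look?; cooperate if cost is unknown / high / low.
    s2 = (zl, zn): accept a co-player who looked / who did not look."""
    x, y0, yH, yL = s1
    zl, zn = s2
    if x:                                    # player 1 opens the envelope; player 2 sees this
        if not zl:
            return 0.0, 0.0                  # rejected; assumed outside option of 0
        hi = (b1 - cH, b2) if yH else (d1, d2)
        lo = (b1 - cL, b2) if yL else (d1, d2)
        return p * hi[0] + (1 - p) * lo[0], p * hi[1] + (1 - p) * lo[1]
    if not zn:
        return 0.0, 0.0
    if y0:                                   # cooperate without knowing the cost
        return b1 - (p * cH + (1 - p) * cL), b2
    return d1, d2

def avg_payoff(role, idx, pop1, pop2):
    """Expected payoff of one player against the whole opposing population."""
    if role == 1:
        return np.mean([play(pop1[idx], s2)[0] for s2 in pop2])
    return np.mean([play(s1, pop2[idx])[1] for s1 in pop1])

pop1 = [tuple(rng.integers(0, 2, 4)) for _ in range(N1)]
pop2 = [tuple(rng.integers(0, 2, 2)) for _ in range(N2)]

for _ in range(30_000):                      # transparent rather than fast
    role, pop, k = (1, pop1, 4) if rng.random() < 0.5 else (2, pop2, 2)
    i = rng.integers(len(pop))
    if rng.random() < mu:                    # exploration: adopt a random strategy
        pop[i] = tuple(rng.integers(0, 2, k))
        continue
    j = rng.integers(len(pop))               # role model from the same population
    pi_i = avg_payoff(role, i, pop1, pop2)
    pi_j = avg_payoff(role, j, pop1, pop2)
    if rng.random() < 1 / (1 + np.exp(-beta * (pi_j - pi_i))):
        pop[i] = pop[j]                      # imitation favors higher payoffs

print("abundance of looking:       ", np.mean([s[0] for s in pop1]))
print("cooperation without looking:", np.mean([s[1] for s in pop1]))
```

With p = 0.5 a run of this sketch should drift toward the ignorance equilibrium of Figure 9.2b (little looking, cooperation by default); setting p = 0.3 or p = 0.8 should instead reproduce the opportunism and no-interaction outcomes.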

10 The Zoo of Models of Deliberate Ignorance

Pete C. Trimmer, Richard McElreath, Sarah Auster, Gordon D. A. Brown, Jason Dana, Gerd Gigerenzer, Russell Golman, Christian Hilbe, Anne Kandler, Yaakov Kareev, Lael J. Schooler, and Nora Szech

Abstract

This chapter looks at deliberate ignorance from a modeling perspective. Standard economic models cannot produce deliberate ignorance in a meaningful way; if there were no cost for acquisition and processing, data could be looked at privately and processed perfectly. Here the focus is on cases where the standard assumptions are violated in some way. Cases are considered from an individual’s perspective, without game-theoretic (strategic) aspects. Different classes of “not wanting to know” something are identified: aside from the boring case of the cost of information acquisition being too high, an individual may prefer to not know some information (e.g., when knowledge would reduce the enjoyment of other experiences) or may want to not use some information (e.g., relating to a lack of self-control). In addition, strategic cases of deliberate ignorance are reviewed, where obtaining information would also signal to others that information acquisition has occurred, and thus it may be better to remain ignorant. Finally, the possibility of deliberate ignorance emerging in population-level models is discussed, where there seems to be a relative dearth of models of the phenomenon at present. Throughout, the authors make use of examples to summarize different classes of models, ideas for how deliberate ignorance can make sense, and gaps in the literature for future modeling.

Sometimes a man wants to be stupid if it lets him do a thing his cleverness forbids.
― John Steinbeck, East of Eden


Introduction

The term deliberate ignorance implies that an agent has the option of obtaining some information and chooses not to. If there were a great cost to acquiring the information, a deliberate choice not to acquire it would come as no surprise, so we restrict ourselves to considering cases where the information could have been acquired at a very small (or zero) cost to the agent, yet the agent chooses to remain ignorant.

In standard economic models of rational decision making, the expected “value of information” (i.e., knowing the information for free) can never be negative (Hirshleifer and Riley 1992). This is based on the assumption that the agent knows the underlying distribution of data with respect to the world, so, although a particular sample may happen to be misleading, on average, the effect of receiving information will be non-negative. Simply put, standard models assume that an individual is in tune with their environment, so if information alters actions then, on average, it will improve the outcomes (and, if some data were deemed not to be useful, the individual could simply continue as though they had not received it, which is different from ignoring the data in the first place). To understand this in a biological context, see McNamara and Dall (2010).

As the value of information is non-negative in standard models (and information is generally regarded as valuable for informing future decisions), the fact that individuals display deliberate ignorance (even when the cost of data acquisition is small) can seem surprising. It would be useful to understand when, and why, the phenomenon occurs. We will examine this by violating the standard assumptions of economic models in various ways, which is where theories and models come to bear.

It can be helpful to distinguish models from theories. Theories make assumptions about the world and from these assumptions derive predictions, which can be tested empirically. Models are abstract, simplified representations of the world or of theories about the world. Good theories are predictively powerful (i.e., consistent with empirical data, but not generally consistent with any imaginable data), broadly applicable across a wide range of situations, and parsimonious in their assumptions. Good models are realistic enough to describe the essential features of the world that the model is trying to capture, yet simple enough to give us insight. Models cannot perfectly describe the world in all its complexity; they are useful if they help us understand a particular aspect of the world by abstracting away from irrelevant details. Formalizing a theory as a mathematically precise theoretical model can help ensure that we fully understand the theory, that we are aware of its assumptions (e.g., converting implicit assumptions of a verbal theory to explicit ones), and that we can derive from them unambiguous predictions (it can be difficult to know the predictions of a complex theory without a model).

There are many types of models (e.g., descriptive, predictive, normative) and each is, by definition, an abstraction, so no model is a perfect representation of the real world.


Consequently, the choice of model should depend on its intended purpose. Models have many potential benefits, including
• exposing the logic of a situation,
• making new, or more accurate, predictions,
• making predictions independent of the theorizer,
• helping to guide worthwhile empirical work (both for testing and improving on existing models and theories), and
• informing discourse on the likely effect of changes to the system (e.g., for policy planning).

Not all models are good; some models that seemingly explain a problem merely redescribe the data in terms of a new construct which assumes the effect.1 It is also easy for modeling to create “just-so” stories, providing seeming explanations for an effect, but with no other capability. To avoid just-so stories, the key is to find predictions from the model that we did not know beforehand. This allows the model to be tested. It is also worth noting that models often only produce predictions over a limited range of conditions. Although this may initially seem like a limitation, the fact that a model may prescribe different outcomes over different ranges means that the model has predictive power. While it is generally not possible to show that a model is “right” (just because it produces correct predictions does not mean that it will do so in all cases, forevermore), it is possible to show that one is wrong, and it is often possible to contrast the predictions of models against one another.

Models of deliberate ignorance could be used to
• show why deliberate ignorance exists,
• help with our ability to infer cases of deliberate ignorance (e.g., from behavior or physiological measurements), and
• understand the implications of policy changes (on whether particular information will be avoided, for instance) by modifying existing parameters or introducing new aspects to an existing model.

From a biological perspective, the question of “why” an organism displays deliberate ignorance can potentially be answered in four ways: mechanism (causation), function (adaptive value), ontogeny (development), and phylogeny (Tinbergen 1963). Each of these approaches can have its own models, and even if one model is perfect in a particular role, it may not necessarily help in another role. For instance, a mechanistic model of how the brain operates and results in an individual choosing to ignore information in a particular situation may be unlikely to help explain why that kind of brain evolved (from a functional perspective of fitness maximization in the species).

1 Molière’s parody to the question, “Why does opium make people sleepy?” is “because of its dormitive properties.” Similarly, the fact that people give money to others in the dictator game has been “explained” by other-regarding motives, which is very close to redescription.


Perhaps the most intriguing of these cases is the functional question of why natural selection would have favored individuals with a decision-making system that ignores cheap but potentially useful data (or, more precisely, that displays deliberate ignorance in cases where the expected value of the information minus the cost of acquisition is greater than the expected value of not having the information). When dealing only with the data itself (without any signal being available to others of whether an individual has obtained the data), the functional reasons for deliberate ignorance existing are that
• the cost of gathering the data is prohibitive,
• the cost of storing and processing the data is prohibitive, or
• the processing of the information is suboptimal (e.g., automatic processes cannot be switched off, resulting in suboptimal actions).

When data acquisition also signals to others that data have been received, there is a fourth, strategic reason for choosing to remain ignorant, which we discuss below (see section on “Interpersonal Strategic Perspectives”).

We take an example-based approach to discussing various classes of models and distinguish two within-individual reasons for deliberate ignorance, irrespective of others. First, we discuss cases where an individual would prefer to not know some information (e.g., when knowing would reduce the enjoyment of other experiences). We then discuss cases where an individual would want to not use some information (e.g., relating to a lack of self-control). Thereafter, we turn to the strategic cases of deliberate ignorance: the effect of signaling that information acquisition has occurred. Finally, we explore the possibility of deliberate ignorance emerging in population-level models.

Preferring to Not Know Information

It is easy to list cases where an individual would prefer to not know some information. When reading a murder mystery, for instance, it would be easy to flick to the last page (and learn the culprit’s identity) before reading the rest. For most people, doing so would reduce their enjoyment of the book; they would prefer to remain deliberately ignorant until reading to the end. Many hedonic reasons may be supplied for avoidance of such information. For some it may relate to the feeling of suspense, whereas others may enjoy trying to work something out. In some cases, such as waiting to hear the punch line of a joke, there is likely to be fairly universal agreement that deliberate ignorance is best. In other cases, this aspect can differ significantly between people. For instance, although some would like to know how a magic trick is done or to understand how a rainbow is formed, others may prefer (and thus deliberately choose) not to know. Such choices can depend on how an individual perceives the expected payoffs of knowledge and their subsequent interactions with the world.

In this section, we discuss models where an individual seeks to maximize their expected level of “happiness.” There are numerous models that can be used to explain deliberate ignorance under such circumstances; several are based on an individual’s subjective utility.


Subjective Utility

The standard approach to decision making within economics is known as subjective expected utility theory (SEUT). According to SEUT, we as theorists can, if we make certain assumptions about people’s rationality, make sense of a person’s choices between possible outcomes if we assume that people behave as if they possess (a) stable utility functions and (b) beliefs about the probabilities of different outcomes. More specifically, consider an event with just two possible outcomes, x1 and x2, and assume that an individual believes that the probabilities of these outcomes are p(x1) and p(x2), respectively. The subjective expected utility associated with the event will then simply be

u(x1) × p(x1) + u(x2) × p(x2),    (10.1)

where u(xi) is the utility of xi for that person. The SEUT approach then assumes that the action (i.e., event) with the highest subjective expected utility is chosen. Note that SEUT assumes that individuals behave “as if” they are calculating the best option all the time, but the models often say nothing about the process by which such decisions are reached.
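To make the “as if” machinery concrete, here is a minimal Python sketch (our illustration, not from the chapter) of an SEUT chooser applying Eq. 10.1, with made-up utilities and beliefs:

```python
def subjective_expected_utility(prospect):
    """Eq. 10.1: sum of utility times subjective probability over the outcomes."""
    return sum(u * p for u, p in prospect)

def choose(prospects):
    """SEUT assumes the prospect with the highest subjective expected utility is chosen."""
    return max(prospects, key=subjective_expected_utility)

# Illustrative choice between a gamble and a sure thing:
gamble = [(10.0, 0.5), (-4.0, 0.5)]   # outcomes x1, x2 as (u(x), p(x)) pairs
sure_thing = [(2.0, 1.0)]
print(choose([gamble, sure_thing]) is gamble)  # True: 0.5*10 + 0.5*(-4) = 3 > 2
```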

Perspectives on Belief-Based Utility

Bayesian updating is one of the standard approaches used to represent learning and optimal decision making in economic models; this approach fits with SEUT. Recent models from behavioral economics on deliberate ignorance can be divided into two groups, depending on whether or not they rely on Bayesian updating.

In the first case, agents are assumed to be Bayesian updaters. They may decide to avoid or ignore information and thus stick to their Bayesian prior to reduce problems of self-control, keep a halfway decent self-image, or stick to a not-too-drastic belief about their health status. Examples include Bénabou and Tirole (2002), Carrillo and Mariotti (2000), and Mariotti et al. (2018), as well as some models in which beliefs enter the utility function directly, such as those of Caplin and Leahy (2001, 2004) or Schweizer and Szech (2018). In this class of models, agents will always respect the rules of Bayesian updating. As an illustration, take the example of Huntington disease: An agent knows that one parent carries the genetic mutation for the disease, while the other parent does not. The probability that the agent has the mutation is 50%. As she is Bayesian, this is also her belief of having it. If she takes a perfectly revelatory test, she will learn that the mutation either is in her blood (leading to disease) or is not. When this agent thinks about getting information from the test versus ignoring it, she can only end up in situations where her belief about getting Huntington disease is 0%, 50%, or 100%. If she decides to ignore information, she sticks to the Bayesian prior of 50%.

This is very much in contrast to the second class of models, such as Brunnermeier and Parker (2005) or Gollier and Muermann (2010), in which agents “optimize” their beliefs. Consider again the example of Huntington disease. If the agent has not been tested yet, she can choose her beliefs to be anything from 0% to 100%. Thus, she may want to bias her beliefs optimistically and deviate substantially from the Bayesian prior of 50%. The latter models thus provide a lot of leeway to design (i.e., bias) beliefs, as Bayesian rules do not have to be followed.

An intermediate solution is proposed by Golman and Loewenstein (2018), who put forward an information gap belief-based utility model in which the impact of beliefs on utility depends on the attention paid to those beliefs. They assume that getting information attracts attention to the affected beliefs, with more surprising information attracting more attention. Golman et al. (2019) analyze the predictions of this model for information acquisition and avoidance. Information that may produce beliefs that are unpleasant to think about can have disutility because it forces people to pay more attention to beliefs that they do not want to attend to. This disutility is traded off against the pleasure of satisfying curiosity and the instrumental value of the information. The model predicts that when beliefs are sufficiently unpleasant to think about, a person will prefer to remain deliberately ignorant, and as the intrinsic valence of that belief gets worse, the person would be willing to pay even more to remain ignorant. The model also makes predictions about when curiosity will overcome deliberate ignorance and cause a person to seek out information.

Contrasting Two Utility-Based Models

In a standard utility model, there are states of the world θi ∈ Θ with probability p(θi) and choices between strategies (or actions) sj ∈ S that map from the set of states Θ into a set of material outcomes X, with a utility function defined on X. The standard value of information is

V = Σi p(θi) maxj u(sj(θi)) − maxj Σi p(θi) u(sj(θi)).    (10.2)

In belief-based utility models, beliefs about the state of the world enter the utility function, but the value of information can still be modeled as the expected utility of the posterior beliefs (including the utility of the choices made contingent on those beliefs) minus the utility of the prior belief (including the utility of the choice made given that prior):


V = Σi p(θi) maxj u(pi, sj(θi)) − maxj u(p, xj),    (10.3)

where p is the prior belief about states of the world, pi is the belief after learning that the state is θi (i.e., the degenerate distribution on this state), and xj is the (prior) distribution over outcomes that would result from choosing strategy sj. We may rewrite this as

V = Σi p(θi) u(pi, xi*) − u(p, x*),    (10.4)

where xi* is the outcome of the strategy chosen optimally under the posterior belief pi, and x* is the outcome distribution of the strategy chosen optimally under the prior.

In the information gap model, attention enters the utility function, and if π denotes beliefs about answers to questions and about the distribution over outcomes (i.e., π takes the place of p, xj), the value of information from answering a question becomes

V = Σi π(Ai) u(πi, wi) − u(π, w),    (10.5)

where the Ai are the possible answers to the question, πi is the belief after finding out that the answer is Ai, w is the attention placed on each question before getting any information, and wi is the attention placed on each question after finding out that the answer is Ai (Golman et al. 2019). The information gap model assumes that the attention weight vector w depends on the importance of the various questions (which is modeled in terms of the spread of the utilities that would result from different answers) and on the salience of the various questions (which is not modeled at all); the attention weight wi additionally depends on the surprise associated with finding out answer Ai (which is modeled in terms of Kullback-Leibler divergence). The model also assumes a specific form for the utility function:

u(π, w) = Eπ[m(x)] + Σk wk (vk(π) − H(πk)),    (10.6)

where m(x) is the material value of outcome x, vk(π) is the intrinsic valence of the beliefs about question k, H is the entropy function (a measure of how uncertain each belief is), and k indexes the various questions that the person is aware of (Golman and Loewenstein 2018).

Consider, for example, an opportunity to get tested for HIV with the assumptions that this is the only question that the individual is aware of, that the individual would choose to take medicine if the test is positive, and that they would choose to not take medicine if the test is negative or if he remains deliberately ignorant (see Golman and Loewenstein 2018). With p denoting the probability of having HIV, π+ and π− the beliefs after a positive and a negative test, π0 the prior belief, and w+, w−, and w the corresponding attention weights, the value of information becomes

V = p u(π+, w+) + (1 − p) u(π−, w−) − u(π0, w).    (10.7)

Without loss of generality, consider the material value of not having HIV to be 0, and also assume that not having HIV has neutral belief valence 0 (i.e., the person does not mind or enjoy thinking about not having HIV).


Then a negative test result contributes nothing: u(π−, w−) = 0. Letting the material value of having HIV and taking medicine be mT and the valence of believing that one has HIV be vH, we get

u(π+, w+) = mT + w+vH.    (10.8)

Lastly, letting the material value of having untreated HIV be mU, and noting that the ignorant person takes no medicine and holds the prior belief, we get

u(π0, w) = pmU + w(pvH − H(p)).    (10.9)

Putting this together, a person would choose to be deliberately ignorant of their HIV status if

p(mT + w+vH) < pmU + w(pvH − H(p)).    (10.10)

Rearranging terms, the condition for deliberate ignorance is

p(mT − mU) + wH(p) < (w+ − w)(−pvH).    (10.11)

Interpreting Eq. 10.11, the instrumental value of the information, p(mT − mU), plus the intrinsic value of reducing uncertainty (or satisfying curiosity), wH(p), needs to be less than the benefit of not increasing attention on a negative-valence belief, (w+ − w)(−pvH). The assumption that surprise attracts attention implies that (w+ − w) > 0. The prediction of whether or not the person chooses to be deliberately ignorant depends on how much difference it makes to take medicine when HIV positive (mT − mU), on how unpleasant it is to think about being HIV positive (vH), and on how much attention the person was initially paying (w), which itself depends on how salient the question was and on its importance. The model predicts that if thinking about being HIV positive is sufficiently bad, the person will choose to be deliberately ignorant, and that, given fixed values of how unpleasant it is to think about being HIV positive and of how much taking medicine helps in that case, a person could choose to be deliberately ignorant if the question is not initially salient (and thus attracts little attention), but could change his mind and choose to become informed if the question becomes highly salient.
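As a numerical sketch of this condition (our own illustration, using the notation mT, mU, vH, w, and w+ defined above; all numbers are invented):

```python
import math

def entropy(p):
    """Binary entropy H(p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def prefers_ignorance(p, mT, mU, vH, w, w_plus):
    """Deliberate-ignorance condition of Eq. 10.11: instrumental value plus
    curiosity value falls short of the cost of attending to a bad belief."""
    instrumental = p * (mT - mU)               # value of treating when positive
    curiosity = w * entropy(p)                 # value of resolving the uncertainty
    attention_cost = (w_plus - w) * (-p * vH)  # extra attention on an unpleasant belief
    return instrumental + curiosity < attention_cost

# Treatment helps only a little, the belief is very aversive, and a positive
# test would sharply increase attention to the question:
print(prefers_ignorance(p=0.1, mT=-2.0, mU=-3.0, vH=-50.0, w=0.2, w_plus=1.0))  # True
```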

In a model based on optimism, deliberate ignorance may arise from a person choosing to hold optimistic beliefs in the absence of information but being unable to maintain optimistic beliefs after getting information. Following Oster et al.’s (2013) analysis of Huntington disease testing, we can use Brunnermeier and Parker’s (2005) model of optimism to analyze HIV testing. Accordingly, if a person chooses to be deliberately ignorant, he can choose his belief (i.e., the probability q that he believes he has HIV) to reduce the anxiety of having it at the cost of then potentially mistreating it, even though he makes the decision of whether to remain deliberately ignorant in the first place in some sense knowing the true probability p that he actually has HIV. While in principle he could choose any value of q, his choice really comes down to whether to choose q = q*, the minimum level of risk that would induce him to seek treatment, or q = 0 (no risk).

A person who chooses q = 0 has no anxiety but has a probability p of having untreated HIV, which has value mU, so the expected utility of choosing q = 0 is pmU. A person who chooses q = q* will get the treatment and thus has a p chance of having HIV with treatment, which has value mT, and a 1 − p chance of getting unnecessary treatment despite not having HIV, which has value mF. This belief choice also leads to anxiety from anticipating these outcomes with probabilities q* and 1 − q*, respectively. Thus, the expected utility of choosing q = q* is

pmT + (1 − p)mF + δ(q*mT + (1 − q*)mF),    (10.12)

where δ is the weight placed on the belief-based utility (i.e., anxiety). (Note that q*mT + (1 − q*)mF = q*mU because the person must be indifferent about that treatment at q*.) The person chooses q = 0 if

pmU > pmT + (1 − p)mF + δ(q*mT + (1 − q*)mF).    (10.13)

If the person gets the HIV test, he can no longer choose his belief. Instead his expected utility, conditional on proper treatment, is p(1 + δ)mT. The person will choose to be deliberately ignorant of the test results if

max{pmU, pmT + (1 − p)mF + δ(q*mT + (1 − q*)mF)} > p(1 + δ)mT.    (10.14)

The model predicts that if the thought of having HIV (even if treated) is sufficiently negative, then placing enough weight on anticipatory utility (i.e., being sufficiently anxious) will cause somebody to avoid the test result (i.e., choose to be deliberately ignorant).
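The same comparison can be sketched in code (again our own illustration of Eqs. 10.12–10.14 as reconstructed above; all numbers, and the way q* is pinned down, are assumptions for the demo):

```python
def eu_belief_zero(p, mU):
    # optimistic belief q = 0: no anxiety, but HIV (probability p) goes untreated
    return p * mU

def eu_belief_qstar(p, q_star, mT, mF, delta):
    # belief q = q*: treatment is sought; anxiety weights anticipated outcomes by delta
    material = p * mT + (1 - p) * mF
    anticipatory = delta * (q_star * mT + (1 - q_star) * mF)
    return material + anticipatory

def eu_tested(p, mT, delta):
    # after a test, beliefs are correct and treatment is proper if positive
    return p * (1 + delta) * mT

p, delta = 0.1, 2.0
mT, mU, mF = -10.0, -12.0, -1.0   # treated HIV, untreated HIV, unnecessary treatment
q_star = mF / (mF + mU - mT)      # one way to pin down the indifference belief q*
ignorant = max(eu_belief_zero(p, mU), eu_belief_qstar(p, q_star, mT, mF, delta))
print(ignorant > eu_tested(p, mT, delta))  # True: enough anxiety makes the test unattractive
```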

Finally, we note that although the subjective utility approach assumes that utility functions may be inferred, there has been no attempt here to tie this in with evolving those functions (from a functional perspective of the utility curves being adaptively beneficial). Natural selection acts on our behaviors, irrespective of how we feel about things (individuals who constantly feel sad have the same expected fitness as those who constantly feel happy, if the actions of each are the same). So, while we care about mental happiness, pain, and so on, these are only adaptive insomuch as they assist our mental drives in guiding us toward adaptive behavior in certain situations.

Links with Forgetting and Heuristics

Schooler (this volume) outlines deliberate strategies that people might use to prevent the retrieval of emotionally disturbing information. This is akin to the “emotion-regulation and regret-avoidance” function that Hertwig and Engel (this volume) ascribe to deliberate ignorance (see also Gigerenzer and Garcia-Retamero 2017). One strategy is to encode new memories that interfere with the retrieval of the disturbing memories. A second approach is to exploit human memory’s propensity to confuse imagined events with real ones (Loftus 1997). In essence, rather than having accurate memories of the past, it is better for our emotional well-being to mask the bad experiences with false memories (thus becoming deliberately ignorant of the past). Techniques to reconsolidate negative memories with more positive ones are being developed to treat people with posttraumatic stress disorder (Gardner and Griffiths 2014). However, to the best of our knowledge, precise computational models for the construction of false memories, whether beneficial or not, are rare (an exception is Hoffrage et al. 2000). This approach also suffers from the same difficulty as mentioned above, of not linking with the functional aspect of whether such processes make sense from an adaptive perspective.

Making Use of Ignorance

We conclude this section on a somewhat different tack, by noting that the condition of ignorance can itself be informative, as ignorance can correlate with what one wants to know. For instance, the recognition heuristic (Goldstein and Gigerenzer 2002) has been shown to be highly effective in some scenarios. Consider the prediction of the outcomes of tennis matches at a major event. If a spectator has heard of one player but not their opponent, the recognition heuristic predicts that the player whose name is recognized will win. Mere recognition has been shown to predict as well as, or better than, the ATP rankings and Wimbledon experts (Serwe and Frings 2006). (Note, however, that ATP rankings are not set up purely to predict match outcomes; the rankings include a recency bias to encourage players to play more matches.) The recognition heuristic has also been successfully used to ignore information deliberately while investing in portfolios of stocks that reflect the limited name recognition of firms (Borges et al. 1999; Ortmann et al. 2008). The heuristic works well in situations where a lack of recognition has high predictive power.

Thus, heuristics can make use of a lack of knowledge; ignorance (lack of recognition) in the recognition heuristic is, itself, information. When that cue of ignorance has greater validity than other potential cues, it is theoretically possible for an individual to benefit by preferring to remain ignorant rather than recognize additional cases (e.g., when they recognize half the players in a tennis tournament). This outcome does not contradict the findings of the standard economic model, as it violates the standard assumptions by assuming a form of bounded rationality; recognition is binary, in contrast with standard models, which would assume graded information levels (e.g., of how often a player had been seen before) and full use of all available information.
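A minimal sketch of the heuristic (our illustration; the player names are invented):

```python
def recognition_heuristic(player_a, player_b, recognized):
    """If exactly one name is recognized, predict that player wins; else no prediction."""
    a, b = player_a in recognized, player_b in recognized
    if a and not b:
        return player_a
    if b and not a:
        return player_b
    return None  # heuristic does not discriminate; fall back on other cues or guessing

recognized = {"Player A", "Player B"}  # the spectator has heard of only these players
print(recognition_heuristic("Player A", "Qualifier X", recognized))  # "Player A"
print(recognition_heuristic("Player A", "Player B", recognized))     # None
```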


Ignorance (or partial ignorance) can also be beneficial when it comes to choosing what to attempt. For instance, someone trying to climb the academic ladder might benefit from not recognizing that all of the current professors are of the opposite sex. Such information could be disheartening and cause the person to believe that a professorship is most unlikely, a result that would then be self-fulfilling. It is conceivable that someone might be subconsciously aware of such a fact and then “choose” (i.e., consciously, and thus deliberately) not to look into data on the topic. If someone’s work ethic could be influenced by such data, this strategy may be beneficial.

Wanting to Not Use Information

We now turn to cases where making use of information is expected to lead to worse outcomes, as in situations where processing data is automatic. Under these circumstances, it obviously makes sense to be deliberately ignorant. Some cases of deliberate ignorance are imposed at a group level (e.g., what a jury is allowed to know about a defendant); here we focus on choices at an individual level.

Overfitting and Forgetting

It has long been recognized that it is easy to “overfit” data. Given a set of instances from which to learn, a model can be fitted using features, by finding the set of parameters which best predicts the outcome data from the feature data. However, when used in an unsophisticated way, this approach typically tries to make too much use of the feature data; to make good predictions, it would be better to deliberately ignore some of the features because the “true” validity of the information is unknown. Consider, for example, a wild salmon fishery where the goal is to know the structure of the true biological model of the fishery’s salmon population. Such a model will have many unknown parameters, because salmon are complicated animals with complicated life histories. If we are unable to gather enough data to estimate accurately all of the different parameters, we might make better decisions by using a simpler model that ignores information. If, however, we knew the values of the parameters with sufficient accuracy, then ignoring this information would not lead to better decisions. Thus, the value of deliberate ignorance arises from ignorance of something else.
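The point can be illustrated with a toy regression (our own sketch, with an invented data-generating process standing in for the fishery): a flexible model that uses many features of a small sample predicts new data worse than a simpler model that deliberately ignores them.

```python
import numpy as np

rng = np.random.default_rng(0)

# True relationship: y depends linearly on x; the noise stands in for the many
# unmeasured influences on a complicated system such as a salmon population.
x_train = rng.uniform(-1, 1, size=20)
y_train = 2.0 * x_train + rng.normal(scale=0.5, size=20)
x_test = rng.uniform(-1, 1, size=1000)
y_test = 2.0 * x_test + rng.normal(scale=0.5, size=1000)

for degree in (1, 10):
    coeffs = np.polyfit(x_train, y_train, degree)  # higher powers of x act as extra features
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d} fit: test MSE = {mse:.2f}")
# The degree-10 fit chases noise in the 20 training points; the degree-1 model,
# which ignores the extra features, typically generalizes better.
```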

Kareev (2012) shows that a correlation is more likely to be detected as sample size (or memory) decreases. In contrast to the example above, this does not arise from not knowing something else about the system. When decisions are discrete, there may be nothing that we can tell the agent that would mean that acquiring a few more samples would improve decisions. However, like the salmon example, it is also a consequence of how bias and variance jointly influence patterns of error. There are numerous techniques for mitigating the overfitting problem, such as using simple heuristics, leave-one-out (or, more generally, n-fold) cross-validation or, in the case of decision trees, various types of pruning (Mitchell 1997). When data are handled appropriately, there is no harm in receiving such data (if they are free to acquire and process), as they will not be overused, so there is no benefit in being deliberately ignorant. However, there are a couple of obvious caveats:

• If there is a cost to acquiring or processing the data, then it can be better to choose to avoid it.
• If processing of the data is automatic (i.e., it cannot be ignored) and is not always suitable, then it can be better to avoid it.

It is the second of these cases that we focus on here, as the more interesting. One reason that using data might be inappropriate is that the data may be out of date and thus no longer beneficial. Schooler (this volume) explores how memory processes, including forgetting, can achieve functions that Hertwig and Engel (this volume, 2016) have attributed to deliberate ignorance. There are well-developed computational models of human memory motivated by the observation that forgetting helps to prioritize important information and set aside information that is likely to be distracting (Anderson and Milson 1989). Beyond removing potentially interfering information, forgetting may be adaptive for specific purposes. For instance, Schooler and Hertwig (2005) implemented a model of the recognition heuristic and varied the forgetting parameter of their model. They showed that the recognition heuristic performed best at intermediate levels of forgetting. At low levels of forgetting, the model would likely recognize both options, whereupon the recognition heuristic could not be used. Similarly, at high levels of forgetting, neither name is likely to be recognized and again the recognition heuristic does not apply. There is, however, no claim that forgetting is “deliberate” in any conscious sense in the basic model. However, when the world is changing, there is good reason to consciously discount old data: deliberately ignoring it, much like forgetting it, would then make plenty of sense as a way of making sure that it is not used.

Collider Bias

A valid reason to omit a variable from consideration is that including the information can confound inference, regardless of how much data we have. The clearest example is known as collider bias. A collider is a variable that is a function of two (or more) other variables. Consider, for example, a lamp. Whether the lamp is “on” is a causal function of both the switch that controls it and the flow of electricity:

switch → lamp ← electricity


The lamp is a collider of the switch and electricity. Once we know the state of the lamp, it provides information about the switch and the electricity. If the lamp is on, we know that both the switch and the electricity are on as well. If the lamp is off, then either the switch or electricity (or both) are off. If we know the lamp is off and the switch is on, then the electricity must be off. The point is that while causation flows in one direction, from causes (switch or electricity) to results (lamp), statistical information can flow in all directions.

What does this have to do with confounding? Suppose we wish to learn about the relationship between education (E) and wages (W). How much does education influence (cause) wages, E → W? Suppose also that education and wages jointly influence hobbies (H), like sky diving or watching football. Now H is a collider of E and W. As a result, if we learn someone’s hobbies and their education, we also learn something about wages, in the same way that knowing whether the lamp is on and the switch is on tells us about the electricity. If we then regress W on E, including H as a covariate, it will result in a confounded estimate of the causal influence of E on W. Why? Because as soon as we condition on H (learn about H), statistical association flows along the path E → H ← W and biases our inference. We end up polluting the path E → W with information from the other path. If, instead, we omit H from consideration, no information flows along the path E → H ← W because, if we do not know H, then E tells us nothing about W through that path. There may be a number of variables like H (e.g., marriage status or number of children), and including any one of them as a “control” variable would bias inference, regardless of how much data we collect. Therefore, the reason for ignoring a control variable is distinct from overfitting concerns.

An interesting feature of the collider bias example is that it requires enough knowledge of the causal model to motivate deliberate ignorance of the collider (hobbies in the example). Therefore, a person practicing deliberate ignorance of, for instance, hobbies already knows (or thinks they know) more about hobbies than a person who might use hobbies in the analysis. So, the person is hardly ignorant about hobbies, in the abstract. Rather, they deliberately avoid gathering more information about them (as any such data should not be used). Another feature is that it does not matter, in terms of inference, whether the collider is never learned or simply not used in the analysis. This is a property of many individual motives for deliberate ignorance. The recognition heuristic is a plausible exception: it is not easy to un-recognize something and recall that it was previously un-recognized. There, ignorance and nonuse are connected. When we turn later to interpersonal, strategic reasons for deliberate ignorance, the difference between ignorance and nonuse will be crucial.
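A small simulation (our own sketch, with an invented linear data-generating process) shows the bias: conditioning on the collider H wrecks an otherwise clean estimate of E’s effect on W.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True causal model: E -> W with effect 1.0, and H is a collider: E -> H <- W.
E = rng.normal(size=n)
W = 1.0 * E + rng.normal(size=n)
H = E + W + rng.normal(size=n)

def ols(X, y):
    """Ordinary least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Regressing W on E alone recovers the causal effect (close to 1.0):
print(ols(E.reshape(-1, 1), W)[0])

# "Controlling" for the collider opens the path E -> H <- W and biases the
# estimate of E's effect (here it collapses toward 0.0):
print(ols(np.column_stack([E, H]), W)[0])
```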


An Evolutionary Case, Using Collider Bias

Suppose that a species has evolved in a situation where its members sometimes encounter potential cues, Xs and Zs, and must choose their behavior based on their expectations of another parameter, Y, which can only be inferred at the time of decision. Let us first assume that there is collider bias, as shown in Figure 10.1, so the individual should only go by X without making use of Z. Many such circumstances could be imagined. For instance, X could be any factor at a location that influences whether a nest should be built at that location, Y then represents whether it is good or bad to build a nest there (which cannot be directly perceived), and Z a factor influenced by the other two, such as the number of existing nests in the area.

With zero costs for data, it would be fully rational to take in all information (Xs and Zs) and then choose which data to make use of (as standard models of rationality typically assume). However, the costs associated with gathering information as well as the mental processes involved (e.g., building and maintaining memory banks, energy costs of processing information) mean that it can be best for an agent to have bounded rationality (in the sense of constraints on their mental capabilities). Consequently, rather than obtaining, storing, and mentally processing each case of Z that is encountered, natural selection may instead select for organisms that deliberately choose to ignore Z. (Note that there may also be a cost associated with ignoring an item; here we assume that this cost is smaller than the combined costs of not ignoring Z, as seems likely in many biological cases.) This then sets the stage for more extreme forms of deliberate ignorance to occur through environmental change.

Deliberate Ignorance Arising Through Environmental Change

In the modern world, deliberate ignorance may be displayed because recent environmental change has been occurring faster than our brains have been able to evolve. Suppose that, in the ancestral environment, there was the possibility of obtaining a particular type of information, but at significant cost.

Figure 10.1 Causal diagram of the simplest case of collider bias: X influences Y, and Z is a collider influenced by both X and Y (X → Z ← Y).


For instance, by approaching spiders or snakes, and sometimes being bitten, one would learn which were safe and which were not. Such information would be very useful, but generally it would be prohibitively costly to obtain. Instead, individuals may do much better to immediately avoid all snakes and spiders (regardless of their type) and largely learn only what they can about such animals from others. It makes sense, then, that psychological mechanisms should have evolved to steer clear of such dangers. This, of course, is not an interesting case of deliberate ignorance, as the costs of obtaining such information are large. In the modern world, however, with glass cages and the like, snakes and spiders can be approached and learned about directly, without paying that cost of potentially being bitten. Yet many people still display a strong aversion to such creatures, even when they are clearly behind glass. Thus, although information about the animals would be very cheap to obtain (simply by walking up to the glass and looking at them), many people2 display deliberate ignorance in a fairly extreme way, by exercising immediate avoidance.

The ancestral reason for data avoidance can be any case where the costs of data acquisition and processing are higher than the expected benefits of having that information (e.g., any case of collider bias where there was a cost for the information). Having evolved the tendency to avoid that information, an environmental change that modifies expected payoffs can then result in individuals avoiding information that, were they able to process it properly, it would be better to acquire. Similarly, it is possible that cues relating to the expected cost of acquiring or processing the information (rather than the actual costs) may have altered from ancestral settings. Thus, it is potentially very easy for deliberate ignorance to arise through environmental change.

Blinding and Bidding

We have seen that factors such as overfitting and collider bias mean that it is easy to make use of data when one would do better to avoid it. As people will often tend to make use of a variable when they know it (e.g., names in peer review), the option of "blinding" can make sense. This can occur at different levels: between organizations, between sections of an organization, or between individuals.

Sealed Bids and Blind Trusts

In a sealed-bid auction, a number of bidders submit their bids simultaneously, without knowing each other's bids. In the simple case of bidding to buy an item

2 This effect is not only observed in humans; numerous YouTube videos show cats reacting, in apparent terror, when they discover a cucumber that had been silently placed behind them. It seems highly unlikely that cucumbers ever posed a threat to cats in ancestral times, but it would have been highly adaptive for cats to leap away if a long green thing suddenly, and silently, appeared behind them.


(e.g., an art piece), the envelopes are opened and the highest bidder wins the item (typically then being required to pay his or her bidding price or, in some auctions, the second-highest price). Here, deliberate ignorance (in the form of a blind review) is not an issue. Another common case, which is relevant to deliberate ignorance, is where bidding takes place for some large project (e.g., paving a road, building an office complex, contracting to develop some weapons system), and the call for bids is usually made by a public body (government, state, city). Generally, competitors have to submit a detailed proposal on what they propose to do, how they plan to do it, how much they would demand for the project, and so on. Once bids are submitted, they are opened, rated, and compared. Rating is usually based on a number of criteria (e.g., quality of the plan, price asked, previous experience, financial resources) whose relative weights are announced in advance. The final score is the weighted average of the scores on the various criteria. In such bids, one often does not want the identity of the bidder to interfere with the evaluations (e.g., you do not want political supporters of the mayor to get an unfair advantage). In such cases, deliberate ignorance—hiding the identity of the bidder—may help. However, it may still benefit an individual to (surreptitiously) know the identity of each bidder. Thus, deliberate ignorance here is at a different level from that of the individual; it operates at the level of the organization, shielding its assessment from knowledge in other parts of the organization.

We contrast this with a blind trust, where politicians, for example, shield themselves from knowledge of their financial investments. This is arguably done as a signal to others (especially voters) that the individual can be trusted to be acting on behalf of everyone, rather than taking self-interested actions. The action also serves the individual in the longer term by protecting them from criticism if others (correctly or incorrectly) accuse them of making self-interested decisions.

Collective Deliberate Ignorance

Prostate-specific antigen (PSA) screening (i.e., a blood test for the early detection of prostate cancer in men without symptoms) is not recommended by the U.S. Preventive Services Task Force (USPSTF) and other national health organizations. It has also been outright rejected by the Swiss Medical Board and other organizations, as well as by Richard Ablin, the discoverer of PSA. Most health insurers do not pay for the test. The reason is that randomized studies have been unable to show that screening reduces total mortality after ten years (i.e., no life is saved), yet many men are harmed (e.g., incontinence and impotence) through the surgery or radiation treatments that follow a positive test. Nevertheless, many urologists still recommend PSA screening. Studies have shown that most urologists do not know the benefits and harms of PSA screening and seem to prefer to remain ignorant, even though


information is easily available on the USPSTF's website as well as through other national organizations (Gigerenzer 2014). Remaining ignorant can protect these physicians from being sued, as illustrated by the case of Daniel Merenstein, a U.S. physician who studied the evidence and informed a well-educated man about the pros and cons of PSA screening, after which the man declined the test. Unfortunately, a few years later the man developed advanced prostate cancer and sued Merenstein for having informed him instead of performing the test. The man was awarded the maximum amount, despite the defense having brought in national experts who testified that the benefits of the PSA test are unproven while the harms are proven (Merenstein 2004).

Aside from the risk of being sued, there appear to be two additional reasons for deliberate ignorance related to PSA screening. First, in Germany, PSA tests and their downstream consequences (e.g., biopsies) account for about 25% of the average urologist's earnings. If urologists were to look up the scientific evidence, this might cause an internal conflict (or cognitive dissonance) between making money and their self-image of being a good doctor (Festinger 1957). This internal reason for deliberate ignorance is similar to the notion of maintaining identity (internal consistency). Second, a urologist who looks up the scientific evidence and presents it to other urologists in talks or writings may expect to be regarded as a troublemaker and choose deliberate ignorance over being disrespected.

Can Reading Block Original Thinking?

Another thought-provoking question is the following: Should academics choose not to read the existing literature when starting to research a new topic? In some cases, reading the literature might bias one toward using existing paradigms and block original thinking (just as it can be harder to think of a particular word when a very similar word is already in mind). The extent to which it is worth reading the literature may depend on factors such as the quality of the researcher (arguably, very high-quality individuals may do best to start afresh) and the number of other researchers who have already tried to make progress on the topic.

One approach to modeling this would be to break a task into stages and assume that existing work has already addressed each one of the stages. A new researcher would then be able to choose whether to read the existing literature (in which case they will have some assumed heuristic for which stages to attempt to progress, based on the perceived progress with each stage) or have a clear run at each stage. Of course, there are easy ways to build models in which individuals choose to ignore data, such as putting a time cost on reading the existing literature. However, by imposing such costs, these constraints shift the explanation to a boring category of the (obvious) benefits being outweighed by the (obvious) costs. This approach would instead assume that there is no cost for reading the existing literature, except that automatic heuristics would then make


use of certain aspects of that existing work, which would then bias attempts to solve a task. The worth of such a model may therefore rest on whether the heuristics were worthwhile abstractions of reality.
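As a toy illustration of such a staged model, the following sketch simulates a researcher who either reads the literature (gaining on stages where prior work made progress, but anchored on stages where it stalled) or starts afresh. The stage counts and probabilities are invented for illustration:

```python
# Compare a "read the literature" researcher with one who starts afresh.
import random

random.seed(1)
P_FRESH = 0.5        # chance of an original solution per stage, starting afresh
P_PRIMED = 0.7       # reading helps on stages where prior work made progress...
BIAS_PENALTY = 0.25  # ...but anchors the reader on stages where it did not

def stages_solved(read_literature, prior_progress):
    solved = 0
    for stage_done in prior_progress:
        if read_literature:
            p = P_PRIMED if stage_done else P_FRESH - BIAS_PENALTY
        else:
            p = P_FRESH
        solved += random.random() < p
    return solved

prior = [True, True, False, False, False]  # prior work solved 2 of 5 stages
trials = 10_000
for read in (True, False):
    mean = sum(stages_solved(read, prior) for _ in range(trials)) / trials
    print(f"read_literature={read}: mean stages solved = {mean:.2f}")
```

With these made-up numbers, starting afresh solves more stages on average because prior work stalled on most stages; with different numbers, reading wins. The point is only that the trade-off can be made precise.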

Interpersonal Strategic Perspectives

Thus far we have considered deliberate ignorance in the context of an individual having the option of seeing data or not, without that choice being known to others. As we discuss here, when the choice of accessing data (or not) is known to others, there can be very good reasons for deliberate ignorance.

Consequences of Commitment

Hilbe and Schmid (this volume) provide an example of a two-player "envelope game" where it is better to be ignorant than knowledgeable, so long as the state of knowledge is known to the other player. The game provides a nice example of how parameters can govern the outcome of the system. In some settings, it is better to be knowledgeable, in which case deliberate ignorance would not be expected. In others, deliberate ignorance should exist if players are not able to communicate findings to one another. If they are able to, however, "cheap talk" would benefit both players. In a particular range of scenarios, deliberate ignorance emerges as the best strategy even when the players can communicate with one another. Models that predict qualitatively different phenomena can be usefully tested. However, biological examples of such a system are not easy to envisage.

Deliberate Ignorance as a Signal of Condition

If individuals differ in the extent to which they rely on information, deliberate ignorance can serve as a costly signal (Spence 1973). In that case, an individual's public decision not to learn information may help individuals to distinguish themselves from others. To illustrate this point, consider an extension of the envelope game by Hilbe and Schmid (this volume), where there are two types of player 1: "favorable" players are generally more likely to have a low cooperation cost, whereas "unfavorable" players are more often in a high-cost environment. For some game parameters, the game has an equilibrium in which unfavorable players decide to learn the state of nature, whereas favorable players ignore it (Hilbe and Schmid, this volume). In this game, favorable players can thus use their ignorance as a signal that they can be trusted to cooperate. For this mechanism of deliberate ignorance to work, however, it is important that player 1's ignorance can be verified. If players could privately learn the state of nature, their deliberate ignorance would no longer serve as a costly signal.


The game is related to the handicap principle (Zahavi 1975), which is often used to explain why individuals choose to take seemingly unnecessary risks. The driving factor in each case is that different types of individuals pay different costs. Zahavi's handicap principle shows that high-quality individuals can signal their quality by taking risks, thus influencing the mate choice of others (to their benefit). In this envelope game, by signaling that they are going to remain deliberately ignorant, individuals can influence the choices of others, similarly to their benefit. In the handicap case, the high-quality individual takes the (somewhat unexpected) action of taking a risk; in the envelope game case, the "favorable" player 1 is advantaged by choosing the (somewhat unexpected) route of deliberate ignorance.

A similar signaling motive may explain other effects, such as why employers may choose not to monitor the work effort of their employees. From one perspective, monitoring should increase employee effort, because employees wish to avoid sanctions for shirking. From another perspective, employees may frame their relationship with management as a reciprocal relationship; in this case, monitoring may be interpreted as distrust and result in a reduction of effort. Note, however, that an employer might choose to fake their signal of whether they trust the employees, so the situation can be very complex. There is evidence that monitoring can reduce worker effort, but the effect does not always arise (Dickinson and Villeval 2008). When it does, deliberate ignorance of worker effort may result in increased worker effort. Kareev and Avrahami (2007) also show that deliberate ignorance on the part of an employer may help to motivate less able workers to compete for bonuses.

Choosing Whether to Know Payoffs to Others

Table 10.1 shows the short-term payoffs to oneself and another individual when choosing between two options, which we simplistically label "up" and "down" (based on Dana et al. 2007; see also Dana, this volume). Faced with knowledge of the situation, many players may choose "up," sacrificing one unit of payoff in the immediate term to show that they are willing to help others. In contrast, Table 10.2 shows payoffs when there is no conflict: choosing "down" will be best for both players. Suppose that an individual confronts one of these situations and has the option of learning which situation (Table 10.1 or Table 10.2) they are facing; a sketch of this choice follows the tables.

Table 10.1 Short-term payoffs in a situation with strong contrasts in payoffs to others (conflicting choice).

                    Payoff to self    Payoff to other
Choosing "up"             5                  5
Choosing "down"           6                  1

Table 10.2 Short-term payoffs in a situation with no conflicting choice.

                    Payoff to self    Payoff to other
Choosing "up"             5                  1
Choosing "down"           6                  5
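A minimal sketch of this choice uses the payoffs of Tables 10.1 and 10.2 together with an assumed guilt cost g; the 50/50 chance of each situation and the guilt term are our illustrative assumptions, not part of Dana et al.'s design:

```python
# Expected payoffs of an informed vs. a deliberately ignorant dictator.
CONFLICT = {"up": (5, 5), "down": (6, 1)}     # Table 10.1
NO_CONFLICT = {"up": (5, 1), "down": (6, 5)}  # Table 10.2

def informed_choice_value(table, g):
    """Best payoff for an informed dictator who feels guilt g whenever
    the chosen option leaves the other player with the low payoff."""
    def utility(action):
        self_pay, other_pay = table[action]
        return self_pay - (g if other_pay == 1 else 0.0)
    return max(utility("up"), utility("down"))

def expected_value(informed, g, p_conflict=0.5):
    if informed:
        return (p_conflict * informed_choice_value(CONFLICT, g)
                + (1 - p_conflict) * informed_choice_value(NO_CONFLICT, g))
    return 6.0  # an ignorant dictator picks "down" and feels no guilt

for g in (0.5, 2.0):
    print(f"g={g}: informed={expected_value(True, g):.2f}, "
          f"ignorant={expected_value(False, g):.2f}")
```

For any positive guilt cost, the ignorant dictator's expected payoff of 6 weakly exceeds the informed dictator's, which is the moral-wiggle-room logic in miniature.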

If they are ignorant of the situation they face, they can choose "down" without hesitation and thus ensure their largest possible payoff. But knowing which situation they face may cause internal conflict. Arguably, then, individuals may choose to be deliberately ignorant in such a situation. This is an example of an individual exercising their "moral wiggle room."

Models of Moral Wiggle Room

Research on the topic of moral wiggle room suggests that people sometimes choose to remain ignorant of the consequences of their desired actions, specifically because they would feel obliged to behave better (i.e., more altruistically or in accordance with social norms) if they knew the consequences with certainty (d'Adda et al. 2018; Dana et al. 2007; Freddi 2017; Grossman and Van Der Weele 2017; Serra-Garcia and Szech 2018). Dana et al. (2007) found that people avoid information about the consequences of their choices for other people so that they can make the choice that is in their own monetary self-interest. Of course, they could make the self-interested choice even if they were to discover that it would harm others, but then they would feel guilt. Consistent with a desire to avoid the information, Dana et al. (2007) found that only 56% of dictators chose to reveal the recipient's payoff (i.e., from Table 10.1 or Table 10.2), and when information revelation was optional, more dictators chose the "selfish" payoff than when the recipient's payoffs were automatically revealed. Related experiments find that people go out of their way to avoid being asked for a donation, i.e., to be deliberately ignorant of the donation request (Andreoni et al. 2017; DellaVigna et al. 2012). Freddi (2017) finds that people avoid news articles about a refugee crisis as part of a psychological (intrapersonal) coping strategy to suppress guilt and escape the responsibility of helping to welcome refugees in one's own community. In another example, d'Adda et al. (2018) find that on hot days some people choose to remain deliberately ignorant of the costs of high air-conditioning use so they do not feel pressured to limit their own usage, thus allowing them to profess their ignorance if confronted by others.

Grossman and Van Der Weele (2017) propose a signaling model that combines preferences over material payoffs with an intrinsic concern for social welfare and a preference for a self-image as a prosocial actor. They assume that people vary in their degree of prosociality and in the importance they place on self-image. They describe a sequential game as follows:

•	Nature selects the level of social benefit associated with the prosocial action and the individual's type (i.e., how much the individual cares about the social benefit and how much the individual cares about his self-image as a prosocial actor).
•	The individual chooses whether or not to receive a signal informing him about the level of social benefit associated with the prosocial action.
•	The individual chooses whether or not to take the prosocial action.
•	The individual forgets his actual type, goes back to his prior belief about the distribution of types, and updates his beliefs about his own type based on the action he took (or did not take) and the signal he got (or did not get) about the action's social benefit.

In the Perfect Bayesian Equilibrium of the game, there are some moderately prosocial individuals who choose not to receive the signal about the action's social benefit and who then choose not to take the prosocial action. When utility depends on the effects of actions on others, the situation can quickly get very complicated. Thus, a lot of work would be required to build a unified model that successfully incorporates "morality."

Deliberate Ignorance as a Coordination Device

In some cases, deliberate ignorance may be regarded as a way not to undermine a given equilibrium outcome (see also Hoffman et al. 2016). Consider, for example, two countries that face public pressure to intervene in a war zone, should there be evidence that one of the war parties uses chemical weapons, but for which acting unilaterally would be insufficient to resolve the conflict. To model such a scenario, suppose there is a first stage in which both countries can look for evidence of chemical weapons. In the subsequent second stage, the two countries decide whether to intervene based on the evidence found in the first stage. Suppose both countries agree on a strategy to intervene if and only if evidence is found, and expected payoffs are given by the matrix shown in Table 10.3. (For simplicity, in the case of acting unilaterally, the benefits of meeting public expectations are assumed to be counteracted by the failure to resolve the conflict, resulting in an overall payoff equivalent to that of having taken no action.) Then each country has an incentive not to report (and in fact not even to look for) evidence of chemical weapons. By deliberately ignoring evidence, they are able to coordinate on an equilibrium they both prefer; a small check of this equilibrium logic follows the table.

Table 10.3 Expected payoffs to row and column players, respectively, in a coordination game.

                    Intervention    No intervention
Intervention          30, 30             0, 0
No intervention        0, 0             50, 50
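The following sketch verifies the equilibrium logic of Table 10.3; the payoff numbers are those in the table, and the code itself is just an illustrative check:

```python
# Enumerate the Nash equilibria of the coordination game in Table 10.3.
ACTIONS = ("intervene", "ignore")  # "ignore" = do not look for/report evidence
PAYOFF = {  # (row action, column action) -> (row payoff, column payoff)
    ("intervene", "intervene"): (30, 30),
    ("intervene", "ignore"): (0, 0),
    ("ignore", "intervene"): (0, 0),
    ("ignore", "ignore"): (50, 50),
}

def is_nash(row, col):
    """A profile is Nash if neither player gains by deviating unilaterally."""
    r, c = PAYOFF[(row, col)]
    no_row_dev = all(PAYOFF[(a, col)][0] <= r for a in ACTIONS)
    no_col_dev = all(PAYOFF[(row, a)][1] <= c for a in ACTIONS)
    return no_row_dev and no_col_dev

for row in ACTIONS:
    for col in ACTIONS:
        if is_nash(row, col):
            print(f"Nash equilibrium: {(row, col)}, payoffs {PAYOFF[(row, col)]}")
```

Both symmetric profiles are Nash equilibria, and both countries prefer the one supported by deliberate ignorance, with payoffs (50, 50).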


Time Limits on Interactions

The prisoner's dilemma is a well-known game in which cooperation does not evolve even though everyone would do better if everyone cooperated. As a result, the repeated prisoner's dilemma (in which the prisoner's dilemma is played by the same players numerous times) has become a common framework for looking at situations in which cooperation will or will not emerge. Even with a fixed limit on the number of rounds that will be played, cooperation can evolve so long as players are sufficiently uncertain about how long another individual will cooperate (Kreps et al. 1982; McNamara et al. 2004). It is better for everyone not to know when cooperation will stop than for everyone to know, or for one individual to know and others to know that that individual knows. Consequently, deliberate ignorance can be the best choice even in time-limited repeated prisoner's dilemma games. Term limits for politicians may provide a real-world example: arguably, it is better not to know whether one will be reelected in order to be able to lay policy foundations on a longer-term basis. Stress-testing of banks is another: it is better for banks to agree beforehand that they will deliberately remain ignorant of which bank is the weakest, as knowledge of which is weakest would likely cause runaway selling of that bank. However, in all these cases, if it were possible to look at the information privately, an individual could still benefit from it. Thus, for deliberate ignorance to make sense as a functionally beneficial strategy, there has to be a potential signal to others of whether one has seen the data.

Bounded Rationality and Self-Deception

Deliberate ignorance can be closely related to self-deception in some cases. Some authors (e.g., Frank 1988; Trivers 2011a) argue that self-deception can be adaptive because it prevents an opponent from reading one's intentions from unintentional cues (caused by bounded rationality or automatic responses, such as blushing). If individuals do not know what they are going to do, then others are unable to infer it from their body language. One can see this phenomenon as an example of deliberate ignorance that is evolutionarily selected because of the strategic advantage it provides. Fights between male elephant seals may provide an example from animal behavior. Here, an advantage is derived if one opponent cannot infer another's intention to quit, or is even misled by cues ("if I know that the other will give up after the next n strikes, I may continue; otherwise I would give up immediately"). In a human context, consider a poker competition: a weak player might benefit from not looking at their cards before the first round of betting, especially when playing against an opponent well-versed in reading body language.


Refusing a Free Second Opinion from a Reliable Source

We conclude this section with the example of an individual who refuses a second opinion. This can occur even though the source of the potential second opinion is trusted as honest and of generally sound judgment. Imagine, for instance, that Alice is in a position to decide whether Bob will be employed at her company. Before any decision has been announced, one of Alice's trusted friends, Carol, approaches Alice and offers to give her opinion on Bob. Alice thanks Carol for the offer but turns her down. Why? Because Alice has already formed a very strong judgment about Bob (e.g., his references showed him to be a liar), so Carol's opinion will not influence Alice's decision. Alice may foresee that if Carol had a positive impression of Bob, then, having given Alice her opinion, Carol might subsequently feel offended by Alice's decision not to hire Bob. It therefore makes sense for Alice to avoid this scenario by deliberately avoiding whatever information Carol has about Bob. This is one of several cases presented here that we have left in the form of an intuitive example; it could obviously be abstracted to form a model, with costs and benefits relating to each of the individuals and their actions.

The benefit of deliberate ignorance in this case is not about improving one's own actions, nor about altering the behavior of others to increase one's reward in the immediate term. Instead, deliberate ignorance acts as a signal to others not to judge one's current actions harshly, and is thus a case of deliberate ignorance being chosen to affect the longer-term actions of others under different scenarios that are otherwise unrelated to the situation involving deliberate ignorance.
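This intuition can be written down in a few lines; the probabilities and costs below are made-up numbers purely for illustration:

```python
# Alice's expected value of listening vs. declining Carol's second opinion.
P_CHANGE = 0.0         # Carol's opinion cannot alter Alice's fixed decision
VALUE_OF_CHANGE = 2.0  # value of a better decision, if it did change
P_OFFENSE = 0.5        # chance Carol's opinion is positive and she takes offense
COST_OFFENSE = 1.0     # long-term relationship cost of visibly ignored advice

ev_listen = P_CHANGE * VALUE_OF_CHANGE - P_OFFENSE * COST_OFFENSE
ev_decline = 0.0
print("listen:", ev_listen, "| decline:", ev_decline)  # declining is better
```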

Deliberate Ignorance through Societal Dynamics

When there are population feedbacks and spillover effects, the frequency of deliberate ignorance in a population will depend on social dynamics as well as on individual psychology. Models that only address individual processes, therefore, do not suffice to create a complete understanding of the phenomenon, nor to plan policy interventions. Here we examine general population models and then turn to a straightforward model that includes deliberate ignorance and spillover effects.

Population-Level Models

Population-level models consider the interactions between many individuals and are mainly concerned with analyzing the consequences of those interactions at the population level. Generally, they look for emergent properties of the system, which can be hard to derive from theory without the model doing the work for us. In particular, population-level models determine the time evolution (or equilibrium states) of certain (population-level) quantities,


X = [x1, …, xn], of the considered system. The formal description of this temporal change in X can occur in some applications through an analytical model (e.g., the Lotka-Volterra models for the interactions between prey and predator species), whereas in others, simulation frameworks are used. Regardless of the framework used, population-level models make explicit assumptions about (a) the size of the population of individuals, (b) the properties of the individuals, (c) the population structure, (d) the interaction dynamics leading to the update of the variables of interest, and (e) demographic processes.

Let us now consider the case of the diffusion of an innovation in a heterogeneous population and explore whether patterns that resemble deliberate ignorance, or ignorance, can emerge as a by-product of the interaction dynamics. We assume a finite population of N heterogeneous individuals. Each individual, i, is characterized by its individual attributes, defined as a vector θi of cultural features representing, for instance, a different kind of taste or behavior (Axelrod 1997), and by its decision whether or not to have adopted the innovation yet. Further, social interactions (and consequently information flow) between individuals are represented by networks; that is, collections of nodes represent individuals, and links connecting pairs of nodes represent social relations (e.g., Watts 2002). Additionally, individuals possess a homophilistic bias. Very generally, homophily (in particular "choice" homophily) is the tendency of individuals with similar traits (e.g., physical, cultural, and attitudinal characteristics) to interact with each other more than with people with dissimilar traits (Centola et al. 2007; Lazarsfeld and Merton 1954; McPherson and Smith-Lovin 1987; McPherson et al. 2001). This kind of homophily can be modeled by allowing the social network to evolve as a function of cultural similarities and differences between individuals; for a detailed analysis, see Centola et al. (2007). Depending on the chosen model parameter values, the links between individuals may be arranged in such a way that culturally similar individuals tend to be connected more frequently, forming clusters. Networks of this kind are called correlated networks, and the degree of correlation can be interpreted as a measure of the strength of the homophilistic bias.

Now, if an innovation is introduced into such a network (e.g., by a small number of innovators), then depending on the chosen update rule, the adoption dynamic can be very different from well-connected situations. In the extreme, we can imagine that the innovation diffuses through only parts of the population because individuals are surrounded only by others of their own cluster (i.e., individuals similar to them), while knowledge about the innovation may only be present in another cluster. In other words, the homophilistic bias may cause individuals to choose their neighbors selectively; this, in turn, could result in information being received almost exclusively from individuals who are similar, creating a barrier to other sources of useful information present elsewhere in the population. At first blush, this approach produces results that appear similar to cases where individuals choose to be ignorant.
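The following is a deliberately crude sketch of such cluster-limited diffusion. It is not Centola et al.'s model; the network construction, contagion rule, and all numbers are invented for illustration:

```python
# Diffusion of an innovation across two homophilous clusters.
import random

random.seed(0)

def cluster_b_adopters(cross_links, n=50, steps=2000):
    # Nodes 0..n-1 form cluster A; nodes n..2n-1 form cluster B.
    edges = []
    for cluster in (list(range(n)), list(range(n, 2 * n))):
        edges += [(random.choice(cluster), random.choice(cluster))
                  for _ in range(4 * n)]          # dense within-cluster links
    edges += [(random.randrange(n), random.randrange(n, 2 * n))
              for _ in range(cross_links)]        # sparse cross-cluster links
    adopted = {0}  # the innovation starts with one individual in cluster A
    for _ in range(steps):
        u, v = random.choice(edges)
        if u in adopted or v in adopted:  # simple contagion along a random edge
            adopted.update((u, v))
    return sum(1 for node in adopted if node >= n)

for cross in (0, 1, 10):
    print(f"cross-cluster links={cross}: adopters in cluster B = "
          f"{cluster_b_adopters(cross)}")
```

With no cross-cluster links the innovation can never reach cluster B; adding cross-cluster links can let it through, illustrating how homophily alone can leave part of the population ignorant without anyone choosing ignorance.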


However, there is a difference between deliberate actions (with a consequence of ignorance) and deliberate ignorance. In this case, although the actions of individuals result in ignorance, it does not benefit them to be ignorant, so they are not deliberately ignorant.

A Model of Societal Dynamics Involving Deliberate Ignorance

To provide a minimalistic example of a population model that includes deliberate ignorance as a spillover effect, consider a population in which people can choose whether or not to acquire some information, such as the contents of their Stasi file (see Ellerbrock and Hertwig, this volume). Assume that there are three possible states for an individual: ignorant (I), knowledgeable (K), or deliberately ignorant (D) of the contents of their file. Individuals start off ignorant (I) and sometimes consider looking in their files. As they do so, they consider the advice of another person. Advice from ignorant, knowledgeable, and deliberately ignorant individuals has different effects on the probability that a focal individual chooses to open their file (and become knowledgeable, K) or not (and become deliberately ignorant, D). As the proportions of K and D change, so too do the rates of change, because people receive, on average, different advice.

Given this setup, what do you think happens? Does K or D dominate the other, depending on the details? Can D eventually replace K? Will D increase but ultimately die out? Or do K and D tend to coexist? To answer these questions, we express the above in mathematically precise terms. We can represent this model with three differential equations, one each for I, K, and D. Suppose that individuals of type I consider their files at a rate p, and when considering their file, they first meet another member of the population at random. We assume that, in the absence of advice (i.e., having met another individual of type I), individuals of type I become D with probability r, or K with probability 1 – r. When receiving advice from a K individual, the probabilities are instead q (D) and 1 – q (K). When receiving advice from a D individual, the probabilities are s (D) and 1 – s (K). Finally, we allow some rate of population turnover, so that new I individuals appear over time (note that this differs from the real case with Stasi files). This means that at a rate f, K and D individuals leave the population and are replaced by new I individuals. Altogether, with I, K, and D denoting population proportions (so that I + K + D = 1), these assumptions imply these three differential equations:

dI/dt = f (K + D) − pI,	(10.15)
dK/dt = pI [(1 − r)I + (1 − q)K + (1 − s)D] − fK,	(10.16)
dD/dt = pI [rI + qK + sD] − fD.	(10.17)

This system has only one interesting steady state, given by


I* = f / (f + p),	(10.18)
D* = pI* [rI* + q(1 − I*)] / [f − pI*(s − q)],	(10.19)
K* = 1 − I* − D*.	(10.20)

There is typically coexistence of K and D individuals: K and D at steady state both tend to be greater than zero. In hindsight, this is perhaps obvious. Consider, for example, the case where p = 0.1, q = 0.45, r = 0.4, f = 0.02, and s = 0.7. Under these parameter values, individuals who do not receive advice tend to open their files 60% of the time. Individuals who encounter K also tend to open their files 55% of the time. But individuals who encounter D open their files only 30% of the time. This results in a steady state with more D than K, even though K initially increases more quickly and a majority (55%) of I individuals who meet K also choose to open their files. Figure 10.2 shows the population dynamics for this example; a minimal simulation sketch follows the figure.

This model is perhaps the simplest model that can demonstrate spillover effects of deliberate ignorance, and the simplest model is usually the right place to start. More detail could be incorporated to consider additional effects, such as media amplification or additional population structure.

Figure 10.2 Population dynamics over time (proportion of the population in each state): after some time, a stable proportion of individuals are deliberately ignorant. Simply ignorant is denoted by I, K stands for knowledgeable about their file, and D signifies deliberately ignorant.
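A minimal forward-Euler integration of equations (10.15)-(10.17) with the parameter values above reproduces the qualitative pattern of Figure 10.2; the step size, horizon, and all-ignorant initial condition are our illustrative choices:

```python
# Integrate the I/K/D dynamics of equations (10.15)-(10.17).
p, q, r, f, s = 0.1, 0.45, 0.4, 0.02, 0.7
I, K, D = 1.0, 0.0, 0.0  # everyone starts out simply ignorant
dt, horizon = 0.01, 300.0

t = 0.0
while t < horizon:
    to_K = p * I * ((1 - r) * I + (1 - q) * K + (1 - s) * D)  # flow I -> K
    to_D = p * I * (r * I + q * K + s * D)                    # flow I -> D
    dI = f * (K + D) - (to_K + to_D)                          # turnover replaces K, D
    I, K, D = I + dt * dI, K + dt * (to_K - f * K), D + dt * (to_D - f * D)
    t += dt

print(f"I* = {I:.3f}, K* = {K:.3f}, D* = {D:.3f}")  # D* ends up above K*
```

The proportions settle near I* = f/(f + p) = 1/6, with D* above K*, matching the coexistence result stated above.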


More detailed psychological models could also replace the static p, q, r, s parameters, producing more subtle feedback between the population and individual choices. It is important to note that, while the parameters q, r, s embody background knowledge of deliberate ignorance at the psychological level, the population dynamics themselves are quite general. The same equations could apply to many social influence scenarios. This generality of models (abstractions typically apply to more than the contexts that inspire them) is commonplace and can help us appreciate how deliberate ignorance connects to broader phenomena in the study of the social dynamics of belief. Quite different psychological mechanisms may produce quite similar population dynamics.

Discussion

In this chapter we have identified several simple theories and models of deliberate ignorance, along with numerous descriptions of situations that could easily be abstracted into models where deliberate ignorance arises. While there are numerous models for individuals, we are aware of very few population-level models where deliberate ignorance emerges in the population through the model dynamics.

We did not discern a general (single) unifying framework for deliberate ignorance, as there can be different fundamental drivers of the phenomenon. Deliberate ignorance can be caused by the expected value of having the information itself (e.g., through automatic mental processes, meaning that such information will likely misguide actions, or for hedonic reasons) or through signaling effects on others (by them knowing that information has or has not been received). We regard this as an important distinction because it is possible for automatic processes (and the like) to evolve into separate systems, and thus for an individual to gain a benefit by no longer being deliberately ignorant. In contrast, when the benefit is one of signaling to others, there can be no such (future) adaptational "improvement." While some models would say simply not to bother collecting, or processing, particular information (e.g., when the cost of acquiring the information is too great), other models identify when it is best to actively avoid information. The extent to which we need different models for different types of deliberate ignorance, however, remains an outstanding question.

In real-world situations, there may be multiple causes of deliberate ignorance (e.g., relating to strategic, hedonic, and automatic mental processes). For instance, in the traditional marriage market in India, updated by digital media, parents advertise on marriage websites and construct a consideration set from the responses received. Assume, for instance, that a son's parents are looking for a wife for him. A consideration set may contain between two and five potential wives. The parents then arrange a meeting with the parents of one of the girls, typically at a restaurant, where the two, who may never have met before,


can talk with each other. Afterward, the parents ask their son whether he agrees to marrying the girl. If he says yes, and the girl also agrees, the search is over. Otherwise, the procedure is repeated with the next girl. The overall situation is very similar to the "secretary problem" in optimal stopping theory. However, this situation is more complex in that, rather than the quality of the next potential partner being random, pre-sorting by the parents has occurred before each decision point by the son. In this process, the young man and the young woman are each highly ignorant about the choice set they have. In the extreme, they agree to a future spouse based on a single meeting. Deliberate ignorance enters when young people choose to accept this procedure, rather than searching for themselves.

Much of this behavior is hard to understand from the perspective that knowing more is better. However, a large proportion of Indians accept this procedure and have reasons for doing so. Some hold that parents have more experience about what makes a good spouse (this relates to whether to use information; their own may be less reliable and mislead them). Some feel that being choosy and rejecting a candidate after a meeting would hurt that person, and thus some do not even want to meet the future wife and simply trust their parents (relating to hedonic biases). Some view searching themselves as a signal to parents that their judgments are not fully trusted or respected (this strategic aspect may have longer-term ramifications). In addition, the social norms that have developed in that society (arguably driven initially by the previous three aspects) alter the payoff structures associated with such actions (greater consternation on the part of parents, a greater likelihood of being judged by friends, and so on).

The information-gap belief-based utility model and the optimism model both predict that the choice to remain deliberately ignorant depends on affect (i.e., on how good or bad the beliefs would make a person feel). Different emotions can be similar in affect, yet different on other dimensions. For instance, sadness, fear, and disgust all produce negative affect but are very different from each other. Whether, empirically, people exhibit the same pattern of deliberate ignorance across beliefs that induce different emotions with similarly negative affect is an open question in need of further study.

The idea that it can be better in some cases to mask bad experiences, rather than to hold on to accurate memories of the past, seems relevant to deliberate ignorance, although in this case it is about becoming ignorant of one's own prior experiences. This is potentially very important, given the impact of some conditions (e.g., posttraumatic stress disorder) on individuals. It may be beneficial to have better models relating to this idea in the future. We also note that the majority of our discussion focused on whether to obtain information in particular settings, rather than on the conditions under which mental processing (or storage) of such information would not have evolved. For models relating to when learning is unlikely to evolve, despite being beneficial, see Trimmer and Houston (2014).


Finally, we note that discussion of what constitutes "deliberate" ignorance can be entertainingly problematic. Consider, for example, a plant that benefits from bet hedging with its seeds, relative to conditions for when to germinate. Suppose that, to germinate at different times, some seeds have wide pores (which readily respond to rain by germinating) and others have small pores (thus being more likely to wait). Does that constitute deliberate ignorance? The seed's shell is stopping rain "information" from triggering it, so its structure is keeping it ignorant of conditions. Arguably, this is not deliberate ignorance, as the seed itself is not making that choice. Now, what if an animal hedges its bets by producing offspring who differ in whether (or how often) they accept or avoid freely available information? This surely seems like deliberate ignorance when that individual is tested, but the offspring still have had that imposed upon them, just like the seeds. Further, what if, during development, an individual had the choice of which type of mental mechanisms to produce? One set of mechanisms would be more accurate if the environment changed (by having actively ignored the initial conditions); another set would do best under current conditions (by immediately absorbing information about the environment). Is the individual who chooses the set that will subsequently ignore that information being deliberately ignorant? It would certainly seem so. But what if their probability of choosing that set were already genetically set for them? One perspective is that for something to be "deliberate," some cost must be imposed by the deliberative process, as the action (or, in this case, the ignorance) may otherwise occur without being deliberative. Ultimately, though, agents perform actions, and natural selection then acts without any necessary distinction of what is, or is not, "deliberate." What constitutes "deliberate ignorance" may therefore always be blurry around the edges when real biological systems are addressed.

Acknowledgments

Particular thanks to Ulrike Hahn, Ralph Hertwig, Simon Gächter, Kristin Hagel, Stephan Lewandowsky, Pete Richerson, and Barry Schwartz. PCT was also partly funded by the German Research Foundation (DFG) as part of the SFB TRR 212 (NC³).

Norms

11 Harry Potter and the Welfare of the Willfully Blinded

Felix Bierbrauer

Abstract

This chapter presents a selective review of welfare economics. It argues that welfare analysts need to turn a blind eye to various aspects of individual preferences; otherwise, applications of welfare economics yield repugnant conclusions. This situation is illustrated with characters from Hogwarts and then related to the theory of optimal taxation. Individual decisions to ignore relevant information, and the welfare implications that result, are then examined, as is the suppression of information that may affect the behavior of others. Such acts may conflict with liberal values. In the presence of behavioral biases, however, they may still positively affect welfare, in line with Lipsey and Lancaster's (1956) theory of the second best. This reflection on the limits of welfare economics is not specific to the theme of deliberate ignorance. However, these limitations need to be at the center of any debate concerned with applications. Looking at the welfare implications of deliberate ignorance is not a straightforward application of the concept of externalities. It necessitates a reflection on the welfare implications of behavior that is potentially self-damaging. Moreover, it may conflict with liberal values and lead to repugnant conclusions unless there is a systematic reflection on what preferences to feed into welfare analysis.

Introduction

Let us consider a decision that affects the well-being of several individuals: Decision-relevant information is available. However, before the decision problem of interest can be addressed, another decision must be taken; namely, whether to use this information or to take the decision under ignorance.1

1 To be clear, in our discussion of this problem, we are not interested in a trade-off of the following sort: The information, if available, would improve the collective decision. Acquiring it, however, is costly. A cost-benefit analysis, therefore, must strike a balance between improved decision quality and the costs of information acquisition. Thus, we assume throughout that information is available at no cost.


To x ideas, consider the following problem: At Hogwarts, a cake of given size has to be divided between Harry and Draco. Albus, a benevolent planner, chooses the division. He contemplates an application of utilitarian principles. If both Harry and Draco were selsh, representing their preferences by the same concave utility function and maximizing a sum of utilities would give rise to an equal split of the cake. At rst glance, this seems to be an appropriate outcome. Harry, however, has altruistic feelings and derives utility from every piece received by Draco. Taking these feelings into account implies that Albus should assign a larger share to Draco. Deviating from a fty-fty split in this way has an opportunity cost, Harry’s forgone utility, as he is eating less, and a welfare gain, Draco’s extra utility from eating more, plus Harry’s extra utility from Draco’s extra utility. The latter implies that the welfare gain dominates. The conclusion that Draco should receive a larger share is, moreover, reinforced by Draco’s sensations of envy which imply that every piece assigned to Harry reduces Draco’s utility by more than just his forgone consumption utility. Thus, a consequence of utilitarianism seems to be that Harry is punished for his altruism and Draco is rewarded for his envy. Albus thinks twice. What information about preferences and utilities should be considered? What information should be ignored? This example illustrates the possibility that taking account of information on preferences makes it possible to achieve higher welfare levels, even when the consequences seem repugnant. More generally, the question is: What types of information should welfare analysis be responsive to and what types of information should welfare analysis disregard? We begin by examining this problem at a broad conceptual level, and then discuss more specically the welfare implications of information acquisition and information avoidance by individuals. To what extent are individual choices in this regard aligned with social welfare? To what extent would a welfare maximizer want to interfere with individual choice?

Blinding the Welfare Analyst

What Should Be the Domain of Welfare Analysis?

Let us start with an example from Coase (1960), in which an application of welfare economics seems uncontroversial. A fishery and a chemical plant reside along the same river. The chemical plant, which pollutes the water of the river, is upstream from the fishery. This negatively impacts the fish population and reduces the return to the fishery. This classic example is discussed in textbook treatments of the market failures that arise in the presence of externalities. Under laissez-faire, the chemical plant does not consider that its activities have negative consequences elsewhere. Emissions are then too large from a welfare perspective: Less chemical production or an investment in cleaner technology,


combined with monetary compensation for missed profits, would make both the chemical plant and the fishery better off.

An alternative example, by Sen (1970), involves a person's decision whether or not to read Lady Chatterley's Lover, a novel with explicit accounts of sexual acts, and another person who is a prude and feels that no one should read such a book. If the first person reads the book, this has adverse consequences for the second. Does such a negative externality warrant the same treatment as the example with the chemical plant and the fishery? The logic of the latter suggests that censorship by the second person, in combination with compensation to the first person for being censored, would make both better off. Sen uses this example to illustrate a conflict between the principles of welfare economics and liberal values that arises as soon as individuals have preferences over the private choices of others (e.g., what books to read, how to dress, whom to meet, and what opinions to express). Liberal principles require that such choices be respected. A stubborn application of welfare economics, by contrast, suggests that such choices should be corrected or moderated in return for compensation.

Goodin (1995) argues that the preferences used in welfare economics should go through a process of "laundering." Goodin is concerned with perverse or sadistic preferences, such as Draco's sensations of envy in the Hogwarts example. Welfare economics would be misguided if it took such preferences seriously. Draco would then be rewarded for his envy by receiving a bigger chunk of the cake than Harry. In Goodin's view, the censoring of preferences fed into welfare analysis does not require a paternalistic justification. It can often be justified by distinguishing what people really want, their true preferences, from the preferences that seem to be revealed by their choices: their revealed preferences. Revealed preferences may be shaped by temptations, short-term desires, or other sensations. Individuals might not want value judgments to reflect these sensations. In this case, there is a discrepancy between the normatively relevant true preferences and the revealed preferences.2

Note that the laundering of preferences can also be applied to address Sen's conflict of liberal values and welfare economics. In this case, laundering would have to remove preferences over the private choices of others. A liberal prude would admit that he feels annoyed by a fellow's reading of Lady Chatterley's Lover while not wanting this sensation to be used to justify interference with private choices. An illiberal prude might disagree. There is, however, no conflict between liberal values and welfare economics provided that, for the purposes of welfare analysis, all liberals agree that their preferences should be laundered from attitudes toward the private choices of others.3

2 For the conceptual distinction between revealed and true preferences, see Kahneman et al. (1997); for empirical applications, see Gruber and Köszegi (2001) and Allcott et al. (2019).
3 For the purposes of this chapter, a formal definition of what constitutes a private choice is not needed. This will be intuitively clear from the context.


The preceding discussion points to the question: What types of preferences should one consider in welfare analysis? This is a normative question that cannot be answered by an application of welfare analysis itself. Laundering preferences from sensations of altruism or envy may be a way to avoid repugnant conclusions, such as punishing Harry for his altruism and rewarding Draco for his envy. Dismissing preferences over the private choices of others is a way to avoid illiberal conclusions, such as the censoring of books. From the perspective of applied welfare economics, the deliberate ignorance of perverse or illiberal preferences may come with a cost. The welfare measure used in applied analysis would possibly take a higher value if dirty preferences were considered. The following examples from the welfare analysis of tax policy illustrate this point.

Welfare Economics of Taxation

In applied welfare analysis, there is a set of outcomes, and individuals have preferences over these outcomes. The outcomes and the preferences comprise the primitives of the problem. The problem then is to find the "right" outcome. Often this is taken to be the outcome that maximizes a utilitarian welfare measure. A more cautious approach—one that avoids interpersonal comparisons of utility—identifies a whole set of "right" outcomes, typically a set of Pareto optima.4 A field that makes heavy use of this framework is the analysis of tax policy in public finance. To give a feel for the relevance of the preceding discussion in applied work, let us explore various examples from this line of work.

The theory of optimal taxation applies welfare economics to the study of tax policy. The basic ingredients involve a government that uses tax policy to generate revenues and consumers who choose how much to consume, how much labor to supply, or how much to save. The choices of consumers are affected by the tax policy. A labor income tax affects the incentives to supply labor; a tax on capital income affects the return on savings. An optimal policy maximizes a utilitarian welfare objective by taking these behavioral responses of consumers into account.5 A well-known result is that taxes should follow an inverse elasticity rule: taxes should be high when behavioral responses, usually measured by the price elasticity of supply and demand, are low, and vice versa. It is an intuitive finding. If capital income is shifted abroad in response to taxation but labor income is not, then the tax on capital income should not be as high as the tax on labor income.
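A stylized numeric illustration of the inverse elasticity rule follows. Under the textbook Ramsey result for independent demands, optimal proportional tax rates are roughly proportional to the inverse of each good's price elasticity; the goods, elasticities, and the scale factor k (which in a full treatment would be pinned down by the government's revenue requirement) are invented here for illustration:

```python
# Inverse elasticity rule: tax rate on good i proportional to 1/e_i.
goods = {"bread": 0.3, "gas": 0.5, "champagne": 1.5}  # assumed elasticities e_i
k = 0.15  # assumed scale factor set by the revenue requirement

for good, elasticity in goods.items():
    print(f"{good:10s} elasticity={elasticity:.1f}  tax rate={k / elasticity:.1%}")
```

Low-elasticity goods attract the highest rates, which is exactly what creates the friction with private choices discussed next.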

The dening property of a Pareto optimal outcome is that moving away from it necessarily makes some people worse off. Modern analysis of this problem dates back to Ramsey (1927). A rich body of literature has rened this approach in various ways, with seminal contributions by Mirrlees (1971), Atkinson and Stiglitz (1976), Diamond (1998), and Saez (2001).


If demand for necessities such as bread or gas is less price sensitive than the demand for luxuries, then the tax on bread should be higher than the tax on champagne. These examples raise distributive questions, yet even if these are taken into account, the logic of the inverse elasticity rule remains intact, all other things being equal: When two goods are consumed equally by the rich and the poor, the one with the lower elasticity of demand should be taxed at a higher rate (see, e.g., Diamond 1975). In any case, such taxes interfere with the private choices of individuals. If demand for books such as Lady Chatterley's Lover or The Satanic Verses were less price sensitive than the demand for more "respectable" types of literature such as Hamlet or The Koran, the logic of optimal tax theory would suggest having higher taxes on the former and lower taxes on the latter. Hence, another round of laundering may be needed to avoid repugnant or illiberal conclusions from the application of optimal tax theory. The consequence of such laundering, however, is that the resulting tax system is not optimal from the perspective of a welfare measure that is based on the preferences that individuals reveal through their market behavior. Just as an insurance company loses profit when nondiscrimination requirements exclude different premia for men and women, a welfare maximizer has to live with the fact that laundering precludes reaching welfare levels that would otherwise be attainable.

From the perspective of practical tax policy, having different tax rates for different types of books is a contrived example. There are, however, more plausible implications of optimal tax theory that raise similar issues. There is a rich literature on the optimal taxation of couples (for seminal references, see Boskin and Sheshinski 1983; Kleven et al. 2009). These studies show that a differential taxation of primary and secondary earners in a couple is desirable from a welfare perspective. This finding combines the logic of the inverse elasticity rule with the empirical observation that the labor supply of females, who are more often in the role of the secondary earner, is more tax sensitive than the labor supply of males. Thus, an optimal tax system should apply different tax rates to the primary and secondary earners in a couple. In particular, income due to the secondary earner should be taxed at a lower rate. Obviously, such differential taxation interferes with a private choice. It affects the assignment of roles in a couple, in particular the decision of who should contribute how much to the family's income. With a progressive income tax system, income splitting6 is the only way to have a couple's tax burden depend solely on their overall income, irrespective of who contributed how much. Hence, an attempt not to interfere with private choices implies that the inverse elasticity logic is not applied, with the consequence that potential welfare gains remain on the table and that female labor supply is discouraged more than it would otherwise be.

The treatment of altruism plays a prominent role in the theory of capital and inheritance taxation. Results on the desirability of capital taxes crucially

6 Let y_p be the income of the primary earner and y_s the income of the secondary earner. Under income splitting, the tax burden of a couple with tax schedule T, as a function of the two incomes, is 2T((y_p + y_s)/2).
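To see why splitting makes the burden depend only on total income, take a hypothetical convex schedule, \(T(y) = y^2/200\) (numbers invented for illustration). A couple earning \((y_p, y_s) = (80, 20)\) pays

\[ T(80) + T(20) = 32 + 2 = 34 \quad\text{under individual taxation, but}\quad 2\,T\!\Big(\tfrac{80+20}{2}\Big) = 2\,T(50) = 25 \]

under splitting, exactly what a \((50, 50)\) couple pays. The split of earnings within the couple no longer affects the tax bill, which is precisely why the lower rate on the secondary earner prescribed by the inverse elasticity logic cannot be implemented.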


In the analysis of Farhi and Werning (2010), altruism implies that a bequest is a source of utility both for parents and children. A bequest subsidy is warranted to make sure that the positive externalities from leaving a bequest are taken into account. In Piketty and Saez (2013), by contrast, the degree of altruism varies from generation to generation in an unpredictable way. This is shown to imply that a redistributive taxation of bequests is desirable: the redistribution from lucky children with high bequests to unlucky ones is shown to be part of a welfare-maximizing policy. Needless to say, such an analysis only makes sense on the assumption that altruism is a legitimate ingredient of welfare analysis. The conclusions on the desirability of bequest taxes and subsidies would not survive a laundering of preferences from altruism.

These examples demonstrate the difficulty of deciding which preferences to feed into welfare analysis. A naive use of revealed preferences may give rise to repugnant or illiberal conclusions. Forcing the welfare maximizer to turn a blind eye (thereby exercising deliberate ignorance) to the dirty or private aspects of individual preferences may be an appropriate remedy. As seen in our discussion of tax policy, the extent to which this is done can drastically affect the policy implications of “applied work.”

Internalities

The Coasean bargaining example involving the chemical plant and fishery is one of externalities. One firm pursues its economic interests at the expense of another one. Welfare economics stipulates that such externalities must be considered and, moreover, that doing so in an appropriate way will make both firms better off. This logic has been extended to address internal conflicts that individuals may have. Self-control problems are a prominent example. An individual may have a long-term goal of leading a healthy life. In the short term, however, the individual is confronted with temptations such as alcohol, cigarettes, or unhealthy food. Giving in to such temptations damages the individual’s long-run goals. The literature often refers to such self-damaging behavior as creating internalities. Applied work in optimal tax policy has discussed corrective taxes that address such internalities. For instance, O’Donoghue and Rabin (2006) characterize “optimal sin taxes” that mitigate self-damaging behavior.7

Public policy that addresses internalities interferes with the private choices of individuals. This raises the question whether it provokes the type of conflict between liberal values and welfare economics illustrated by Sen’s Lady Chatterley example. In that example, a problem arises as one person has preferences over the private choices of another person. Here, the public policy maker has preferences over the lifestyle of individuals (e.g., their drinking and smoking habits).

7 Taxes are not the only instrument that can be used to address internalities. A prominent alternative is nudges (Thaler and Sunstein 2008; Mariotti et al. 2018).


Isn’t this the same kind of problem? This question has spurred controversies (see Gul and Pesendorfer 2008; Loewenstein and Haisley 2008). For the proponents of such policies, the answer is clearly “no,” provided that the policy maker does not pursue its own agenda but has preferences that are aligned with the individuals’ long-term goals. The agenda on “soft” or “liberal” paternalism (for the best-known example, see Thaler and Sunstein 2008) focuses on situations where individuals can be enabled to behave in accordance with their long-run goals, without harming others who do not suffer from the same kind of self-control problem. In line with this program, O’Donoghue and Rabin (2006) look at a population of smokers who differ in the intensity of their self-control problems. Some are heavy smokers and have pronounced self-control problems; others are occasional smokers and have self-control problems that are not as severe. O’Donoghue and Rabin show that a sin tax may, nevertheless, benefit all smokers: heavy smokers like the tax as it helps with the self-control problem, and the revenue that is generated can be used to compensate light smokers, who would otherwise be harmed by the sin tax.

The analysis also points to the limits of liberal paternalism. A Pareto-improving sin tax is possible only if there is a one-to-one relation between the number of cigarettes smoked and the intensity of the self-control problem. If one introduces heavy smokers with no self-control problems to the system, the possibility of a Pareto-improving sin tax is gone. In this case, one has to make rational smokers worse off when one attempts to help the smokers with self-control problems. From the perspective of the rational smoker, this is akin to an illiberal interference with a private choice.
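The mechanics can be made concrete with a minimal numeric sketch in Python (invented quadratic utility and parameters, not O’Donoghue and Rabin’s actual model). A smoker with present bias beta chooses consumption x to maximize a·x − x²/2 − beta·h·x − t·x, where h is the delayed health harm, so x = a − beta·h − t; tax revenue is rebated lump-sum:

```python
def chosen_x(a, h, beta, t):
    """Consumption chosen under present bias beta and unit tax t."""
    return max(0.0, a - beta * h - t)

def long_run_welfare(x, a, h, t, rebate):
    """True welfare: full harm h counted, tax paid, lump-sum rebate added."""
    return a * x - x**2 / 2 - h * x - t * x + rebate

smokers = {
    "heavy (beta=0.5)": dict(a=12.0, h=6.0, beta=0.5),  # strong self-control problem
    "light (beta=0.9)": dict(a=8.0, h=2.0, beta=0.9),   # mild self-control problem
}

for t in (0.0, 2.0):
    xs = {name: chosen_x(p["a"], p["h"], p["beta"], t) for name, p in smokers.items()}
    rebate = t * sum(xs.values()) / len(xs)  # revenue returned equally, lump-sum
    print(f"tax = {t}:")
    for name, p in smokers.items():
        w = long_run_welfare(xs[name], p["a"], p["h"], t, rebate)
        print(f"  {name}: x = {xs[name]:.2f}, long-run welfare = {w:.2f}")
```

With these invented numbers, both types enjoy higher long-run welfare at t = 2 than at t = 0 (13.50 rises to 14.70 for the heavy smoker, 17.98 to 19.18 for the light one), a Pareto improvement. Adding a heavy smoker with beta = 1 would break the result, as noted above, since the tax would distort a consumption choice that was already optimal for him.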

Blinding Oneself

Let us now turn from the issue of what information a welfare maximizer should ignore to the information that individuals do ignore. A strand of behavioral research has investigated circumstances under which individuals take decisions while deliberately ignoring decision-relevant information (for a survey, see Golman et al. 2017; Hertwig and Engel, this volume, 2016). Here, our focus is on the welfare implications of such information avoidance. We will go through some prominent examples and discuss the criteria developed above, rendered schematically after this list:

• When a person engages in information avoidance, does this give rise to externalities (positive or negative effects on others) or internalities (positive or negative effects on that person)?
• If so, are mechanisms in place to ensure that these are considered?
• If not, is information avoidance a private choice that should not be the subject of welfare analysis?
• If not, are “dirty” preferences at play that should be removed from welfare analysis?
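As flagged above, the checklist can be rendered as a decision procedure (the field names and verdict strings are invented shorthand for these criteria, not a method the chapter proposes):

```python
def welfare_verdict(case: dict) -> str:
    """Walk the chapter's four questions in order and return a verdict."""
    if case["externalities_or_internalities"]:
        if case["internalizing_mechanisms_in_place"]:
            return "effects already considered: standard welfare analysis applies"
        return "uninternalized effects: candidate for corrective policy"
    if case["private_choice"]:
        return "private choice: exclude from welfare analysis"
    if case["dirty_preferences"]:
        return "launder the 'dirty' preferences before the analysis"
    return "no special treatment indicated"

# The mediocre teacher who avoids evaluations: harm to students,
# and nothing in place that internalizes it.
teacher = {
    "externalities_or_internalities": True,
    "internalizing_mechanisms_in_place": False,
    "private_choice": False,
    "dirty_preferences": False,
}
print(welfare_verdict(teacher))  # uninternalized effects: candidate for corrective policy
```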

Let us consider climate change denial. Climate change is one of the most pressing problems currently facing humanity. It is also a prime example of an externalities problem: the CO2 emissions of past and current generations have drastic consequences for younger and future generations. Classic welfare economics stipulates that such problems be addressed using corrective taxation or quantity controls. Yet taking these measures, which would be effective in mitigating the course of climate change, is politically controversial. Some opponents of such policies even deny the existence of a problem that needs to be solved. In other words, they exercise deliberate ignorance of the overwhelming evidence on climate change.

The Coasean example of the fishery and chemical plant rests on the premise that all involved accept the description of the problem. The calculus of welfare economics is applicable only if both parties agree that the emissions of the chemical plant are harmful to the fishery. If, for instance, the fishery denies this, there is no point in having a cost-benefit analysis determine the optimal level of emissions reduction, accompanied by compensation payments to ensure that both parties are better off. Thus, a deliberate ignorance of the harm caused by emissions is equivalent to denying that policies to address this problem can be justified with an appeal to welfare.8 This raises the question whether welfare economics can be applied in the face of such deliberate ignorance. From the perspective of those who deny climate change, public policies that seek to address it are unjustified and paternalistic. Should their welfare still be considered when such policies are formulated? If so, which preferences should enter the cost-benefit calculation: the preferences that are articulated in the political process or a laundered version that no longer contains traces of deliberate ignorance?

Ignorance of Performance Evaluations

Climate change denial has a political motivation. As another example, consider a teacher who does not want to look at teaching evaluations for fear of a negative outcome. Here, the motivation is more personal: the desire to retain a positive self-image. Still, there are externalities: a mediocre teacher’s reluctance to explore ways of improving his teaching is harmful to his students. Golman et al. (2017) argue that the hedonic consequences of information avoidance need to be considered. This concrete example raises the question whether the teacher’s hedonic utility from keeping a positive self-image should be weighed against the benefits afforded to students from improved teaching.

8 According to the impossibility result by Myerson and Satterthwaite (1983), efficient Coasean bargaining is not possible if the intensity of the externalities problem cannot be objectively verified. If only the fishery knows how much harm is caused by emissions and only the chemical plant knows how costly it is to avoid them, efficiency is out of reach.


The alternative perspective is that a desire to keep an unjustified positive self-image is a “dirty” preference that should be removed from a cost-benefit analysis of additional training.

Reluctance to Test for Diseases

Hertwig and Engel (this volume, 2016) report the case of James Watson, who had his genome sequenced but chose to remain ignorant about his predisposition for Alzheimer disease. This is an example of a private choice, a choice that affects the welfare of one person and possibly his close relatives, or at least should be treated as such. Remember the lesson from Sen’s Lady Chatterley example: preferences over the private choices of others have to be removed from welfare analysis; otherwise, liberal principles conflict with welfare analysis. If James Watson’s decision is not treated as a private choice, what else should be? If the disease were infectious and if the risk of infecting others could be reduced (e.g., by a vaccination), the conclusion would, of course, be different: externalities would enter the picture. A welfare analysis that weighs the personal costs of acquiring unpleasant information against the health risks of others might appear quite reasonable.

Blinding Others

The previous examples involve individuals who blind themselves and, by doing so, potentially affect others negatively (e.g., those who deny climate change, teachers averse to external evaluations). We now turn to the deliberate blinding of others.

Motivated Beliefs

Bénabou and Tirole (2006a) present a model of motivated beliefs in which individuals suppress unfavorable information to handle cognitive dissonance. Specifically, individuals have a desire to believe that the world is just, that those who work hard or invest in their human capital can reap the rewards and become financially better off. At the same time, individuals are confronted with the evidence that social mobility is imperfect, that economic inequality tends to persist over generations, and that hard work does not necessarily pay off. There is evidence that individuals bias their perceptions of social mobility against this evidence and instead remain overly optimistic. They stick to the American dream despite the facts that point to the contrary (Alesina et al. 2018). In the model of Bénabou and Tirole (2006a), the suppression of this unfavorable evidence has a benefit: it keeps individuals going, such that they invest more in human capital than they otherwise would. This positive effect is due to the assumption that individuals also suffer from a present bias. Educational effort therefore tends to be inefficiently low: individuals give too much weight to the immediate costs of acquiring human capital and too little weight to the higher future income that results from the investment. A suppression of unfavorable information on the returns to education can thus mitigate an individual’s tendency to procrastinate.
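A stylized quasi-hyperbolic sketch shows the mechanism (invented numbers; this is not Bénabou and Tirole’s model itself). An agent with present-bias parameter \(\beta < 1\) undertakes the investment iff \(\beta R \ge c\), where \(c\) is the immediate effort cost and \(R\) the future return, while investing is efficient iff \(R \ge c\). With

\[ c = 6,\quad R = 10,\quad \beta = 0.5: \qquad R \ge c \;\text{(efficient)}, \quad\text{but}\quad \beta R = 5 < 6 = c \;\text{(not undertaken)}. \]

If parents instill the belief \(\hat{R} = 13\), then \(\beta \hat{R} = 6.5 \ge 6\) and the child invests: one distortion (the inflated belief) offsets another (the present bias). This is the second-best logic discussed next.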


In their preferred interpretation of the model, Bénabou and Tirole take an intergenerational perspective. Parents tell their children about the returns to effort. The children, in turn, choose how much effort to exert when going to school. Thus, parents shield their children from unfavorable information on the returns to effort in an attempt to overcome their laziness. What are the welfare implications of these choices? Are parents doing harm to their children? The answer would be “yes” if there were no present bias; that is, if the children did not place too much weight on immediate gratification and too little on the long-term returns of educational effort. In this case, children who become victims of their parents’ propaganda would invest more than is in their own interest. With the present bias, however, the parents’ indoctrination may be regarded as a second-best alternative, so that the children are better off with it.

This example illustrates a more general lesson from what is known as the theory of the second best (Lipsey and Lancaster 1956). With distortions already in place, adding another distortion may have beneficial effects for welfare. A welfare analysis of deliberate ignorance might therefore be misguided if it focuses solely on one type of deliberate ignorance in isolation. Discovering that deliberate ignorance serves a useful purpose may require evaluating it against the background of the entire menu of individual biases relevant for the application at hand.

Manipulating the Salience of Taxes

Positive welfare effects of blinding others have also been documented in the context of tax policy. Chetty et al. (2009) report on a field experiment that involved a manipulation of price tags in U.S. supermarkets. The standard is a price tag that does not include sales taxes; the manipulated price tags highlighted the tax-inclusive price instead. Chetty et al. found that the manipulation triggered a behavioral response: fewer items were sold. Their work documents, however, that consumers are well informed about sales taxes. Thus, the manipulated price tags did not provide new information; they only made available information more visible. This visibility had consequences: consumers bought more if the information on taxes was suppressed.

The conventional perspective in public finance is that any sales tax has an efficiency cost. Such a tax drives a wedge between the prices paid by consumers and the prices received by producers. As a consequence, gains from trade are not exhausted. A consumer who is willing to pay ten but faces a tax-inclusive price of eleven will thus not purchase the product. If producers are willing to sell for nine, there are gains from trade between the producer and the consumer.


Those gains would be realized if there were no tax, but not with the tax. The forgone benefits of such transactions constitute the efficiency costs of taxes. How is this logic affected by the behavioral responses to the salience of taxes? Chetty et al. (2009) assume that the demand with tax-inclusive prices reflects true preferences. Thus, individuals overconsume when taxes are not salient. This overconsumption, in turn, helps to mitigate the efficiency costs of taxation. This is another instance of a second-best logic, one that combines a behavioral bias with an inefficiency that also prevails with rational agents: the distortionary effects of taxation.9
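Writing the chapter’s one-unit example out explicitly (a stylized calculation; who bears what is my bookkeeping, not the chapter’s):

\[ v - c = 10 - 9 = 1 > 0, \qquad\text{but}\qquad v = 10 \;<\; 11 = c + t \quad (t = 2). \]

A consumer who attends to the tax declines a trade that would raise total surplus by 1; the tax itself is only a transfer. A consumer who reacts instead to the pre-tax price of 9 buys: she personally loses \(11 - 10 = 1\), the government collects 2, and the surplus of 1 is realized. In this single-unit rendering, low salience recovers gains from trade that a salient tax destroys, subject to the caveat in footnote 9.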

Concluding Remarks

Applications of welfare economics require principled decisions on which types of preferences to use in the analysis. Taking account of sensations such as envy or altruism may give rise to repugnant conclusions. Incorporating preferences over the private choices of others may clash with liberal values. Thus, to be relevant, welfare analysis needs to turn a blind eye to certain aspects of individual preferences. Welfare analysis must also account for decisions by individuals to ignore information that is readily available, or to suppress information that would otherwise be available to others. Analysis of such choices faces the difficulty of delineating the proper domain of welfare economics: Should welfare analysis take the preferences of those who deny climate change into account or ignore them? Should genetic tests for health risks be treated as a private affair that is not subjected to welfare analysis?

An interesting line of recent research looks at related questions from an empirical perspective, by trying to elicit the preferences that individuals want to be factored into welfare analysis. For instance, Weinzierl (2017) reports that individuals demand a laundering of preferences from sensations of envy. In this study, he asked individuals to assume the perspective of a policy maker and confronted them with two situations. In the second situation, incomes are, for everybody, higher than in the first; inequality, though, is also higher, and consequently the overall utilitarian welfare is lower. Even then, a majority of the respondents chose the second situation over the first.10 Pursuing this avenue further might prove useful for future research on the welfare implications of deliberate ignorance.

9 The analysis is, however, sensitive to the assumption that individuals overconsume when taxes are not salient. Consider the alternative assumption that true preferences correspond to the demand that is observed when consumers see the price tags they are used to. In this case, making taxes more salient will aggravate the tax distortions.
10 Relatedly, Charité et al. (2015) analyzed whether individuals support the use of welfare measures that respect the reference-point dependence of preferences, and Weinzierl (2014) investigated the support for alternatives to utilitarian welfare maximization.


Acknowledgments

Background paper prepared for the Ernst Strüngmann Forum on deliberate ignorance. I benefited from conversations with Carina Bierbrauer, Martin Hellwig, and François Maniquet. I am also grateful for comments by the participants of the Ernst Strüngmann Forum on deliberate ignorance. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy—EXC 2126/1–390838866.

12 Is There a Right Not to Know Genetic Information about Oneself?

Benjamin E. Berkman

Abstract

The move from targeted genetic testing to genomic sequencing has produced a number of ethical debates, but the most controversial question is the extent to which individuals have a right not to know genetic information about themselves. This chapter explores the extent to which it is ethically necessary to respect someone’s choice to remain deliberately ignorant about this kind of information. Challenging the majority view that there is a nearly absolute right not to know, arguments are presented which push back against that vigorously held (although not always rigorously defended) position, in support of the idea that we should abandon the notion of a strong right not to know. Drawing on the fields of bioethics, philosophy, and social science, an extended argument is provided in support of a default for returning high-value genetic information without asking about a preference not to know. Recommendations are offered about how best to balance individual autonomy and professional beneficence to guide the field of genomic medicine as it continues to evolve.

Introduction

The field of bioethics is replete with cases where people choose to remain deliberately ignorant about information. In the clinical realm, patients regularly refuse to learn their diagnosis or prognosis, thus making it difficult to engage them in making informed decisions about their medical care. Some patients ignore efficacy and side-effect information when deciding which medication to take. People choose not to read consent forms before enrolling in research. To avoid bias in research data collection, investigators agree to be left in the dark about whether their subjects have received an active intervention or the inert placebo. But nowhere in bioethics has the concept of deliberate ignorance proven to be more controversial than in the genetic testing sphere. Using this as a case study, I will illustrate in this chapter some of the fascinating tensions inherent in discussions about deliberate ignorance, and provide a real-world example to help think through the normative and policy implications when people actively choose not to learn information.


With its promise to revolutionize medicine, the capacity to cheaply and quickly generate an individual’s entire genome has prompted a series of significant ethical debates. Genomic sequencing, unlike targeted genetic testing, produces massive amounts of extraneous information, some of which can have relevance for an individual’s health. The mismatch between the specific indication that led to the ordering of the test and the breadth of results that the test produces has ignited an ongoing debate about the ethics of managing incidental or secondary findings (Presidential Commission for the Study of Bioethical Issues 2013); that is, information (typically clinically significant and medically actionable) that is generated during a test or procedure but which does not relate to the original purpose for which the test or procedure was conducted (Wolf et al. 2008). Over the last decade, the problem of managing incidental or secondary findings has been a major source of contention in the research ethics and science policy realms. The most contentious part of the debate has focused on the extent to which it is ethically necessary to respect a person’s right not to know (RNTK) genetic information about themselves.

To make this problem more concrete, imagine the following scenario: As part of a diagnostic workup for what is suspected to be a rare genetic disorder, a patient undergoes genome sequencing. Prior to the procedure, during the informed consent process, the patient clearly checks the box indicating the choice to opt out of receiving incidental genetic results. When the physicians analyze the resulting genomic data, they find evidence of a high genetic risk for a different disorder, hereditary nonpolyposis colon cancer (HNPCC). Since HNPCC is treatable if found early, but is nearly always fatal if discovered at late stages, they recognize the intrinsic value of this information to the patient. It could enable the patient to seek enhanced screening for a cancer that is very difficult to detect with normal colonoscopies and, in turn, prevent serious disease and even save the patient’s life. Should the physicians disclose this finding to the patient, despite the explicit choice made by the patient not to be informed of secondary findings?

Whether or not a patient’s RNTK must be honored under all conditions has sparked a highly contentious debate. The conflict highlights a classic problem in bioethics: the frequent tension between autonomy and beneficence. Our society places an extremely high value on empowering and honoring an individual’s choices, particularly in the medical realm. This often presents a clear dilemma for physicians, who want to act in a way that provides the highest prospect of benefit for their patients. In this case, it means asking whether a patient’s choice not to know should be honored at the cost of an opportunity to take advantage of potentially beneficial medical information.

As genomic sequencing technology continues to be fine-tuned and implemented, the examination of ethical norms and standards of care requires serious deliberation. Is the RNTK appropriate in a genomic era, given the obvious and inevitable conflict between autonomy and beneficence that such a right creates? Because the ability to control what genetic information is revealed has been imbued with the power of a right, debate has thus far been unduly focused on the seemingly absolute nature of an individual’s autonomy. The majority view among bioethics scholars seems to be that the RNTK continues to be of paramount importance and should not be abrogated in any way. A case can be made, however, that genomic medicine is adhering too tightly to an outdated conception of the RNTK. My goal in this chapter is to push back against that vigorously held (although not always rigorously defended) position, in defense of the idea that the notion of a strong RNTK should be abandoned. I will offer an extended argument in support of a default for returning high-value (defined below) genetic information without asking about a preference not to know.

Emergence of the Controversy

Researchers and bioethicists have been grappling with the problem of genetic incidental findings for over a decade. From this debate, which has been both protracted and often quite heated, the RNTK emerged as an uncontroversial issue, at least initially. As commentators argued about the circumstances under which there was an obligation to return individual findings, and which findings to return, there seemed to be broad support for the view that findings should only be returned when the research participant expressed the desire to receive this information (Fabsitz et al. 2010). In terms of an “obligation” on the part of researchers, that obligation was to offer individual findings to research subjects, who could elect to receive or refuse the information. Accordingly, there was wide agreement that researchers should discuss the RNTK with potential subjects and prospectively solicit their binding preferences.

These early views on the RNTK were expressed in the nascent days of genomic medicine, before large-scale genomic sequencing emerged. As sequencing technology advanced, and particularly as it moved from the research setting into the clinical realm, the debate began to shift for two related reasons. First, the utility of genomic sequencing was improving. An increasing number of genetic variants had been strongly linked to a range of phenotypes where knowledge of one’s genetic status could have a profound impact on treatments for (or prevention of) serious disease. Second, a growing number of patients were being sequenced, leading to reasonable projections about the important role that genomic sequencing would have as a regular part of clinical care.


In response, the American College of Medical Genetics and Genomics (ACMG) issued recommendations for the reporting of incidental findings in clinical exome and genome sequencing (Green et al. 2013). Their goal was to start a conversation about clinical standards for managing the predictable onslaught of medically relevant incidental findings. These recommendations suggested that labs should actively search (i.e., opportunistically screen) for a “minimum list” of variants that predispose patients to risk for disorders that “would likely have medical benefit for the patients and families of patients undergoing clinical sequencing.” Considering both the weight of the scientific evidence and the clinical implications of knowing the genetic information, the ACMG limited the list to “unequivocally pathogenic mutations in genes where pathogenic variants lead to disease with very high probability and where evidence strongly supports the benefits of early intervention.”

Controversially, the ACMG Working Group argued against soliciting patient preferences about receiving (or not receiving) incidental findings. They did not think that it was appropriate to give patients a choice not to learn about clinically important and actionable findings, advancing the claim that clinicians have a fiduciary duty to warn patients about high-risk variants where an intervention is available. Ironically, this argument against a patient’s strong RNTK actually involves clinicians actively blinding themselves to patient preferences.

The recommendation against soliciting patient preferences for not knowing genetic information ignited an extended (and often quite spirited) debate within the research ethics community. A relatively small set of commentators tried to defend the call for mandatory disclosure of high-value incidental findings (Berkman and Hull 2014; McGuire et al. 2013). The overwhelming majority view, however, was extremely critical of the recommendation, holding that patients have a strong RNTK and that any abrogation of that right was inappropriate (Burke et al. 2013; Wolf et al. 2013). As Trinidad et al. (2015) stated, the ACMG statement was “an instance of paternalistic overreach” that should be “widely rejected as inconsistent with the ethical and legal duties of clinicians.” Even more interesting was the fact that these pro-RNTK views were often couched in absolute terms. Commentators were not blind to the fact that strongly preferencing the RNTK meant that some patients might not receive information that could save their lives. Although not expressed in exclusively principlistic language, these arguments essentially seem to advance the view that autonomy should override beneficence in RNTK situations.

In response to the mounting criticism of their recommendations, the ACMG retreated from their initial position. Citing a purported consensus among ACMG members, the organization refined their position to state that, before a sample is sent for analysis, patients should be allowed to opt out of receiving incidental findings. I believe that this majority view is mistaken and argue, in the remainder of this chapter, that there should not be a strong RNTK high-value genetic information about oneself.


What Is High-Value Genetic Information?

The first step in my analysis is to define the type of information that I will subsequently argue should not be subject to the RNTK. To be clear, I do not intend to argue that there is no role for patient preferences in determining when to receive any genetic information. Rather, I focus on the extent to which there is a RNTK (a) high-value genetic information where (b) medical action can mitigate or prevent mortality or serious morbidity when (c) there is strong evidence of the link between genotype and significant disease risk. The arguments made in this chapter should not be directly applied to information where there is no medical action to take (e.g., Huntington disease), when the condition is less severe (e.g., asthma), or when the evidence is weak (e.g., single case reports). In essence, I will be arguing against the RNTK with the relatively small set of potential findings on the ACMG list in mind.

To illustrate how valuable information from the ACMG list might be, it is useful to consider some of the variables laid out by Schwartz et al. (this volume). They enumerate a number of features that can make a given piece of information more or less valuable. In constructing its list, the ACMG intentionally selected only variants associated with serious diseases, so the magnitude of the information’s importance would necessarily be high. The ACMG also defined its list to include only actionable conditions, where there is an effective preventative or medical intervention to take. It also mitigated uncertainty by choosing only those variants that had a high-quality evidence base and high penetrance, meaning that a given finding in a particular individual would likely be decisively relevant to that person given their age and clinical presentation.

It is also important to consider the possible harms associated with revealing this information. Critics have expressed concern about a number of possible risks. Most prominent were psychosocial concerns such as stigma, discrimination, and anxiety (Klitzman et al. 2013; Lázaro-Muñoz et al. 2015). There were also worries about the iatrogenic and economic impact of unnecessary follow-up procedures and interventions, both on individual and population levels (Burke et al. 2013; Klitzman et al. 2013; Wolf et al. 2013). This second set of concerns was predicated on the prior predictive value problem; since existing evidence is based on studies involving affected families, critics argued that it is premature to assume similar penetrance in families without a history of the disease because there could be as yet unidentified mitigating genetic features that could reduce or eliminate risk (Holtzman 2013). Invoking the precautionary principle, these critics argued that we should avoid returning incidental information until we can be more certain that doing so will not prompt unnecessary medical interventions. As I explain below, there are reasons to think that these concerns are not as significant as commentators assume and that they are outweighed by the potential benefits of knowing the information.
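Schematically, conditions (a)–(c) above amount to a conjunctive test; the sketch below uses my own flags and maps this section’s own examples (a toy illustration, not the ACMG list itself):

```python
def falls_outside_rntk(actionable: bool, severe: bool, strong_evidence: bool) -> bool:
    """All three of the chapter's conditions must hold for the argument to apply."""
    return actionable and severe and strong_evidence

examples = {
    "HNPCC-type variant (treatable, serious, well evidenced)":
        dict(actionable=True,  severe=True,  strong_evidence=True),
    "Huntington disease (no medical action to take)":
        dict(actionable=False, severe=True,  strong_evidence=True),
    "asthma-risk variant (less severe condition)":
        dict(actionable=True,  severe=False, strong_evidence=True),
    "variant known only from single case reports (weak evidence)":
        dict(actionable=True,  severe=True,  strong_evidence=False),
}
for label, flags in examples.items():
    print(f"{label}: argued against RNTK = {falls_outside_rntk(**flags)}")
```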


The Philosophical Origins of the RNTK: An Unexpectedly Contested Concept

Before analyzing the benefits and risks, it is useful to begin by considering the philosophical origins of the RNTK, as this reveals the concept to be on much more analytically shaky ground than many contemporary commentators acknowledge. The RNTK genetic information is a relatively new idea: it first appeared in the literature in the 1970s and 1980s but did not really gain traction until the 1990s (Laurie 1999, 2000; Takala 1999). A substantial body of work developed in the subsequent decade, concurrent with the gradual incorporation of genetic testing into clinical medicine. While there appears to be significant recent support for the RNTK, a robust examination of the concept must begin with an analysis of the idea’s philosophical origins. Contemporary RNTK advocates have tended to present their views in the absence of this historical perspective, seemingly arguing that a strong, autonomy-based RNTK is self-evident (Herring and Foster 2012). In contrast to this assumption, I believe that a close examination of the earlier RNTK literature reveals a much more controverted and nuanced history. Specifically, I will demonstrate that acceptance of a strict RNTK is far from universal in the philosophical literature, and that even staunch proponents recognize that the RNTK can be easily overridden by competing considerations.

Arguments for a Strong RNTK

Most commonly, scholars ground the RNTK in autonomy, arguing that one’s right to self-determination implies a right to make decisions about learning (or not learning) sensitive medical information. These scholars typically build their argument on a foundational assertion that genetic information has the potential to cause psychological and economic harm (Andorno 2004). While often granting that more information can allow for improved decision making regarding future plans, they stress that for some individuals, this information can lead to anxiety, depression, stigma, and possible discrimination. Therefore, an individual should be afforded the freedom to weigh the risk of psychosocial and economic harms against the potential benefit that the knowledge might provide.

Beyond this basic argument, RNTK proponents often cite concerns about paternalism in medical care, making claims that it has no place in modern medicine, even if justified by seemingly reasonable considerations (Takala 2001). They often draw a distinction between preventing harm and creating benefit, arguing that knowing one’s genetic status offers the possibility of a benefit, but not knowing about a genetic defect does not directly harm the person, since the defect is present regardless. If forced provision of information poses a risk of harm and there is only a possibility of creating benefit (rather than prevention of harm), unwanted provision of genetic information is indefensibly paternalistic. Accordingly, it is paternalistic to overrule individual choice, even if a choice differs from our conception of what is “reasonable.”

Interestingly, there does not seem to be overwhelming support in the foundational RNTK literature for a strong, autonomy-based RNTK. The limited number of scholars cited above support such a view, but the weight of the literature is squarely against an expansive view of the RNTK. As I will discuss in the next few sections, most scholars either argued for a much narrower conception of the RNTK or dismissed the idea entirely.

Arguments against a Strong RNTK

Autonomy Misapplied

A number of arguments challenge the notion of a strong, autonomy-based RNTK. One main strain of criticism asserts that the concept of an autonomy-based RNTK is too broad and that the principle has been misapplied. As Rhodes (1988:433) argued, “misunderstandings about the nature and moral force of autonomy have led some in the genetics community to a false conclusion about genetic ignorance.” The concern with an autonomy-based RNTK stems from the commonly held view that rights are “preemptive and value-laden,” and thus the content of a right must be carefully articulated and defended (Laurie 2014). Autonomy fails as a basis for doing so for a number of reasons. First, a strong, autonomy-based RNTK is inappropriate because that line of reasoning unrealistically requires ignoring the fact that there is no such thing as an unfettered choice (Harris and Keywood 2001). There are lots of things that people would like to do (or not do) or to know (or not know), but one sometimes must make less than ideal choices. Second, there is no basis for the idea that information alone is autonomy-constraining, because a clear distinction can be drawn between obtaining relevant information and making subsequent decisions on the basis of that information. Finally, autonomy is not boundless; there are certain actions that are prohibited as a matter of public policy.

Instead of autonomy, Laurie (2014) builds his RNTK theory on the idea of spatial privacy, or the notion that we have a right to ensure that an individual is “in a state of non-access.” Spatial privacy includes both the familiar notion of physical separateness and the separateness of the individual’s psyche. The latter form of privacy entitles an individual to protect his or her own sense of self. As a result, it can be an invasion of one’s “psychological spatial privacy” to receive information about oneself that was not already possessed. Ultimately, in basing his conception of a RNTK in privacy rather than autonomy, Laurie calls into question the view that the RNTK is actually a strong right. Laurie acknowledges that a decision to violate someone’s psychological privacy involves a number of competing factors and must be holistically assessed instead of being held up as a strict ethical rule.


Even if unwanted disclosure constitutes a violation of privacy, it can be justifiable under certain circumstances. Ultimately, he (and others) argue for a prima facie presumption in favor of the RNTK. This presumption can be rebutted, however, in a range of clinical cases. When considering the justifiability of violating someone’s psychological integrity, a number of factors should be relevant, including the availability of a cure or intervention, the severity of the condition, and the likelihood of disease manifestation.

The Incoherence Objection

In addition to challenging the notion that the principle of autonomy plausibly supports a RNTK, some critics make an even more forceful argument, calling into question the very coherence of the RNTK as a concept. These scholars make an autonomy claim of their own, but in the opposite direction, advancing the idea that knowledge is necessary in order to exercise autonomy (Malpas 2005). One needs to know that there is an issue that requires a decision; so not knowing undermines one’s ability to make an autonomous choice. Rather, autonomy demands “critical reflection,” which includes thoughtful, informed decision making and deliberation. Without relevant information, it is impossible to make informed decisions about future plans, and ill-informed decisions may even frustrate one’s future self, as an individual may make choices that are ultimately self-defeating. Some have gone as far as saying that autonomy requires rationality and freedom of will, but patients who deny themselves readily available information are not acting rationally, as they are depriving themselves of relevant health information (Ost 1984). If someone is so fixed in their intentions that no amount of relevant information would change their mind, this would be tantamount to an irrational obsession. Similarly, it is logically impossible for someone to claim to know a priori that information will not be relevant to his or her decision. This line of reasoning not only rejects a RNTK, it also seems to imply a moral duty to be informed about information that would make a difference in decisions (at least when it can be obtained without undue effort).

Effects on Third Parties

A third objection to a strict, autonomy-based RNTK is founded on a concern about the effect on others of maintaining one’s ignorance. According to this line of reasoning, genetic information unavoidably involves relatives, and one has an obligation to learn readily available information about one’s health to give relatives the opportunity to act on that knowledge. Relatives who had not previously known about a familial genetic risk would also be able to benefit from knowing by taking a variety of actions, such as seeking their own genetic testing, changing risk-associated behaviors, pursuing prophylactic treatment options, and engaging in rigorous screening (Bottis 2000).

An individual’s desire to refuse genetic information may conflict with the duty to warn a potentially affected family member. This does not mean that the RNTK should not be respected if possible, but rather that there are clear situations where other competing ethical principles might cause one to disregard a desired RNTK. On this view, the RNTK should not be viewed as a strict right; when there is a conflict between the RNTK and the right of relatives to sensitive genetic information concerning their own health, the RNTK must yield, due to the very real risk of harm to the family members.

Outdated Examples

Finally, defenders of the RNTK often point to concerns about testing for Huntington disease (HD) and Alzheimer disease (AD), citing data on people’s reluctance to get tested to illustrate the potential anxiety that people feel when faced with negative genetic information and to support the claim that there is a very real risk of harm associated with unwanted provision of one’s genetic status (Austad 1996). Both examples, however, stem from the targeted genetics era, and I would argue that they have limited utility as valid comparators in the modern genomic era. HD and AD are devastating and presently immitigable neurological conditions. As such, they are sui generis, since they present the possibility of psychological harm without any corresponding clinical benefit. When these kinds of examples are utilized by scholars, they should only be used to make a claim about the RNTK genetic information associated with commensurate diseases. But this isn’t the case; commentators consistently use these limited examples to make broader claims. Citing evidence of concern about being tested for HD or AD is irrelevant to this important debate, since the real empirical and normative questions relate to whether people would or should refuse to learn about potentially life-saving genetic information.

Moving Away from a Strong RNTK

In the previous section, I took a close look at the philosophical origins of the RNTK. Contrary to what contemporary commentators have been arguing, the notion of a strong, autonomy-based RNTK rests on an unstable conceptual foundation. Only a handful of philosophers have endorsed such a position, with the majority either arguing for a much more limited, non-autonomy-based conception, or even against the whole concept entirely. Here, I borrow from the bioethics and social science fields to make additional arguments for abandoning the notion of a strong RNTK.

I begin by reframing the debate away from an autonomy-dominated perspective, providing a comprehensive analysis of the harms and benefits that result from adhering to a strong RNTK position.


From this analysis, I conclude that the potential health benefits of abandoning a strong RNTK greatly outweigh the concomitant harms, thereby challenging the idea that psychosocial concerns should automatically get to trump the prospect of life-saving intervention. I end by exploring two additional considerations that are relevant to any rigorous discussion of the RNTK: moral distress and genetic exceptionalism.

Analyzing the Impact of a Strong RNTK

There is reason to believe that people’s views on the RNTK are less settled than one might have previously believed; while autonomy and the RNTK may seem sacrosanct in isolation, forcing people to confront the trade-offs inherent in real-world scenarios changes many minds (Gliwa et al. 2016). If people are open to considering trade-offs between autonomy and beneficence, then it becomes important to rigorously examine what those trade-offs might entail. This kind of analysis has thus far been absent from the RNTK debate. The overwhelming focus in the recent literature on an autonomy-based RNTK has had the unfortunate effect of short-circuiting discussion of the topic by directing attention solely to the harms associated with not honoring individual preferences. One can see this in the arguments in favor of a strong RNTK, which generally focus on patient autonomy, appealing to the long history of shared medical decision making and respect for patient preferences. For example, as an impressively credentialed group of bioethicists forcefully argued (Burke et al. 2013:857):

…choice matters. Patients may wish to decline the additional analysis on a number of grounds…. Concepts of shared decision making and respect for patient preferences argue for offering meaningful choices wherever possible, with appropriate information to allow patients to choose the best option for themselves…. If patients decline additional testing, it follows that the laboratory should not perform the additional analyses.

In addition, critics often supported their strong autonomy arguments by claiming that there is good reason to think that many people do not want to learn certain kinds of genetic information about themselves (Jarvik et al. 2014; Klitzman et al. 2013). Furthermore, commentators were comfortable with the idea that people should even be able to refuse information with profound medical significance. For example, as Wolf et al. (2013:1050) put it:

Patients have the right to refuse testing and findings, even if potentially lifesaving. Just because many patients might want this information does not mean that it can or should be imposed on all.

Similarly, a number of commentators cite the legal right to refuse medical interventions, arguing that an individual’s ability to place limits on treatments also implies a legal right to refuse medical information.

This laser focus on autonomy has not allowed for a comprehensive analysis of the harms and benefits of honoring or ignoring the RNTK. The reality is that any policy will have potential negative consequences. Whichever option is chosen, we will necessarily be making a mistake in one of two directions: unwanted disclosure or lost opportunity for medical intervention. Here, I lay out what I take to be the full set of relevant considerations and explore some of the relevant empirical data that can help us fully assess the overall impact of any RNTK policy. Specifically, there are three empirical questions that should be carefully considered, which I explore below.

How many people genuinely do not want to know genetic information about themselves, if it could have a profound impact on morbidity or mortality?

Available data support the reasonable claim that the overwhelming majority of people would want to be given genetic risk information that will have a direct impact on their health. In one study, nearly all respondents wanted to learn about a range of genetic risk factors, with 90% wanting to learn about nonactionable health risks and 96% wanting to learn about actionable genetic risk factors (Kaufman et al. 2008). Similarly, in the largest study to date of views toward the return of incidental findings resulting from sequencing research, nearly 5,000 members of the public were surveyed and nearly all of them (98%) wanted to learn about genetic risk for life-threatening conditions that can be prevented (Middleton et al. 2016). A strong majority even wanted to know about life-threatening conditions that could not be treated. So as a baseline, it seems fair to say that the vast majority of people would actively want to know high-value health information, although more research is needed to establish the real-world contours of this claim, since these data are based on surveys that asked people to respond to theoretical scenarios.

Of course, that leaves a very small subset of the population who might not want to know this information. Although this is an empirical question that requires further study, it is plausible that this small set of people who would not want to know is primarily comprised of individuals for whom clinical action might not be indicated (e.g., patients with a terminal illness, the elderly, people with a religious objection to receiving medical treatment). Proponents of the RNTK point to these types of examples in defense of their views. My counterargument is that these relatively rare examples should not drive the RNTK debate; we should not be creating a broad RNTK policy based on a limited set of cases where the medical information actually has little or no value to the individual. Rather, these cases can be addressed separately, because they represent scenarios where doctors can reasonably anticipate a need to actively solicit preferences. Doctors should be able to predict most cases where an individual patient might have good reason not to know information because it is not clinically actionable for them given their situation.


Even if there are some cases where doctors cannot easily predict that a patient has a reason for not wanting to know, if that reason is strong enough, those patients will likely self-identify.

If people were given genetic risk information that they would have preferred not to know, what is the magnitude of the harm they actually experience?

Psychological Harm. If the vast majority of people would want to know important genetic risk information, and if most of those who would not want to know can be bracketed, we are arguably left with an exceedingly small set of people. More empirical research is needed to ascertain the exact size and composition of this group, but whatever that number turns out to be, the next task is to examine the magnitude of harm that this small group will experience if given information that they would have preferred not to know. As discussed above, RNTK proponents frequently make claims about the danger of psychological harms flowing from the disclosure of negative genetic information. These claims rely on limited data related to a few poorly targeted examples, such as HD and AD. What can the broader psychological literature tell us about our reactions to unfortunate genetic information?

The short answer is that psychological research has demonstrated that people are not as good as they think at affective forecasting, that is, predicting the magnitude and duration of their future emotional reactions to both positive and negative events (Wilson and Gilbert 2005). For example, recent lottery winners typically overestimate the length and duration of their spike in happiness. Similarly, but in the opposite direction, people who have recently lost a loved one overestimate the length and duration of their negative emotional response to the traumatic event. In both cases, after an initial spike, people gradually tend to return to their previous baseline level of happiness. Essentially, the mind is assumed to have a sort of psychological immune system, which helps people handle negative information, often making the actual impact of negative information significantly smaller than the expected negative impact. However, when making a prediction about future emotional responses, we disregard our ability to cope, thereby overestimating the negative impact of information.

This is particularly true in the medical realm, where the literature suggests that an individual’s predictions concerning the emotional consequence of learning about genetic disease risk do not square with people’s actual ability to adapt to negative health information (Halpern and Arnold 2008). In a broad range of medical contexts, there are data showing that the affective forecasting bias is particularly pronounced when healthy people are asked to assess the negative emotional impact of (theoretical) future health problems. Specifically, people generally assume that receiving negative genetic information will be devastating, but research demonstrates that people are much better at coping with negative information than they think they will be. In reality, we should be careful about assuming that the negative psychological effect of receiving risk information for many untreatable conditions is as significant as many assume.

More studies are needed to ascertain how a broad range of people react to negative news, but the existing evidence suggests that we should be open to the idea that negative reactions to unfortunate genetic information will be relatively mild and transient (Broadstock et al. 2000). It is striking that RNTK proponents continue to make claims about the harmful psychological impact of genetic information when there is such limited empirical support for such concerns. More evidence about emotional reactions to genetic information would certainly be useful, but the existing literature at least raises important questions about whether we “systematically overestimate the durability and intensity of the affective impact of events on well-being,” thereby creating a “culture of risk-aversion in which patients may be opting out of potentially beneficial diagnostic and treatment regimes” (Peters et al. 2014:312).

Economic Harm. If psychological harm seems less likely and serious than is often assumed, there is still the issue of economic harm (i.e., discrimination). The likelihood and magnitude of discrimination is somewhat more difficult to assess, but existing data suggest that perhaps there is less cause for concern than previously thought (Rothstein 2008). It does appear that there are occasional instances of discrimination in these realms, but they are primarily associated with untreatable single-gene conditions (e.g., HD) that carry little weight for purposes of determining whether there should be a broad RNTK. Even with some scattered evidence of discrimination in these realms, a systematic review of existing data calls into question the need for a policy intervention (Joly et al. 2013), suggesting that there is a significant gap between the fears of genetic discrimination and actual reality. Again, this is not to suggest that genetic discrimination will never become a problem in terms of life or long-term care insurance. Rather, my argument is that we should make a clear-eyed assessment of the frequency and magnitude of any economic harms flowing from disclosure of genetic risk information before automatically assuming a worst-case scenario. As I explore in the next section, there are some potential negative effects associated with honoring a strong RNTK, which should be balanced against a rigorous evaluation of the harms associated with not doing so.

What Is the Cost of Always Soliciting Patient Preferences?

On one side of the scale, we have a very small group of people who are arguably at very low risk of experiencing significant, lasting psychological or economic harm. On the other side, we would want to know the impact of adopting a robust RNTK policy that involved actively soliciting individual preferences. My argument is that such a policy would necessarily result in some loss of opportunity to provide people with valuable information, because there is good reason to doubt our ability to assess, accurately and reliably, people’s true preferences.


A number of arguments support this claim. The first concerns how people engage with informed consent documents. Extensive data suggest that people frequently do not read consent forms carefully, and that when they do, their understanding and appreciation of the content can often be lacking (Mandava et al. 2012). If subjects are signing consent forms with such an incomplete understanding of the important details contained therein, it seems questionable to have confidence in the infallibility of any process designed to solicit preferences about knowing genetic incidental findings. This is particularly true given the inherent complexity of genetic information and the associated difficulty patients will have in making a choice in that context. Many commentators have expressed concern that the wide range of types of genomic findings will be overwhelming and could become a significant barrier to implementing truly informed consent (McGuire and Beskow 2010). In the pre-genomic era, when targeted genetic testing was the norm, patients could reasonably absorb the range of information they might receive; a single gene test typically revealed only information associated with the relevant condition. Now, when genome sequencing is employed, it is impossible to know what kind of results will be generated, making the informed consent process that much more difficult. Ensuring patient comprehension and managing expectations become increasingly difficult as the amount of genomic data generated grows. Furthermore, it will be difficult even to describe adequately the variety of genomic information categories because of terminological confusion. Terms such as "actionability," "clinical utility," and "clinical significance" are typically used to describe the types of findings someone might or might not desire, but there is a lack of conceptual clarity about exactly what those terms mean (Eckstein et al. 2014). There are also concerns about how preferences can shift over time. Life events and the passage of time can change a person's views; an answer given as a single young adult might differ from one that the same person would give once they are married with children. Unless the medical world can develop a process for actively re-soliciting preferences (an unrealistic proposition), there is the very real risk that a binding decision made at a single point in time could become inconsistent with future desires. Informed consent is a cornerstone of bioethics, and with good reason. In its ideal form, it allows doctors and researchers to demonstrate respect for persons and allows competent individuals to make autonomous choices about their engagement with medicine. The arguments made above should not be read as a wholesale indictment of informed consent. Rather, my point is that we should be skeptical about our capacity to assess, adequately and accurately, individual preferences about knowing or not knowing specific categories of genetic information. There is a very real risk that a policy of actively soliciting preferences about knowing or not knowing genetic information could result in people making choices that do not reflect their true values and preferences, thus erroneously or accidentally not receiving potentially lifesaving information (Figure 12.1).

Figure 12.1 A framework for comprehensively analyzing the right not to know (RNTK). [The figure weighs the harm of imposing information (tiny n, low magnitude) against the cost of a strong RNTK (erroneous or accidental choices not to know).]

Moral Distress

Having examined the full range of effects that honoring or not honoring the RNTK would have on individual patients or research subjects, I turn to an examination of other relevant considerations; namely, those raised by the interests of medical professionals. It is a vexing problem to possess genetic information that one deems to be clinically important, but to be precluded from disclosing it because a patient has exercised their RNTK. These medical professionals are apt to experience what we can colloquially call the "I-can't-sleep-at-night" problem. More technically, they experience a phenomenon known as moral distress. Moral distress refers to the situation where one knows the morally correct course of action but is constrained from taking it (Ulrich and Grady 2018). Unlike a classic ethical dilemma, where there are two ethically justifiable but nonoptimal choices, moral distress involves feeling that there is a clearly correct but unavailable choice to make. In normal clinical care, moral distress can be found in a range of situations where structural, legal, or institutional barriers prevent someone from doing what they feel would be right. Given that a patient's exercise of their RNTK presents a potential risk to medical professionals, the question then is: How much should we weight this concern? Stated another way, when is it permissible for a doctor's moral interests (i.e., an orientation toward trying to prevent or ameliorate disease) to override patient autonomy? This notion of beneficence trumping autonomy has been a frequent topic of exploration in the bioethics literature, with some commentators arguing that while autonomy is certainly an important principle, beneficence and autonomy should be complementary.


Physician autonomy and morality should also be respected, which sometimes makes it permissible to violate a patient's autonomy. This is not to say that a medical professional's interests generally, and moral distress in particular, are sufficiently weighty to carry an argument against the RNTK. But considered in the overall context of a rigorous debate about whether or not we should honor an individual's RNTK regarding important medical information about him- or herself, moral distress certainly seems to be at least one more relevant consideration in favor of skepticism about a broad, strong RNTK.

Genetic Exceptionalism

It has been popular to argue that genetic information requires special treatment, such as extra privacy protections, enhanced pretest education, and a distinct informed consent process (Green and Botkin 2003). This position was supported by the strongly held notion that there is something different about genetic information. Specifically, genetic exceptionalists have argued that genetic information is often predictive, rather than diagnostic, and thus can be used to predict an individual's future health in ways that other kinds of nongenetic medical information cannot. Genetic exceptionalists have also focused on the fact that, since genetic information is an immutable part of your identity and cannot be altered, we should be careful to guard against the psychosocial and economic effects of disclosing genetic risk information. Finally, genetic information has implications for third parties: any genetic diagnosis or risk information is relevant not simply to the patient but also to their blood relatives. Nevertheless, as the field of medical genetics has evolved, genetic exceptionalism has been subject to significant criticism, with an increasingly powerful chorus of arguments refuting the basic claims of genetic exceptionalists. While genetic information can often predict distant future health (sometimes with high accuracy), there are many examples of nongenetic health information possessing comparable predictive power. This strong refutation of genetic exceptionalism is relevant to the RNTK debate. Proponents of the RNTK are effectively arguing that the return of any genetic information requires explicitly soliciting patient consent. Since it is standard practice in many clinical situations to disclose certain kinds of nongenomic medical findings without asking for explicit permission, it seems fair to ask whether this instance of genetic exceptionalism is warranted. Autonomy is obviously an important value in medical ethics; modern social norms have clearly and enthusiastically moved away from medicine's paternalistic history. However, it is not true that patients are asked to make decisions about every single aspect of their health care.

If a patient undergoes a specifically indicated scan, but that scan incidentally reveals a potentially cancerous tumor, a doctor is not going to ask the patient whether they want to learn about the unexpected but important result. Similarly, if a patient receives a routine blood panel to check for a specific indication but the panel returns a panic value indicating a serious acute problem, the physician is not going to ask before disclosing this urgent finding. These analogies are not perfect. In general, genomic findings are not associated with conditions that require immediate attention, and genetic predisposition is not always equivalent to a diagnosis of manifested disease. The question, however, is not whether genetic information is precisely analogous to the urgent cases presented above. Rather, the relevant question is whether and why the kind of important genomic information being discussed here warrants special treatment. Given the thorough rejection of genetic exceptionalism, the burden of proof lies with RNTK proponents to make that case.

Conclusion

The currently prevailing view about the RNTK involves an almost exclusive focus on the principle of autonomy. This pure autonomy view results in an environment where individual preferences must be actively sought and respected. At the other end of the spectrum, one can imagine an argument that relies completely on beneficence, justifying forced provision of genetic information whenever it could provide medical benefit to a given individual. In between, there is a more centrist, qualified disclosure view. Embracing libertarian paternalism, we could create a default package of recommended variants to disclose and give patients a choice not to receive genetic information (even if that decision seems objectively unreasonable). This would function as a form of soft paternalism, helping to frame decision making in a way that is thought to lead to more beneficial choices. I reject the pure autonomy view for the reasons explored throughout this chapter. I cannot, however, endorse a pure beneficence view either: it seems too paternalistic to force information on someone who is deliberately trying to remain ignorant. Libertarian paternalism is attractive, but it partially fails because of concerns about our ability to accurately assess individual preferences for such a complex question. My view falls somewhere between the libertarian paternalism and pure beneficence views. For high-impact genetic information, I think that it is a mistake to actively solicit preferences. Instead, patients should be informed that there is a default set of high-impact incidental findings that will be sought and returned. In the rare case that someone independently requests not to learn about this information, in-depth counseling should be provided to ensure that they fully understand the choice being made; ultimately, however, their decision to remain ignorant should be honored if not knowing consistently remains their clearly stated preference. In short, for high-impact genetic information, any deviation from regular disclosure should be a clearly defined exception, rather than the basis for a broadly applied conception of the RNTK.


This approach should be relatively uncontroversial for the vast majority of people, since most autonomous adults would want to know life-saving information. There are, however, a few predictable exceptions that should be fairly easy to anticipate and accommodate; namely, terminally ill patients, elderly individuals, people with religious objections to treatment, and people in low-resource settings where medical care is not available. These are all cases where clinical action is less certain, so it might be appropriate for medical providers to actively solicit preferences. Such cases represent an important exception to my proposed approach, but I do not believe that we should institute a strong RNTK policy based on a small group that is relatively easy to bracket. The RNTK has become an ingrained part of our lexicon, and though I ultimately believe that we should abandon the term altogether, I recognize that this is unlikely. At the very least, a compelling case can be made that we should stop talking about the RNTK in such strong terms. As a final note, the RNTK debate raises an interesting question about when it is appropriate to utilize institutional power to reduce instances of deliberate ignorance, a topic that is explored in more detail by Teichman et al. (this volume). As mentioned above, it is ironic that a policy to reduce the deliberate ignorance of patients by de-emphasizing the RNTK necessarily involves increasing the deliberate ignorance of researchers and clinicians; it might be the case that deliberate ignorance is sometimes a zero-sum game. Furthermore, the framing of a specific deliberate ignorance problem is important. As is made clear by contemporary RNTK advocates, autonomy is often viewed as the controlling principle, with the implication being that people's choices should always be honored. As my preceding analysis hopefully demonstrates, shifting the frame reveals that there can be cases where people are systematically making less than optimal decisions to remain ignorant. In such cases, where the value of the information clearly outweighs the risks, and where psychological processes are likely to cause some people to make poor choices, it becomes ethically defensible to de-emphasize autonomy in favor of beneficence. In cases like these, it can be justified to use institutional power to create policies or defaults that aim to mitigate the potential for suspect instances of deliberate ignorance.

Acknowledgments

The opinions expressed herein are the author's own and do not reflect the policies and positions of the National Institutes of Health, the U.S. Public Health Service, or the U.S. Department of Health and Human Services. This research was supported by the Intramural Research Program of the National Human Genome Research Institute, National Institutes of Health. This chapter is adapted with permission from Berkman (2014).

13 Reections on Deliberate Ignorance Lewis A. Kornhauser Abstract Many different denitions of “deliberate ignorance” may be derived from the ordinary usage of these two terms. “Ignorance” may refer to an absence of belief, to an unjustied belief, to disregard of a fact, or to use of a fact known to be false. “Deliberate” may refer to a direct decision not to know some fact F or an indirect decision to know F′ rather than F. An individual may be deliberately ignorant but so may a group be. These different interpretations of deliberative ignorance raise different issues in different contexts. This essay develops a taxonomy of accounts of deliberate ignorance, suggests the criteria one might use to select among denitions, and identies some normative questions that arise from them in a selection of contexts ranging from debates over individual rationality to questions in political philosophy.

Introduction

Hertwig and Engel argue that psychology has largely ignored an important set of phenomena that they, ironically, call deliberate ignorance (Hertwig and Engel, this volume, 2016). They define deliberate ignorance as "the conscious individual or collective choice not to seek or use information (or knowledge)" and note that they are particularly interested in situations in which the marginal cost of knowledge acquisition is low and the expected benefits are high. They then offer a functional taxonomy of deliberate ignorance, discuss why it might be normatively desirable, and suggest modeling strategies. In this essay, I offer a conceptual rather than functional taxonomy of deliberate ignorance.1 This perspective sets deliberate ignorance within a more general framework that focuses on the distribution of knowledge and information. This framework raises questions concerning the appropriate scope of a concept of deliberate ignorance in the study of psychological and social phenomena.

1 I develop this taxonomy without regard to the net benefits of information acquisition. As the subsequent discussion suggests, the requirement of low marginal costs of information acquisition does not fit well with Hertwig and Engel's interest in instances of collective deliberate ignorance.


It also has implications for our understanding of both rationality and normative questions in moral and political philosophy. The argument relies on the literature on extended cognition and the extended mind, which holds that individual knowledge does not rest solely in the mind of the individual but also in the minds of others and in artifacts. This argument thus lessens the gap between individual and collective knowledge. Questions of "deliberate ignorance" shift into questions about the distribution of knowledge and decision-making authority. The discussion begins with an analysis of the concept of deliberate ignorance. It then assesses its implications for understanding the norms of rationality and discusses issues pertaining to political and moral philosophy.

What Is Deliberate Ignorance?

In the paradigmatic case of deliberate ignorance, an individual, Liza, consciously chooses not to know or to learn some fact: Liza's relative is diagnosed with Huntington's chorea; Liza decides not to discover whether she has the gene responsible for the disease. Liza, one may clearly say, is deliberately ignorant. Here, "deliberate" means the decision was at least intentional but possibly, and more strongly, reasoned. "Ignorance" apparently means "ignorance of the state of the world" (or perhaps of some set of states of the world or some aspect of a state of the world). Even at this individual level, however, neither the characterization of "deliberate" nor that of "ignorance" captures all the phenomena that ordinary language may encompass or the set of phenomena that might engage the psychologist or decision theorist. When deliberate ignorance is considered in the context of collectives, the inadequacy of these characterizations becomes more glaring. Below, I suggest that there are three distinct senses that one might attach to "ignorance" and two ways to understand "deliberate." I thus identify a 3 × 2 taxonomy of "deliberate ignorance." The senses of "ignorance" and "deliberate" derive directly from common usage of the two terms.

What Is Ignorance?

The three senses of "ignorance" elaborated here reflect the standard usage of the noun "ignorance" and the verb "to ignore." I begin with a definition suggested by the verb. The Oxford English Dictionary defines "to ignore" as "to disregard intentionally."2

2 This refers to the third definition; the first, identified as obsolete, defines "ignore" as "not to know." Similarly, Merriam-Webster defines "ignore" as "to refuse to take notice of"; https://www.merriam-webster.com/dictionary/ignore (accessed Jan. 12, 2020).

Reections on Deliberate Ignorance

219

Ignorance as "disregard" may come in a weak or a strong form. "Weak disregard" means simply that agents pay no attention to some of their knowledge. This occurs, for example, in the take-the-best heuristic (e.g., Gigerenzer and Goldstein 1996). This heuristic is deployed when the agent must choose among a set of options. For each option x, the agent distinguishes n characteristics (x1, x2, ..., xn) and chooses based on a lexical ordering. As between any two options, x and y, the agent first compares the options on dimension 1. If x1 > y1, the agent chooses option x. If x1 < y1, option y is chosen. Otherwise the agent proceeds to compare the two options along the second dimension. When the options differ in the first dimension, the agent disregards her knowledge of the other n – 1 characteristics. If the first k elements of the two options agree but differ at the k + 1st, the agent chooses on the basis of the k + 1st element and thus disregards the final n – k – 1 elements. A similar form of disregard occurs in impartial decision making when the decision maker is instructed to disregard decision-irrelevant information. For instance, juries are often instructed to disregard certain testimony that, though heard, is deemed inadmissible or irrelevant. Antidiscrimination law in the United States instructs employers to disregard race, gender, ethnicity, national origin, religion, and age (over 40) when making decisions on hiring, pay, and promotion. One might understand the introduction of double-blind procedures as a replacement for the instruction to ignore information about which treatment the patient has received, largely because evidence suggests that conscious attempts to ignore do not fully filter out the effects of the information. Similarly, some orchestras have moved to blind auditions to reduce unconscious bias or the use of tainted information.3 As noted, one may disregard information in more or less radical ways. The take-the-best heuristic simply puts the disregarded information to one side; the agent makes no use of this information even when it is, at least superficially, relevant. The action chosen is the same regardless of the specific content of the disregarded piece of information. Let us call this "weak disregard." One may "disregard" information more assertively, as in the construction of a model. Here, an analyst may not simply disregard knowledge that she has; she may assume its negation. In studying the acceleration of a ball down a plane, the model may ignore friction, in the sense of assuming it does not exist. Let us call this form of disregard of knowledge "strong disregard." The presence of false assumptions in models has provoked controversy over how models explain the phenomena under study.4
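The lexical structure of take-the-best can be made concrete with a minimal Python sketch (my illustration, not from the original text). It assumes numeric cue values ordered from most to least important; the arbitrary tie-breaking rule at the end is my own addition, since the description above leaves that case open:

```python
def take_the_best(x, y):
    """Choose between options x and y, each a tuple of cue values
    ordered from most to least important. Only the first cue that
    discriminates is used; all later cues are disregarded."""
    for xi, yi in zip(x, y):
        if xi > yi:
            return x
        if xi < yi:
            return y
    return x  # no cue discriminates; tie-breaking rule is an assumption

# The choice below is settled by dimension 1 alone; the remaining
# cue values of each option play no role (weak disregard).
print(take_the_best((1, 0, 1), (0, 1, 1)))  # -> (1, 0, 1)
```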

3 Both examples appear to fall within the functional taxonomy offered by Hertwig and Engel (this volume, 2016). The take-the-best heuristic may fall within the category of performance-enhancing devices, while the structure of moral rules clearly falls within their category of impartiality and fairness devices.
4 The literature in philosophy of science on false assumptions is voluminous. Friedman (1953) advocated the irrelevance of the falsity of assumptions. Cartwright (1983) argued, as her title suggests, that the laws of physics are false. Batterman (2006), Elgin (2004), Rice (2015, 2018), and Strevens (2019) are more recent contributions to the literature.


Let us now turn to "ignorance," defined by Merriam-Webster as the "lack of knowledge."5 Philosophers analyze knowledge6 in significant part as "justified true belief." On this account, an individual could be ignorant in at least three different ways: (a) she might have an unjustified true belief, (b) her belief might not be true, or (c) she might not have a belief at all. These different paths to lack of knowledge suggest different forms of ignorance. An unjustified belief might arise in at least two different ways. First, the agent may lack evidence but nonetheless hold a firm belief. A Bayesian would say that she had an "improper" prior, though it is not clear that priors can be improper. In any case, it is hard to see how this type of lack of justification could be deliberate. Second, the agent might have erred in the process of forming her belief. She may not have processed the evidence properly. A formation error, of course, could be deliberate. Now consider agent Liza with a false belief. She too lacks knowledge and hence, on the dictionary definition, is ignorant. Yet, as discussed below, in some contexts her false belief might be deliberate. Finally, agent Liza might not have a belief. The absence of belief might refer to a number of different conditions. Suppose, for instance, that Liza has a parent who has Huntington's chorea. Liza knows that she has the genetic predisposition for the disease. There are two states of the world: H, in which she has the disease, and ~H, in which she does not have the disease. Presumably, prior to being tested, she believes she is equally likely to have the disease as not. She is clearly ignorant in the sense that her beliefs about the state of the world are in perfect equipoise. She has no belief in the sense that each relevant state of the world is equally likely. Ignorance, however, clearly extends beyond situations where beliefs about the state of the world are in perfect equipoise. Suppose Liza learns that her brother Freddy carries the sickle cell trait. Prior to any test, she should believe that she too carries the trait with a probability greater than .5.7


5 The Oxford English Dictionary more quaintly defines ignorance as "a want of knowledge."
6 As noted above, Hertwig and Engel define ignorance as a lack of information or knowledge but then state that they equate "knowledge" and "information." But information and knowledge are not identical. I confine my attention to "knowledge," as the meaning of "information" differs across various disciplines (e.g., computer science, cognitive science, linguistics, logic, and semantics).
7 The calculation is a bit more complex. She knows that at least one of her parents carries the trait. If only one parent carries the trait, then there is a probability of .5 that she carries it. If both parents carry the trait, the probability rises to .75. The actual probability depends, however, on the ancestry of each parent. In any case, her belief that she has the trait should be greater than .5. Moreover, she must condition her prior on the evidence that neither she nor her brother has sickle cell anemia, a fact which should affect her belief that both parents carry the trait.
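The arithmetic in footnote 7 can be checked by direct enumeration. The following Python sketch (my illustration, not the author's) assumes simple Mendelian transmission, with each parental allele passed on with equal probability, and applies the conditioning on the absence of anemia that the footnote flags:

```python
from itertools import product
from fractions import Fraction

def carrier_prob(parent1, parent2):
    """P(child carries at least one S allele | child does not have
    anemia, i.e., is not SS), by enumerating equally likely children."""
    kids = [tuple(sorted(pair)) for pair in product(parent1, parent2)]
    not_anemic = [k for k in kids if k != ("S", "S")]
    carriers = [k for k in not_anemic if "S" in k]
    return Fraction(len(carriers), len(not_anemic))

print(carrier_prob(("A", "S"), ("A", "A")))  # one carrier parent -> 1/2
print(carrier_prob(("A", "S"), ("A", "S")))  # two carrier parents -> 2/3
# Without conditioning on "no anemia," the two-carrier case gives the
# footnote's 3/4 (any child with at least one S allele).
```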

Reections on Deliberate Ignorance

221

In this instance, we would surely say that Liza does not know whether she has the sickle cell trait; more strongly, we might say that she is ignorant of her state. But it is unlikely that we would say that Liza has no belief. This example illustrates the simplest case, in which the probabilities of all states of the world are known and objective. At the other extreme, an agent may face Knightian uncertainty, under which there is no quantifiable information about the world; indeed, Liza might not even know what all or some of the possible states of the world are. The possibility of radical uncertainty of this type may underlie a sense that the term "deliberate ignorance" is oxymoronic and the actual state paradoxical. This sense, however, is mistaken. Consider the world in 1492: many people believe that the world is flat, yet Christopher Columbus believes that the world is round. When Columbus sails west to reach China, what is his belief that he will, in fact, land on Hispaniola? More precisely, does he have a belief that corresponds to the actual map of the Western Hemisphere?8 Presumably not. Yet, even in this instance, one might think that deliberate ignorance is possible. Columbus believes that China lies to the west of Spain. He might ask himself whether the water between Spain and China is empty. He could remain deliberately ignorant of whether the western seas were empty by refusing to sail west. Even if the world is in a state that Columbus has not even imagined (e.g., shaped like a torus), deliberate ignorance would still arise if he intentionally chose not to explore these waters.9 Finally, not all knowledge may require beliefs or propositional knowledge. Distinguish knowing that (propositional knowledge) from knowing how (an ability to do something, which I shall call practical knowledge).10 On some accounts, knowing how is distinct from knowing that; the former does not involve any propositional knowledge. Riding a bicycle, for example, does not rely, at least superficially, on any propositional knowledge. It requires an ability or a disposition but not obviously any beliefs. An individual who knows how to read a book or play the piano has an ability that extends beyond propositional knowledge and belief. Deliberate ignorance of practical knowledge may seem straightforward. Liza decides not to learn to play the piano or to ride a bicycle. She may have good or bad reasons for her decision, but it hardly seems to raise interesting behavioral or philosophical issues.


8 Columbus must have had some beliefs about the water to the west. He must have believed, for instance, that there was not a wall, barrier, or land mass that would prevent him from reaching China by sailing west.
9 The required intention is difficult to state precisely. Columbus must have decided not to sail to avoid learning about the extent of the emptiness of the western seas.
10 The distinction seems to be due to Ryle (1946). It is controversial whether practical knowledge is reducible to propositional knowledge. For a survey of the controversy over the reducibility of knowing how to knowing that, see Bengson and Moffett (2012).


Practical knowledge, however, is pervasive, and some of it does present more complex questions. Technology, for example, involves knowing how. Some of that knowledge is embodied in the technical means by which it is applied, but some is possessed by individuals. When technology changes, both the physically embodied and the individually possessed knowledge may be lost. Socrates noted this in the Phaedrus when he suggested that the advent of writing would undermine philosophical argument. He claimed not that individuals would lose propositional knowledge but rather their dialectical ability. Other, more recent examples abound. The advent of the electronic calculator arguably has eroded everyone's arithmetic facility. Similarly, when Liza downloads Google Maps or Waze to her smartphone, she initiates a process that may lead to her losing her navigation abilities. When her ability to navigate atrophies, she does not obviously lose propositional knowledge. Her capacity to move from point A to point B simply deteriorates. One might understand the effects of adopting these technologies in a way that implicates knowing that as well as knowing how. One might understand these phenomena as "externalizing knowledge." Knowledge may be externalized either physically or semantically (for a discussion of the extended mind, see Clark and Chalmers 1998; for essays on extended cognition, see Carter et al. 2018). In both instances, externalization is apparently the result of the specialization of labor and knowledge. Many, if not most, of our concepts are deferential. When the pipes in my home are clogged, I know to call the plumber, but I am innocent of knowledge of plumbing in general. Indeed, my (and others') understanding of most current technology is minimal at best. I am thus to a large extent ignorant about the content of most of these concepts. The earlier discussion of knowing how and technology illustrates how knowledge might be externalized physically in the technological aids that an agent uses to understand her world or to act. The invention of writing externalized memory; the invention of moveable type and then digital storage accelerated that externalization. Similarly, counting boards, abaci, and then the computer (partially) externalized computation. Maps and street signs facilitated navigation. These physical objects reduce, in some sense, each individual's knowledge; at the very least, they permit Liza to deepen her knowledge about some domain D at the cost of a shallower knowledge of other domains. Collective knowledge clearly grows. This dispersion of knowledge throughout the population, however, blurs the distinction between individual and collective ignorance and, as I suggest later, renders assessment of "individual" rationality more complex. Deferential concepts are instances of knowing that. As noted above, although they are pervasive, they render knowledge inherently communal, as each individual essentially relies on some knowledge that is mastered by someone else.

Reections on Deliberate Ignorance

223

Deferential concepts thus emphasize the distribution of information within a collectivity. Whether an agent A is ignorant of X thus becomes a complex question, as the agent may, in fact, "know" X deferentially but nonetheless be unable to act or to reason effectively with the knowledge not located in her brain. These issues will be discussed further below (see section on collective ignorance).

When Is Ignorance Deliberate?

Broadly, a deliberate action might either be intentional or, more strongly, reasoned. If deliberate ignorance requires a reasoned decision, very few instances of deliberate ignorance will exist. Moreover, if they exist, they are apt to be rational for the individual. If deliberate ignorance entails only an intentional decision to ignore, then there will be many more instances of deliberate ignorance. How many will depend on the nature of the required intentionality. The agent Liza may directly choose to ignore X, or she may indirectly choose to ignore X by intentionally choosing Y. This phenomenon may arise in different contexts, each of which presents difficulty. Consider first how passive ignorance puts pressure on the idea of reasoned deliberation. Suppose Liza is a college student designing her curriculum. She is, in essence, deciding what to know. She develops a curriculum around decision making, choosing courses in psychology, economics, and political science. Has Liza chosen to be ignorant of physics? Does the determination of whether her ignorance of physics is deliberate depend on whether she explicitly considered taking physics? And what if she simply disregarded all language courses? Does this amount to deliberate ignorance of the relevant languages? Another perplexing instance arises when ignorance results from an unanticipated consequence of an intentional act. Suppose Liza buys a smartphone. She downloads Google Maps and uses it extensively when driving. Consequently, her navigational abilities deteriorate. Similarly, she actively uses a search engine to "remind" herself of various facts. Her memory thus deteriorates. Her explicit propositional knowledge declines, though the scope of her knowledge expands (and the cost of accessing that external knowledge falls). Are the changes in her propositional and practical knowledge instances of deliberate ignorance? Ignorance as disregard also raises some questions. Suppose Liza chooses between X and Y on the basis of only two out of ten criteria relevant to her choice. She thus disregards eight of the relevant criteria. Does it matter if she did this intentionally? Or does deliberate ignorance encompass decision protocols that evolved or are otherwise adaptive, independent of whether the agent acted intentionally or not? The collective context highlights the importance of the distribution of information and the distribution of decision making within the collectivity.


Consider a medical trial of a new drug: the relevant group comprises the experimenter, the doctors administering the drug, the patients receiving it, and the data analysts. The experimenter adopts a double-blind protocol for the drug trial. The experimenter makes a reasoned choice to withhold knowledge from doctors, patients, and data analysts. In this case, the decision and its consequence (the ignorance) are separated. What matters is the distribution of information. What would justify choosing ignorance over knowledge? Consider first efficiency concerns. It may be costly to acquire or to process the information. Moreover, it may be that decision protocols which ignore information prove to be more accurate. These types of reasons underlie the category of performance-enhancing functions noted by Hertwig and Engel (this volume, 2016). There are, however, other reasons to choose ignorance. First, one might want to filter out information that distorts or interferes with successful processing of information. This logic underlies the use of double-blind, randomized controlled trials and, on at least one account, Rawls's "veil of ignorance."11 Second, and relatedly, there may be normative reasons to suppress, or at least disregard, certain information. Rawls (1971) justifies the veil of ignorance as a fair procedure used to identify the considerations that an agent should weigh in choosing a set of institutions. Institutional choice should not, he argues, depend on an individual's actual position in society, but only on the distribution of social and economic advantages. Hertwig and Engel (this volume, 2016) also note this reason. Civility provides a third reason for deliberate ignorance. Social life requires individuals to restrain their aggression. In many instances, too much information may provoke conflict. Often "manners" require an individual to disregard information that she has acquired or to withhold it. Whether someone has deliberately chosen to be ignorant is, in many instances, straightforward. The agent explicitly averts her eyes or weighs considerations for or against acquiring the relevant knowledge. Problems arise, as will be discussed below, in the context of forgetting and remembering. Here, one might consider a deliberate decision to remember X as at least an implicit decision to forget Y.

Deliberate Ignorance within the Collective Frame

As discussed above, the concept of individual ignorance cannot clearly be distinguished from the idea of collective ignorance. But collective ignorance raises perplexing problems. Deliberate collective ignorance doubles the perplexity, as both ignorance and deliberation apparently implicate mental states.

11 Note how the randomized controlled trial transforms the injunction to disregard information inherent in the veil of ignorance into a true lack of knowledge on the part of the agents. If the experimenter is one of the doctors in a drug trial, she has effectively tied her hands.

Reections on Deliberate Ignorance

225

An attribution of mental states to collective entities, however, seems simply to be a metaphor. Several issues arise:

• What is collective knowledge? By what means should it be defined?
• Does the collectivity know X if some individual within the collective knows it nondeferentially?
• If so, then the collectivity could know (nondeferentially) more than any individual within it.
• Is it possible for the collectivity to know something that no one in the collectivity knows?
• Conversely, can everyone in the collectivity know something that the collectivity does not know?

If we conceive of collective knowledge simply as the distribution of beliefs within the collectivity, then we would answer these questions negatively. If, however, we attend to the social processes within the collectivity, the answer could change. Consider first how the collectivity might have knowledge of how to do something that none of its individual members has, as in market processes. Each consumer knows how much they are willing to pay to purchase a good, while each producer knows how much they are willing to accept to bring goods to the market. No one knows, however, what price will clear the market; the market process produces this knowledge. Of course, the market has no belief about this knowledge. A striking example, raised by Tom Seeley (2001), involves the choice of a new hive by a swarm of honeybees. Although individual bees are unable to reach this decision on their own, some know how to dance and are thereby able to communicate pertinent bits of information that lead the swarm to its decision. Here, knowing how is not propositional knowledge, as bees presumably do not have beliefs. One might argue that the market price does not truly represent collective knowledge because the collectivity cannot reason with that knowledge. One might say, in response, that the collectivity knows how to allocate the resource through the market. Alternatively, one might point to bureaucratic organizations as collective entities that do, in fact, process collective knowledge. A bureaucracy segments decision-making authority as well as access to information. This fragmentation implies that each individual has significantly less knowledge than the bureaucracy as a whole.12 To understand how everyone in a collectivity might know something without the collectivity knowing it, one must also consider the nature of the social processes within the collectivity. As noted, collective knowledge might be understood as the distribution and nature of the beliefs within society. Yet characterizing collective knowledge simply as knowledge that each individual member has does not acknowledge the collective nature of the knowledge.

12 Arguably, we should understand the bureaucracy's knowledge as externalized in its files, just as historians' knowledge of some events is externalized in their computer files that archive relevant facts.
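The claim above that a market process can produce knowledge (the clearing price) that no single participant possesses can be illustrated with a toy call auction. The sketch below is my own illustration, not the author's, and the midpoint pricing rule is one convention among several:

```python
def clearing_price(buyer_values, seller_costs):
    """Each buyer knows only her own willingness to pay and each seller
    only his own cost, yet the mechanism yields a price at which every
    crossing buyer-seller pair is willing to trade."""
    bids = sorted(buyer_values, reverse=True)  # demand, highest first
    asks = sorted(seller_costs)                # supply, lowest first
    price = None
    for bid, ask in zip(bids, asks):
        if bid < ask:            # this pair, and all later ones, cannot trade
            break
        price = (bid + ask) / 2  # midpoint of the marginal crossing pair
    return price

# Two pairs cross: (10, 1) and (8, 4). All four of those traders are
# willing to trade at the resulting price of 6.0.
print(clearing_price([10, 8, 5, 2], [1, 4, 6, 9]))  # -> 6.0
```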


Each individual may also need to have some higher-order belief about the knowledge of X by others. That is, collective knowledge of X might require mutual knowledge, understood as the belief by each person that (every) other knows X. Alternatively, it might require common knowledge, understood as the infinite cascade of beliefs (each knows that the others know that each knows that...), or any finite cascade in between. If members of the collectivity do not have mutual knowledge, then the collectivity cannot act collectively on that knowledge, although each individual could. This conception of collective knowledge would seem to have broad application across different types of collectivities, which may range from a mass of diffuse, unorganized individuals to highly organized and structured groups such as the modern state. We often attribute beliefs, preferences, and other attitudes to these structured groups, but the procedure and the demands of rationality that we place on these structured groups are contested. When the attitude of the group depends on the attitudes of the individuals that comprise it, serious logical difficulties must be confronted (Arrow 1963; List and Pettit 2013). Organic accounts of these groups are equally problematic. Both ignorance and deliberate ignorance would take different forms on the different accounts. Suppose that one thinks collective knowledge consists of a set of beliefs at least as strong as mutual knowledge: deliberate ignorance, then, might take the form of interventions that block the formation of the higher-order belief that the agent knows that others know. Organizations create bureaucracies to accomplish such knowledge segmentation. Banks that provide investment advice to firms as well as to investors construct "Chinese walls" to prevent knowledge from crossing the divide between the two divisions. States that seek to control social media and the Internet arguably pursue this strategy to prevent the coalescence of opposition to the regime.13 By contrast, the form that deliberate ignorance would take on aggregate accounts of belief in structured groups is unclear. In some instances, the group might require the subgroup making a decision to disregard certain information, as the rules of evidence do in jury trials (see Zamir and Yair, this volume). Because information and action are distributed across individuals within a group, the attribution of deliberate ignorance to the group raises issues not present at the individual level.

13 This example illustrates a second difficulty with the application of collective ignorance to collectivities. On one account, we can view the state as a collectivity itself, so that the censor is influencing the knowledge of members of the collectivity. Alternatively, one might treat the state as the "government," so that its actions censor individuals external to the group. While the former has the structure of the deliberate ignorance of the collectivity, the latter does not.
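The hierarchy of mutual and common knowledge sketched above can be made concrete with a toy model. In this sketch (my illustration, with knowledge bases modeled crudely as sets of formula strings, where "E(phi)" abbreviates "everyone knows phi"), collective knowledge can fail at level two even when it holds at level one:

```python
def mutual_knowledge(knows, fact, level):
    """Check level-k mutual knowledge of `fact`: everyone knows that
    everyone knows that ... (k times) ... fact. Common knowledge would
    require this to hold for every k, an infinite cascade."""
    phi = fact
    for _ in range(level - 1):
        phi = f"E({phi})"  # wrap in one more "everyone knows"
    return all(phi in kb for kb in knows.values())

knows = {
    "ann": {"X", "E(X)"},  # Ann knows X and knows that everyone knows X
    "bob": {"X"},          # Bob knows X but not that the others know it
}
print(mutual_knowledge(knows, "X", 1))  # True: everyone knows X
print(mutual_knowledge(knows, "X", 2))  # False: Bob lacks E(X)
```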

Reections on Deliberate Ignorance

227

Suppose Henry is the CEO of a corporation, Liza its CFO, and Freddy a sales manager. Suppose Freddy submits fraudulent sales reports to Liza, who uses these reports to prepare a financial report for Henry. Henry then makes statements based on the report. These statements, of course, are fraudulent. Was the corporation ignorant of the fraud? We might ask the same question of Henry and Liza. In both the collective and individual cases, the attribution of legal or moral responsibility will depend on what knowledge is imputed to each entity/individual, as no single agent in the corporation has all the information necessary for the attribution of responsibility.

Defining Deliberate Ignorance

The prior discussion suggests several different definitions of deliberate ignorance:

1. An intentional lack of knowledge
2. An intentional lack of knowledge or weak disregard
3. An intentional lack of knowledge or disregard, either weak or strong
4. An actively or passively intentional lack of knowledge
5. An actively or passively intentional lack of knowledge or weak disregard
6. An actively or passively intentional lack of knowledge or disregard, either weak or strong

One could construct other definitions, for instance, ones that restrict "ignorance" to "disregard" or look solely at "lack of knowledge." These alternative definitions, however, exclude some of the core instances of deliberate ignorance. The choice of definition does not depend on a proper analysis of a folk concept of deliberate ignorance but rather rests on the questions one wishes to ask, as well as on the answers it yields and on the intellectual fertility of the definition. Hertwig and Engel (this volume, 2016) suggest the first definition, with the added condition that the net benefits of knowledge acquisition be large. This definition fits well with an inquiry into individual psychology. If the aim is to identify conditions under which deliberate ignorance constitutes a rational response to a decision problem, however, the motivation for the restriction to situations of low costs of acquisition is unclear. When costs are high, of course, the rational motivation for deliberate ignorance may be straightforward, but the question and answer seem to fall well within the general research question asked. Hertwig and Engel occasionally seem to suggest the second definition. As noted earlier, the functional categories of performance enhancement and fairness suggest that disregard falls within the scope of (deliberate) ignorance. Examples of collective deliberate ignorance suggest a further broadening to include the fifth definition, which includes ignorance of Y (that arises from a choice to know X instead), disregard, and intentional ignorance. On what basis should a definition of deliberate ignorance be selected? There are at least two different approaches. The first simply explicates the "folk" concept of deliberate ignorance. Though I have presented a taxonomy in this section based on the ordinary usage of the terms "ignorance" and "deliberate," I have not offered an analysis of the folk concept. (It is not even clear that a folk concept of "deliberate ignorance" exists.) Rather, my aim was to identify some nuances in the meanings of the two terms, to distinguish more finely among various situations. This process of refinement might help with the second approach to defining deliberate ignorance; namely, to ask what definition is most intellectually fertile and helpful. As Hertwig and Engel may have adopted their definition based on investigations into individual decision making, rationality, and psychology, the second approach might be the most appropriate. Nonetheless, this approach seems too narrow to encompass investigations of collective decision making as well as to address some normative questions in moral and political philosophy.


Rationality

Broadly, analysts identify two distinct domains of application of the concept of rationality: decision outcomes (or choices) and decision processes.14 To these domains they bring three distinct, but related, concepts of rationality: instrumental rationality, procedural rationality, and substantive rationality. Within each class of concepts, numerous distinct conceptions of rationality have been elaborated. In game theory, for instance, one might understand each proposed solution concept as a conception of instrumental rationality. Instrumental rationality requires the agent to choose the best means to achieve her ends. Procedural rationality requires the agent to follow appropriate procedures when making her choice; she should use "right reason" to reach her decisions, form her beliefs, and argue successfully. Substantive rationality requires the agent to have appropriate ends. Superficial reflection on theoretical rationality, or the rationality of belief, suggests that instrumental, procedural, and substantive rationality converge. In theoretical rationality, after all, the obvious substantive end is truth; substantive rationality is thus straightforward and trivial. Procedural rationality in deduction points to the rules of logic as both instrumental and normative, as they are "truth-preserving": they identify rational decision processes and, from true premises, reach rational outcomes. In induction, procedural rationality points to statistics and decision theory as correct processes for reaching correct outcomes. Together, these norms apparently condemn deliberate ignorance. When one considers practical rationality, however, this convergence is less clear. Many different goals compete for the ends defined by substantive rationality. Procedural rationality might point to the same norms to govern decision processes, but practical concerns (i.e., the costs of decisions and limited cognitive capacity) suggest that these procedures may not be instrumentally best, even if implementable.

Rationality Broadly, analysts identify two distinct domains of application of the concept of rationality: decision outcomes (or choices) and decision processes.14 To these domains they bring three distinct, but related, concepts of rationality: instrumental rationality, procedural rationality, and substantive rationality. Within each class of concepts, numerous distinct conceptions of rationality have been elaborated. In game theory, for instance, one might understand each proposed solution concept as a conception of instrumental rationality. Instrumental rationality requires the agent to choose the best means to achieve her ends. Procedural rationality requires the agent to follow appropriate procedures when making her choice; she should use “right reason” to reach her decisions, form her beliefs, and argue successfully. Substantive rationality requires the agent to have appropriate ends. Supercial reection on theoretical rationality, or the rationality of belief, suggests that instrumental, procedural, and substantive rationality converge. In theoretical rationality, after all, the obvious substantive end is truth; substantive rationality is thus straightforward and trivial. Procedural rationality in deduction points to the rules of logic as both instrumental and normative as they are “truth-preserving.” They identify rational decision processes and reach from true premises rational outcomes. In induction, procedural rationality points to statistics and decision theory as correct processes for reaching correct outcomes. Together, these norms apparently condemn deliberate ignorance. When one considers practical rationality, however, this convergence is less clear. Many different goals compete for the ends dened by substantive rationality. Procedural rationality might point to the same norms to govern decision processes but practical concerns (i.e., the costs of decisions and limited cognitive capacity) suggest that these procedures may not be instrumentally 14

I include the processes of belief formation and adjustment and of argument in this category.

Reections on Deliberate Ignorance

229

best, even if implementable. The rational assessment of deliberate ignorance requires a more nuanced analysis. This section lays out a more nuanced assessment. It begins with discussions of procedural and instrumental rationality in the individual case, and concludes with a discussion of these issues in the collective context. Procedural Rationality Deliberate ignorance seems irredeemably at odds with procedural rationality. This tension, however, is not as strong as it appears. In some circumstances, procedural rationality endorses deliberate ignorance. Procedural rationality requires agents to use “right reason” in their deliberations. These accounts of rationality generally consider “right reason” to include a narrower or wider set of reasoning protocols that are normative for actual reasoning processes. Classical logic constitutes the core of these normative rules. Around the core sit the rules of probability theory, including Bayer’s Rule for revising one’s beliefs in light of new evidence. On the periphery, lies decision theory, generally subjective expected utility theory, and further out still, game theory. Decision theory assumes that agents seek to maximize their “expected utility” subject to constraints. Game theory analyzes strategic interactions among rational agents; each solution concept offers an account of rationality in these circumstances. Over the last forty years, some psychologists claim to have demonstrated that individuals are procedurally irrational; in general, humans do not conform to the normative standards of any of these reasoning protocols. This discrepancy between actual reasoning processes and logic, probability theory, and decision theory has provoked a vigorous debate over the nature of rationality understood as right reason (Cohen 1981; Stanovich 2010; Stanovich and West 2000). These experimental results, of course, do not challenge instrumental conceptions of rationality directly. Indeed, it is not clear that these results challenge right reason conceptions of rationality at all. There are two subclasses of conceptions of rationality as right reason. One position holds that the norms of reasoning are grounded outside of psychology. This position derives from the late nineteenth century, when Frege separated logic from psychology (Hanna 2006). Logic, for Frege, was not an empirical study but an a priori, necessary investigation of rules of inference. It was decidedly not the study of how people actually reasoned. A fully externalist account must then hold that probability theory and decision theory are also externally justied. Call this view “external procedural rationality.” On this account, it is supercially easy to see how the rationality of individual reasoning processes could diverge from the normative requirements of rationality: the agent’s reasoning processes simply do not always conform, as the experimental evidence shows, to rules of classical logic, to the logic of probability, or to the logic of decision theory.

230

L. A. Kornhauser

A second position holds that the norms of reasoning are constituted by human psychology through a process of reective equilibrium (Cohen 1981). The rules of logic, probability theory, and decision theory are only normative if they are endorsed by a process of reective equilibrium that reconciles judgments about appropriate rules of inference and about successful instances of inference. Call this view “internal procedural rationality.” The internalist argument often proceeds by analogy to Chomsky’s program, which distinguished between a syntactic competence and syntactic performance. On this account, humans are inherently rational; deviations from the norms of reasoning occur only because something interferes with the agent’s exercise of her innate rational competence and yields a performance error. Much of the philosophical discussion has focused on the debate between the external and internal conceptions of procedural rationality, and much of this debate has, as noted, relied in some form on the performance/competence distinction. The externalist contends that the norms of reasoning do not derive from human reasoning competence. The experimental evidence thus does demonstrate human irrationality. The internalist, by contrast, argues that rationality is simply dened by human reasoning competence. Departure from the norms of rationality are attributable to performance errors, not to a rational incapacity. Deliberate ignorance plays an ambiguous role in right reason concepts of rationality as it might either promote or undermine such accounts. Deliberate ignorance undermines all of the standard norms of reasoning to the extent that it discards or disregards decision-relevant information. The take-the-best heuristic, for example, may be described as deliberately ignoring decision-relevant facts. The heuristic, therefore, might counsel a course of action that the agent knows (or would know) is at odds with the conclusion drawn on the balance of all reasons. This situation underlies one of the paradoxes of authority. On the other hand, deliberate ignorance seems to support or constitute procedural rationality in other ways. On both the internalist and externalist right reason accounts, deliberate ignorance would be a rational strategy when it ltered out stimuli that interfered with the exercise of right reason. During the Vietnam war, for example, the United States changed a draft system that granted local boards great discretion to one that, to some extent, relied on a lottery to determine who was drafted.15 More interestingly, advocates of “resolute” choice in dynamic decision-making contexts argue for what may be a form of resolute ignorance (Gauthier 1986; McClennen 1990).16 Consider a slightly modied version of the classic difficulty faced by Odysseus, who wants to travel from Troy to Ithaca. There are two routes: one 15 16

The system was supposed to produce a fairer allocation of the burden of conscription across income classes and races. Admittedly, one might construe resolute choice as a form of instrumental rather than procedural rationality.

Reections on Deliberate Ignorance

231

passes by the island of the Sirens and the other does not. He faces a dynamic choice problem. At time 1 he chooses a route: either route D, which does not pass by the island of the Sirens, or route S, which does pass by it. If he chooses S, Odysseus faces a second choice when he passes the island at time 2: continue on to Ithaca or crash into the rocks by the island. He thus faces three plans: D which leads to a safe return to Ithaca (I), plan SI which leads to hearing the Sirens’ song and a safe return to Ithaca (SI); and plan SD which leads to hearing the song of the Sirens and death (SD). At time 1, Odysseus most prefers SI, then I, then SD. At time 2, however, conditional on having chosen SI or SD, Odysseus prefers SD to SI. On the standard analysis, Odysseus is either naive or sophisticated. The sophisticated Odysseus knows that the Sirens’ song will cause him to choose SD over SI at time 2. He thus understands that his true choice is between I and SD.17 Sophisticated Odysseus chooses I. Naive Odysseus believes he will choose SI at time 2 so he chooses S at time 1 and crashes into the rocks at time 2. Fortunately, Odysseus receives expert advice from Circe, who offers him two strategies for choosing SI over SD at time 2. First, Odysseus may literally tie his hands to the mast to prevent himself from steering into the deadly shoals; that is, he commits not to steer onto the shoals. Second, Odysseus can ll his ears with wax to avoid hearing the Sirens’ song. This second strategy is one of deliberate ignorance, as Odysseus chooses not to hear (learn) the Sirens’ song.18 Advocates of resolute choice believe that there is a third way that fuses commitment and deliberate ignorance of a different type to get past the island. They suggest resolution, a mysterious psychological commitment not to choose SD at time 2. At time 1, Odysseus resolves to adopt plan SI. As noted, plan SI presents Odysseus with a choice at time 2 to continue to Ithaca or to crash into the rocks. Resolution means that Odysseus adheres to his plan SI; one might say that a resolute Odysseus decides to disregard, at time 2, the choice of SD that is available to him and, at time 2, preferred by him to SI. On this account, resolute choice would be a form of deliberate ignorance. Subjective expected utility theory is, as noted, considered part of procedural rationality. On this account, rationality requires the agent to maximize her expected utility. Consequently, it treats an agent’s beliefs about states of the world and her preferences over outcomes as mental states.19 Much research in psychology shows that agents do not in fact maximize their expected utility. 17


17 A technically better story would make the worse choice at time 2 subgame perfect. In the story of Odysseus, Odysseus knows that when he hears the Sirens' song, he will act irrationally and steer onto the shoals.
18 In the Odyssey, of course, Odysseus fills his sailors' ears with wax and ties himself to the mast. Thus, he arrives home safely and hears the Sirens' song.


Subjective expected utility theory is, as noted, considered part of procedural rationality. On this account, rationality requires the agent to maximize her expected utility. Consequently, it treats an agent's beliefs about states of the world and her preferences over outcomes as mental states.19 Much research in psychology shows that agents do not in fact maximize their expected utility.

19 The economics literature typically starts from the opposite direction; it assumes that agents have, as primitives, preferences over actions in an uncertain world. It then proves a representation theorem that identifies the conditions under which these preferences over actions can be represented by a set of beliefs over states of the world and preferences over outcomes such that the agent ranks action A over action A′ if and only if the expected utility of A exceeds the expected utility of A′.

One can, however, interpret subjective expected utility theory not as a theory of procedural rationality but as a theory about the aims of the agent. The analyst can treat the agent as someone who pursues the objective of maximizing subjective utility, though not necessarily explicitly. Subjective expected utility thus defines the goal of agents but does not describe their decision-making process. It serves as a benchmark that identifies what is optimal, not the road that the agent takes to achieve the optimal decision. On this account, subjective expected utility theory falls under the rubric of instrumental rationality.

Instrumental Rationality

The concept of deliberate ignorance appears, at least superficially, to be neutral with respect to instrumental rationality. Whether deliberate ignorance best furthers one's ends will depend on the agent's ends and the relative costs and benefits of further acquisition of knowledge. It may also depend on the strategic situation in which the agent finds herself and the distribution of information among all interacting agents.

The third position on rationality denies that rationality concerns processes of reasoning at all. Rather, rationality is defined solely in terms of its success at achieving goals. Ecological rationality, therefore, identifies performance criteria for success on a decision-making task rather than assessing the process of reaching the decision against either external or internal reasoning norms (Arkes et al. 2016; Gigerenzer and Todd 2012; Schurz and Hertwig 2019). Ecological rationality thus seems to have a strong appeal. It avoids the claim that individuals are irrational by rejecting the conceptual tie of rationality to reasoning processes. This rejection is compelling in many but not all contexts. An agent trying to solve a practical problem cares primarily about achieving success, not the reasons underlying her solution to the problem. From this perspective, deliberate ignorance is unproblematic as long as it promotes the agent's success better than the pursuit of knowledge would. Moreover, deliberate ignorance plays a significant role in some ecologically rational strategies. The take-the-best heuristic, for example, directs the agent, when choosing between two alternatives, to follow a lexical rule: take the alternative that ranks higher on the first (ordered) criterion that discriminates between the two options. Similarly, the recognition heuristic directs an agent choosing, on the basis of some criterion, between two alternatives to choose the alternative she recognizes; if she does not recognize either or recognizes both, she should revert to another heuristic: randomize (for further discussion and examples, see Gigerenzer and Brighton 2009).
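A minimal sketch of the two heuristics just described may be helpful; the objects, cues, and cue ordering below are hypothetical illustrations, not drawn from the text.

```python
import random

def take_the_best(a, b, ordered_cues):
    """Lexicographic rule: decide on the first cue that discriminates;
    every cue after that one is deliberately ignored."""
    for cue in ordered_cues:
        va, vb = cue(a), cue(b)
        if va != vb:
            return a if va else b
    return random.choice([a, b])  # no cue discriminates: guess

def recognition_heuristic(a, b, recognized, fallback):
    """Choose the recognized alternative; if neither or both are
    recognized, revert to a fallback such as randomizing."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return fallback(a, b)

# Hypothetical example: which city is larger? Cues are ordered by
# assumed validity and are invented for illustration.
cues = [
    lambda city: city in {"Berlin", "Madrid"},   # is it a capital?
    lambda city: city in {"Berlin", "Hamburg"},  # over a million people?
]
print(take_the_best("Berlin", "Hamburg", cues))        # Berlin
print(recognition_heuristic("Berlin", "Wuppertal",
                            recognized={"Berlin"},
                            fallback=lambda a, b: random.choice([a, b])))
```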

Reections on Deliberate Ignorance

233

The argument for ecological rationality faces at least two difficulties. First, how do we know that fast-and-frugal heuristics perform well? Evolutionary arguments might suggest that fast-and-frugal heuristics perform well enough when evolutionary fitness is at issue, but the relation between most human objectives and evolutionary fitness is loose at best. Deliberate ignorance as a strategy for ecological rationality thus requires validation before it becomes rational to adopt it. But against which criterion do we validate it? What is the agent trying to accomplish? What is her objective function? And what constraints does she face? Given any heuristic, there is apt to be an objective function and a set of constraints for which the heuristic is optimal. There is also apt to be an objective function and a set of constraints for which the heuristic is not optimal.

The second difficulty is related to the first: What happens when the agent's objective function is "truth-seeking" or the justification of a theoretical or practical conclusion? What does success in a truth-finding project entail? Can one justify a truth claim? In particular, can it be justified on the basis of a process that deploys deliberate ignorance? In these contexts, right reason would seem to provide the appropriate objective function or, at least, the way to identify whether the heuristic has performed well or not.20

These questions are complex and difficult. Double-blind randomized control trials are justified in part by the deliberate ignorance embedded in the procedure. Here, deliberate ignorance promotes truth-finding by eliminating unconscious biases of the experimenter, the experimental subjects, and those who implement the experiment. Randomized assignment of subjects to treatment and control groups ensures that the experimenter does not unconsciously introduce selection bias, whereas the double-blind feature ensures that the behavior of subjects and implementers is not influenced simply by knowledge of their group assignment. A similar argument would support disregarding decision-irrelevant information in moral choices.

In other contexts, however, deliberate ignorance does not adequately support the truth-finding process. Consider, for example, formal dispute resolution by courts. Society aims to achieve accurate determinations of responsibility, but trial procedures, at least in common-law countries, do not conform to the standard probability calculus (Cohen 1977). Perhaps society does this to promote a more complex goal than truth-finding. Similarly, though the rules of evidence preclude the introduction of relevant but potentially prejudicial evidence, it is unlikely that they would preclude the exclusion of the evidence ignored by many heuristics.

20 The situation here parallels arguments that identify utilitarianism as the standard against which decision criteria are assessed rather than as a decision procedure itself. This argument is offered to reconcile the observed use of "commonsense" morality, which is largely deontic, with the claim that utilitarianism is the correct moral theory.


Rationality when Cognition Is Extended (and in Collective Entities)

Thus far, this discussion of rationality has implicitly assumed that knowledge resides in the agent's head. The prior section, however, suggested that knowledge has, at least to some extent, been externalized. The presence of deferential concepts and extended cognition has a significant impact on an understanding of rationality and deliberate ignorance.21

At the outset, note that the agent can only act on the reasons she has immediately before her. Here, "immediately before her" excludes reasons embedded in externalized knowledge, thus either embodied in physical objects or possessed by other individuals. Ought the agent acquire this externalized knowledge? If acquisition were costless, then presumably the answer would be yes. As acquiring this external knowledge is not costless, however, it would seem that she should, under some conditions, disregard it.

Deferential concepts and extended cognition imply that the agent has some reasons that are not, in some sense, before her. For instance, the reasons may be in someone else's head. How should the agent proceed? Consider, for example, Liza, who must decide whether to undergo surgery or radiation to treat a cancer. She has deferential knowledge about her cancer and its treatment. To make her decision, however, she needs actual, not deferential, knowledge. Should she acquire that knowledge? Or should she defer to someone with the actual knowledge? Arguments developing this line of thought have appeared in the literature on political authority.

Note that the individual with extended knowledge faces the same problems as collective entities. Consider a decision maker in a corporation, say the CEO tasked with making an investment decision. The information needed for that decision is distributed throughout the organization. A good CEO will organize the flow and processing of that information in an effective fashion. The organizational structure will delegate parts of the decision-making task to different individuals, none of whom will act directly on all the reasons the organization has. The organizational structure thus dictates that each individual will be deliberately ignorant of some reasons, and the person who is ignorant will not be the same person who decides who should be ignorant. Thus, the ignorance of some of its agents or members promotes the performance of the corporation. Moreover, in some instances, the corporation might have an interest in ensuring that information dispersed throughout the organization is not in fact brought to the attention of the final corporate decision maker. This structure thus deliberately creates ignorance in the corporation.

21 For more extensive discussions of the sociality of reason, see Sloman and Fernbach (2017) and Mercier and Sperber (2017). These discussions are consistent with, but different from, the discussion here.

Reections on Deliberate Ignorance

235


Deliberate Ignorance in Public Life

Deliberate ignorance lies at the heart of several other issues that arise in political philosophy and public life in general. Here, I briefly introduce three issues that raise complex normative questions.

Knowledge about Collective Entities

The earlier discussion suggested the difficulties in attributing knowledge or ignorance to collective entities. Problems arise, however, not only when one asks whether or how such entities know, but also when one examines the nature of our knowledge of or about such entities. Typically, collective self-knowledge is indexed or statistical. These indices and statistics select the aspects of a complex entity that are considered relevant and ignore, explicitly or implicitly, other aspects of the group. Consider, for example, any measure of income inequality. The measure obviously summarizes the distribution of income into a single number that conceals both positive and normative information. Specifically, each measure of inequality embeds judgments about the significance of different types of transfer of income. The relative mean deviation measure,22 for example, is invariant to transfers between people below the mean (or between those above the mean) but not invariant to transfers across the mean. The choice of index thus implicitly determines which transfers are normatively important. The choice of a particular index or statistic determines what data the analyst gathers and considers.

22 The relative mean deviation measure of inequality is, roughly, the sum of the absolute value of the deviation of each agent's income from the mean income in the population. Transfers of income between individuals on the same side of the mean thus have no effect on the measure, though we might believe that a transfer from a poor person to a poorer person, in fact, reduces inequality.
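A small numeric sketch of the measure defined in footnote 22 illustrates the invariance property; the incomes are made-up, and the unnormalized "rough" version from the footnote is used.

```python
# Numeric sketch of the (rough) relative mean deviation measure from
# footnote 22. Incomes are made-up; the point is the measure's
# invariance to transfers that stay on one side of the mean.

def mean_deviation(incomes):
    mu = sum(incomes) / len(incomes)
    return sum(abs(x - mu) for x in incomes)

base = [10, 20, 60, 70]                  # mean = 40
print(mean_deviation(base))              # 100.0

# Transfer 5 from the poor (20) to the poorer (10): both stay below the
# mean, so the measure is unchanged, though intuitively inequality fell.
print(mean_deviation([15, 15, 60, 70]))  # 100.0

# Transfer 5 across the mean, from 60 down to 20: the measure drops.
print(mean_deviation([10, 25, 55, 70]))  # 90.0
```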


that a justication of the policy as a lter succeeds. The lter has no effect on decisions that involve individuals on a personal basis (where race and ethnicity may be evident), and it has a detrimental effect on the implementation of policies to remedy the effects of discrimination against racial and ethnic minorities because the state lacks the information to target remedies to groups that it refuses to “see.” Second, the policy may express a commitment to nondiscrimination. This justication is equally problematic. Though the policy of deliberate ignorance formally expresses the state’s commitment to nondiscrimination, society may be rife with discriminatory outcomes. This pattern of outcomes also has an expressive aspect: it shows violations of the nondiscrimination principle. Collective Memory The problems of collective memory arise most starkly in the context of the societal transition from an oppressive regime to a less repressive one (see Ellerbrock and Hertwig, this volume). In these situations, the demands of collective memory may conict with the benets of collective forgetting. An oppressive regime may have committed multiple injustices; the more widespread and egregious these injustices, the greater the demand for an accounting of the wrongdoing. Such an accounting, of course, requires remembering the offenses. To avoid a violent transition, the oppressive regime might require an amnesty prior to ceding power. Amnesty, as its etymology demonstrates, requires a forgetting of the wrongs previously done. Moreover, prospective concerns recommend not simply a legal forgetting but an actual forgetting. Forgetting may facilitate reconciliation between disparate groups and may help avoid a cycle of violence and recrimination.23 Truth and reconciliation commissions might be understood as an attempt to balance the two forces of justice and of reconciliation. A commission rst seeks to disclose the wrongdoing of the prior administration and to identify individual wrongdoers. This task must be reconciled with criminal prosecution. As noted, individuals may be unwilling to testify about misdeeds when that testimony might subject them to prosecution. After acknowledging the past, reconciliation may require remedial actions, such as reparations of some sort. Successful reconciliation, however, will require that the victims of wrongdoing disregard, to some extent, the wrongdoing of the prior regime. Similar issues may arise in collective, commemorative decisions. States constantly erect statues, designate national landmarks or monuments, and 23

23 The issues here are complex and may parallel some that arise in the context of individuals. The folk advice to "forgive and forget" exemplifies the tension noted above. Forgiveness requires remembrance, but the efficacy of forgiveness may require forgetting after the forgiveness; the advice might better be phrased as "forgive, then forget."

Reections on Deliberate Ignorance

237

These decisions create an official memory that may be at odds with the unofficial history preserved by individuals or social subgroups. This discrepancy may lead to political conflict. The political construction of the past is most evident in the United States in the design of the history curriculum for primary and secondary education. Individual states identify a set of textbooks that local school districts may adopt to present what is, in effect, an official history. The overt politics that occurs in this arena may reflect the more hidden choices and biases of historical research in general. All history selects one narrative out of many potential narratives; correlation across the narratives may essentially "silence the past"24 (Trouillot 1995).

Democratic Education

The growth and specialization of knowledge implies that each individual has command over, at best, a very narrow segment of that knowledge. This growth and specialization results from and is sustained by deliberate ignorance in the broad definition (5) offered earlier. Though in the Renaissance a single person might have been able to know everything, now it is impossible to know even all of something. This widespread, general ignorance is especially problematic in a democratic society, as citizens should participate actively in policy making, or at least in the evaluation of the policies made by their elected representatives.

Policy formation in the modern world is extremely complex, requiring knowledge of complex scientific issues, social processes, and ethical questions. Consider, for example, the issues that must be investigated to form and evaluate public policy concerning climate change. Effective policy requires knowledge of the underlying science, both to assess the risks and to develop remedies. Understanding the science itself requires knowledge of statistics and decision making under uncertainty. Knowledge of the science, however, is not enough, as any remedy must be both politically viable and socially efficacious. The policy maker must predict how individual behavior will respond to the policy. How will agents adjust their behavior in light of the policy? By how much would a carbon tax reduce emissions of greenhouse gases? How would it affect employment and prices?

This is Trouillot’s term, who identies four moments at which silencing may occur: the moment at which (a) the “source” fact is created, (b) facts are assembled, (c) a narrative is chosen, and (d) the narrative is contested. Silencing may occur at any moment. Some agents may choose to leave fewer traces than others. Legend, for example, contrasts the reputed strategy of Samuel Adams to destroy the record of his actions (the source moment) with the strategy of his cousin, John Adams, to memorialize them. According to this legendary account, Samuel Adams deliberately sought to forestall historical inquiry into certain questions. Similarly, some sources are easier to accumulate to create an archive. Note that technological decisions can determine whether sources continue to exist or be accessible (e.g., when publishers switched from acid-free paper, they shortened the lives of books; when software developers upgrade a product, they may render certain prior les unreadable).


Will public officials monitor behavior and enforce any mandates? Moreover, in designing the policy, citizens and policy makers must determine how the interests of different generations and different groups within generations should be weighted. Assessment of each of these issues relies on knowledge not available to most citizens.

Technocracy apparently offers a solution to this problem. The democratic polity determines the goals that the society should pursue (and how they are to be traded off), and experts then identify the optimal means for achieving these goals. This approach, however, faces at least two difficulties. First, setting goals may require specialized knowledge. To set goals, an agent must determine what interests and values are at stake and then integrate them into an "all things considered" judgment. Often, this process of integration must resolve complex ethical issues, such as how future generations should be treated or how to integrate the interests of distinct individuals. Second, experts do not typically agree on the optimal means to achieve any social goal. In terms of climate change, for example, although there is general agreement among scientists that global warming is occurring, there is disagreement about the rate of warming, the consequences of warming, and the best way to respond. Moreover, various interest groups in society will have vested interests in pursuing different remedial strategies. These interest groups will attempt to influence the technocrats to adopt strategies that the group favors. An "informed" citizenry might be needed to resolve these controversies and to monitor the conduct of the technocrats.

Democratic education must therefore determine what basic instruction it needs to give to each of its citizens. In the nineteenth century, reading, writing, and arithmetic might have sufficed. In the twenty-first century, however, the scale and complexity of technology as well as the policy challenges of large, modern societies require an education that trains citizens to engage in moderate policy discussions that include controversial questions well outside of the citizen's domain of "active" knowledge; that is, the knowledge on which a citizen can intelligently act.

Concluding Remarks

The understanding of deliberate ignorance rests on an understanding of cognition and the mind. Each understanding points to different meanings of deliberate ignorance, justifications for it, and methods for creating it. On the classical conception of the individual mind, knowledge resides exclusively in the brain of the knower. Even in this setting, deliberate ignorance can be understood in at least four different ways. "Ignorance" might refer to a lack of knowledge or to a disregard of knowledge possessed. "Deliberate" might refer to a direct or indirect intention; the agent, that is, might intentionally choose not to acquire the relevant knowledge, K, or might intentionally choose to learn K′ rather than K.

Reections on Deliberate Ignorance

239

Right reason accounts of rationality endorse deliberate ignorance as a direct choice not to know K; deliberate ignorance serves to filter out information that may undermine the operation of rational faculties. Deliberate ignorance here requires effort to conceal or remove the knowledge, as in double-blind studies. More strongly, deliberate ignorance may entail simply the disregard of information that the agent has. This sense of deliberate ignorance underlies some strategies recommended by ecological rationality. Deliberate ignorance here requires only averting one's eyes and is justified on the basis of its success. More strongly still, deliberate ignorance might be understood as endorsing a belief known to be false. Formal models in the natural and social sciences follow this explanatory strategy. Whether and how this practice explains the phenomenon, however, remains open.

When one understands cognition as extending beyond the brain of the individual, the understanding of deliberate ignorance shifts from what is known or not known to how knowledge is distributed across individuals. This distribution of knowledge presents a number of problems. In the context of formally organized groups, deliberate ignorance may result from the division of labor within the organization, so that no individual has the relevant active knowledge to decide well (either rationally or morally). In democratic societies, this poses questions of how to structure education so that citizens can participate intelligently in the formulation of policy.

Acknowledgments

The financial support of the Filomen d'Agostino and Max E. Greenberg Research Fund of the NYU School of Law is acknowledged. Comments on an earlier draft from Richard Brooks, Ralph Hertwig, Liam Murphy, and two anonymous referees greatly improved the discussion, as did the extensive discussion at the Forum.

14

Normative Implications of Deliberate Ignorance

Joachim I. Krueger, Ulrike Hahn, Dagmar Ellerbrock, Simon Gächter, Ralph Hertwig, Lewis A. Kornhauser, Christina Leuker, Nora Szech, and Michael R. Waldmann

Abstract

In this chapter the phenomenon of deliberate ignorance is submitted to a normative analysis. Going beyond definitions and taxonomies, normative frameworks allow us to analyze the implications of individual and collective choices for ignorance across various contexts. This chapter outlines first steps toward such an analysis. Starting with the claim that deliberate ignorance is categorically bad by the lights of morality and rationality, a suite of criteria is considered that affords a more nuanced understanding and identifies challenges for future research.

Introduction

"The game is all taped. Germany won," announced Grandma Harriet as she deliberately thwarted JIK's attempt to remain deliberately ignorant of the game's outcome during the 1998 Football World Cup, so that he could simulate a live experience later through the taped footage. Indeed, Germany beat Mexico 2:1, but the experience of watching the game live, with the associated mounting tension, surprise, and elation, was completely undone by her pronouncement.

The pursuit of knowledge is a fundamental mandate in Western philosophy and the sciences that have grown from it. The Socratic paradox, "I know that I know nothing," reflects the idea that true knowledge, though difficult to attain, must be sought. Francis Bacon equated knowledge with power, in the sense that knowledge has the epistemic power to change common assumptions.


From this perspective, knowledge is good, and ignorance must therefore be bad; deliberate ignorance doubly so. During the Age of Enlightenment, the attainment of knowledge became accepted as a core value. Philosophers sought to emancipate knowledge from the shackles of religion, thereby asserting the human capacity and obligation to seek understanding. Kant (1784), for example, held that ignorance hinders rational reflection and thus compromises ethical behavior. Deliberate ignorance, therefore, is incompatible with the spirit of the Enlightenment.1

Yet many pre- and postmodern social systems embrace or even demand ignorance and its deliberate cultivation, for instance by imposing taboos or limiting the flow of information, presumably to maintain social stability by preventing individuals from gathering information that could be dangerous for the collective or the ruling class (Simmel 1906). Religions, like all systems of social control, make ample use of information-limiting taboos, which are often conveyed as cautionary tales. In everyday life, there are countless reasons for cultivating deliberate ignorance, as exemplified in the epigraph: it is reasonable to want to preserve the suspense of an action by delaying outcome knowledge, yet this can easily be thwarted (Ely et al. 2015).

Recently, scholars from various disciplines have begun to explore the conditions under which deliberate ignorance occurs and may be defensible on moral, psychological, or rational grounds (Gigerenzer and Garcia-Retamero 2017; Golman et al. 2017; Hertwig and Engel, this volume, 2016). There is a certain educational irony to this project, as these scholars appear to be saying "we shall not remain ignorant about deliberate ignorance." Then again, this project is not entirely new, but rather a re-enlivenment of an ancient one. The pre-Socratic Greeks understood the potential dangers of too much knowledge and the wisdom of carefully chosen ignorance. Aeschylus described Cassandra as cursed with painful foresight, and Prometheus, after granting humans knowledge of the future, was forced by stronger gods to leave us with "blind hope."

Whereas the accretion of knowledge is deemed beneficial in most contexts, the prospect of omniscience is unsettling. In his story of The Golden Man, Philip Dick (1980) gave us a thought experiment on the consequences of perfect foreknowledge. The golden man (a mutant in a postapocalyptic world) can see the future perfectly, including his own actions and those of others. When ordinary humans temporarily catch and examine him (which he knew would happen), they find his frontal lobes atrophied. His sole psychological capacity is perception, rendering cognition superfluous. The Golden Man is a cautionary tale, illustrating the dangers of having too much of a good thing, be it food, sex, or information.

1 The present exploration proceeds along post-Kantian lines in the contemporary Western context; however, different understandings of deliberate ignorance hold in traditional, postmodern, or indigenous frameworks (e.g., Fahrmeir and Imhausen 2013; Fried and Stolleis 2009; Joas 1997).


But how do humans know when to forego knowledge that is freely available? Answers to this question will shed light not only on the moral and rational status of deliberate ignorance, but also on its wisdom.

Norms reflect whether certain behaviors, actions, choices, or procedures are defensible (i.e., permissible or explainable) or even mandated (i.e., expected or prescribed). Norms, and the expectations they raise, provide a backdrop against which human behavior can be evaluated and are of great importance in the social sciences (e.g., psychology, economics) and related fields (e.g., philosophy, legal theory) (Popitz 1980; Schäfers 2010). There are fundamental distinctions among norms of morality, norms of rationality, social norms, legal norms, cultural norms, religious norms, aesthetic norms, and other norms across different fields of normative investigation (Möllers 2015). Here, we focus on norms of morality and norms of rationality without claiming this initial analysis to be exhaustive.

Before beginning our exploration into the moral and rational dimensions of deliberate ignorance, we make some clarifications and disclaimers. First, we focus on cases in which deliberate ignorance does not appear to be unambiguously good or bad in the moral or the rational sense. For this reason, we do not question the rationality and ethical legitimacy of, for instance, the blind audition policy used by major classical orchestras. This policy is a normative advance and has helped to increase female musicians' access to the most selective orchestras (Goldin and Rouse 2000). Once it has been determined that specific knowledge has a biasing effect, the question is no longer whether deliberate ignorance is moral or rational but whether the desire to obtain the biasing knowledge is immoral or irrational. We are interested in cases in which moral and rational dimensions collide. In some strategic contexts, for example, deliberate (strategic) ignorance can be rationalized along game-theoretic lines but remains ethically problematic (Schelling 1956). According to legal norms, knowledge of a crime or of the perpetrator's identity entails an obligation to intervene or to testify against the perpetrator. A strategic approach might be to sidestep this obligation by remaining deliberately ignorant: someone without the relevant knowledge cannot, arguably, be blamed for failing to intervene or testify. Deliberate ignorance thus enables rationalizations designed to avoid costly involvement. This type of strategic rationality does not necessarily meet moral criteria.

Second, and following the prevailing approach in Western philosophy and psychology, we see rationality and morality as separable domains; in other words, we assume that the norms applicable in one domain do not reduce to the norms applicable in the other (Fiske 2018; Heck and Krueger 2017). Likewise, we focus on psychological theories and paradigms anchored in methodological individualism; that is, the view that norms of behavior and choice operate at the level of individual agents. We acknowledge that other disciplines emphasize the social construction of norms and knowledge (Porter 2005; Schütz 1993; Wren 1990). We consider the social dimension of deliberate ignorance wherever possible, but leave a full interdisciplinary treatment to others.


We also touch on questions of changing moral norms and expectations as may be seen across political systems, historical periods, and socioeconomic settings (Joas 1997).

Third, related to the previous issue, many instructive case studies of deliberate ignorance (see Appendix 14.1) involve individual decision makers who bear the consequences of their own decisions. Clearly, however, the consequences of an individual's choice for or against deliberate ignorance often extend to other individuals and larger aggregates. Consider an example of someone who knows they are deep in credit card debt, but does not know whether they owe a large or a very large amount. Suppose they will never be able to service the debt, but would feel much worse if they knew the debt is really very large. Would deciding not to find out the size of the debt be normative in the sense of morally permissible? Who else might be affected by this decision? Relatedly, there is the difficult question of how to conceptualize a decision maker that is not an individual but a supra-individual entity (see also Kornhauser, this volume).

Another important dimension in choice is time. How may deliberate ignorance be evaluated when a person's preferences change, and when a person, like Odysseus, predicts different mental states and preferences within himself in future contexts (Duckworth et al. 2016; Elster 2000)? The Odysseus of myth happened to be correct in his predictions about his future preferences. Other mortals may not be so wise or so lucky. Affective forecasting is fraught with prediction errors, chief among them the inability to appreciate one's ability to adapt to highly emotional events (Wilson and Gilbert 2005). We also consider variation over contexts. What is normative in one context may not be in another. We hypothesize that preference spaces can be "fractured" instead of held together by a unitary self. These complexities will make it harder to evaluate deliberate ignorance, but they will also afford a more nuanced understanding.

Finally, we consider the role of deliberate ignorance in the context of our own craft (see also MacCoun, this volume): research and scholarship. At the social and societal level, government policies and cultural values tend to bias the direction of research. Certain projects are favored while others languish for lack of funding (Lander et al. 2019). Such biases implicate the actions of a collective agent, be it represented by a single person or a small group of powerful decision makers who choose deliberate ignorance with potentially far-reaching consequences for societies, local groups, and individuals. Scientific communities and individual scientists manifest deliberate ignorance when investigating some problems while neglecting others. The increasing availability of huge data sets creates ever more situations in which scientists can choose not to look at specific data and not to ask particular questions. By the same token, scientists gathering data over time face the decision of whether to perform sequential analyses. Current conventions discourage such statistical previews, reflecting a norm designed to protect the scientific community from "p-hacking," the practice of testing data periodically and stopping data collection once a desired significance threshold has been reached (Simmons et al. 2011). In other words, these norms positively demand deliberate ignorance from the researcher.

Taxonomic Issues: The Individual and the Collective

Normative analysis can be facilitated by, first, clarifying who chooses or rejects deliberate ignorance (who has agency) and who is affected by its consequences (who is impacted) and, second, distinguishing between the individual and the collective level, where the latter is the more ambiguous (Passoth et al. 2012). A collective may be a small group of interacting individuals or a large aggregate, crowd, network, or society. These distinctions suggest the two-by-two taxonomy shown in Figure 14.1.

We depart from the pragmatic starting point of an individual choosing or rejecting deliberate ignorance whose consequences are limited to the self (top left quadrant). In reality, of course, pure cases of this kind are likely to be few and far between. People are embedded in social networks, communities, and cultures, and whatever affects them tends to affect others as well. But let us set aside small and unintended side effects on others for the sake of this analysis and consider the case of genetic testing for an incurable disease. The paradigmatic case here is a person, Ian, who has been tested, perhaps due to a family history of the disease, and can now pick up the results. Assuming that no protective or preventive action can be taken, that no further reproductive choices will be made, and that the negative emotional response to a positive test result will be stronger than the positive emotional response to a negative test result, Ian may opt for deliberate ignorance.

Figure 14.1 A heuristic taxonomy of types of deliberate ignorance by agency/impact and individual/collective levels.

Agency: Individual / Impact: Individual
• Genetic testing
• Hedonic self-management (e.g., suspense regulation)
• Not viewing one's own Stasi file

Agency: Individual / Impact: Collective
• Testing for communicable diseases (e.g., HIV)
• Not wanting to know what others think (e.g., teaching evaluations)
• Anti-vax

Agency: Collective / Impact: Individual
• Conducting blind peer reviews, blind auditions
• Granting amnesty to junta members
• Not collecting data on certain personal characteristics (e.g., via social media)

Agency: Collective / Impact: Collective
• Sins of the past: deciding not to find out about the acts of a previous regime
• Public and private agencies not accessing prominent individuals' Stasi files


In so doing, he reveals a preference for a state of uncertainty over a state of near certainty. Yet how will his loved ones feel about this (Figure 14.1, top right quadrant)? How might Ian take their expected responses into account? Will Ian be correct in predicting not only his own responses but also the responses of others? This example suggests that when the agent is an individual, normative analysis can, and perhaps should, consider both the individual-to-individual scenario and the individual-to-collective scenario. The outcome of the analysis may differ: a strict within-person scenario may return a judgment that deliberate ignorance is (non)normative, but an individual-to-collective scenario may return the opposite conclusion.

Considering a collective as the agent (Figure 14.1, bottom two quadrants) raises a conceptually thornier problem. A collective could be a governmental office (e.g., the U.S. Environmental Protection Agency) or an individual representing a larger body (e.g., a government official or a CEO). One pragmatic option is to look at a collective as if it were a person. Yet, when considering collective agents such as co-acting crowds, one should be cognizant of the group-mind fallacy; that is, the idea that groups have minds that are functionally equivalent to individual minds.2

The next question is how to distinguish cases in which an individual is affected from cases in which a collective is affected. When a collective agent chooses or rejects deliberate ignorance, many individuals are typically affected. A particular set of circumstances may be required for the decision of a collective agent to affect a single individual. For instance, how might a journal's decision to switch to blind peer review affect an individual academic? How does a government's decision to grant amnesty affect a member of a now dissolved junta? And how does it affect the victims? A full analysis of the normative status of deliberate ignorance requires a review of its projected effects on a class of individuals.

The taxonomic distinction between individuals and collectives matters because dissociations can swiftly emerge. For example, the same individual act of deliberate ignorance might be normatively defensible if the consequences are limited to the individual, but non-normative if they affect the collective (see Appendix 14.1). Furthermore, the individual–collective distinction intersects with the issue of social dilemmas. In a social dilemma, individuals choose strategies, such as cooperation or defection, but they cannot determine final outcomes, which are co-determined by the choices of others (Dawes 1980; Fischbacher and Gächter 2010). The decision to engage in deliberate ignorance may amount to cooperation or competition, and it may benefit or harm the self or others. In modified dictator games, for example, many participants fail to access information about their partners' payoffs if this deliberate ignorance allows them to claim more for themselves while maintaining a self-image of fair-mindedness (Dana et al. 2007).

2 For arguments in favor of the group-mind concept, see Allport (1924) and Ellwood (1920); for arguments against it, see Krueger et al. (2006).


In trust games, by contrast, many players fail to inspect the payoffs of others and thus forego opportunities to protect themselves from betrayal (Evans and Krueger 2011). In prisoner's dilemmas and other "noncooperative games," ignorance of the strategies of others enhances cooperation to the extent that players project their own preferences onto these others (Krueger 2013). Here, deliberate ignorance yields material benefits to the individual player as well as to the collective. If players learned the actual strategies of others, their incentives to defect would be greater and the sum of the payoffs would be smaller. In "anti-coordination games" such as chicken (Rapoport and Chammah 1966) or the volunteer's dilemma (Krueger 2019), knowledge of others' strategies is beneficial and deliberate ignorance is detrimental. The player who knows the other's move can select the best response, whereas the ignorant player must resort to the less efficient strategy of betting on the mixed (probabilistic) Nash equilibrium. Social dilemmas such as these, as well as other manifestations of strategic decision making, complicate the normative evaluation of deliberate ignorance.
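A minimal sketch of the anti-coordination claim follows, using an illustrative chicken payoff matrix; the payoff numbers are assumptions, not taken from the cited studies.

```python
# Sketch of the claim about anti-coordination games such as chicken:
# knowing the opponent's move beats betting on the mixed equilibrium.
# Payoff numbers are illustrative assumptions.

# Row player's payoffs: (my move, opponent's move) -> utility
U = {("swerve", "swerve"): 2, ("swerve", "dare"): 1,
     ("dare",   "swerve"): 3, ("dare",   "dare"):  0}

# Mixed Nash equilibrium: the opponent dares with probability p making
# the row player indifferent: 2(1-p) + 1p = 3(1-p) + 0p  =>  p = 1/2.
p_dare = 0.5

# Ignorant player earns the indifference payoff (either pure strategy
# yields the same expected value at equilibrium):
ev_ignorant = (1 - p_dare) * U[("swerve", "swerve")] + p_dare * U[("swerve", "dare")]

# Informed player best-responds to the opponent's realized move:
ev_informed = ((1 - p_dare) * max(U[("swerve", "swerve")], U[("dare", "swerve")]) +
               p_dare       * max(U[("swerve", "dare")],   U[("dare", "dare")]))

print(ev_ignorant)  # 1.5
print(ev_informed)  # 2.0  -> here, deliberate ignorance is costly
```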

Norms of Morality and Rationality

Whereas some philosophical schools see morality and rationality as closely related, others note important distinctions by highlighting cases of behavior that are both rational and immoral (Adorno and Horkheimer 2002; Dawes 1988a; Krueger and Massey 2009). A comprehensive review must consider all intersections between (ir)rationality and (im)morality. Much as most contemporary scholarship maintains that morality and rationality are distinct, if related, domains, ordinary social perception tends to map them onto a two-dimensional space, with judgments of morality and rationality being separable and, under certain conditions, orthogonal (Abele and Wojciszke 2014; Fiske 2018) or even compensatory; that is, negatively related (Kervyn et al. 2010).

Questions of morality can be approached from two major perspectives: deontology and consequentialism. From a deontological perspective, where the focus is on the morality of an action rather than on its consequences, deliberate ignorance seems wrong. Akin to lying, which according to Kant is always wrong, not taking potentially useful information into account is also wrong. Much like there is a moral duty to speak the truth, there is a duty to access relevant and accessible information. In contrast, consequentialism focuses on the potential outcomes of an action or failure to act and asks whether deliberate ignorance increases total happiness. Although consequentialism and deontology use different axioms, there are attempts at reconciliation (Hooker 2000; Parfit 2013/2017). For example, if from a deontological perspective the principles of nonmaleficence and autonomy need to be traded off (e.g., when deciding whether or not to undergo genetic testing), consequentialist considerations may help to make such tradeoffs in a coherent and principled way.


As to rationality, the major perspectives address the coherence, correspondence, and functionality of judgments and choices. Coherence rationality asks whether deliberate ignorance introduces contradictions within the belief systems of individuals or collectives (Dawes 1988b; Krueger 2012); correspondence rationality asks whether deliberate ignorance threatens the accuracy of people's beliefs (Hammond 2000); and functional rationality asks about threats to an individual's or a collective's ultimate interests and goals, such as survival and reproduction (Haselton et al. 2009). These types of rationality are neither mutually exclusive nor do they entail one another (Arkes et al. 2016).

If the immorality and irrationality of deliberate ignorance were foregone conclusions, we would simply need to work out how the major normative frameworks justify this conclusion. Yet empirical cases cast doubt on the idea that deliberate ignorance is necessarily irrational. Returning to the example of Ian, let us suppose that he decided not to pick up the test results, which would reveal whether or not he carries the gene for Huntington disease. One might wonder why he took the test in the first place. Does Ian's behavior indicate a reversal of preference and, if so, might such a reversal be regarded as irrational? What if the test results become incidentally available? Consulting them would create a state of knowledge even if there is no necessary call for action. The assumption of no implications for action is crucial here because testing positive might affect reproductive choices (Oster et al. 2013).

Implications for action are just one set of consequences. Another set is affective. Ian might choose deliberate ignorance because a positive test result would take a heavy emotional toll, particularly as there is nothing he can do to avoid or mitigate the onset of the disease (Schweizer and Szech 2018). When deliberate ignorance is deployed in the service of anticipatory regret regulation (Ellerbrock and Hertwig, this volume; Gigerenzer and Garcia-Retamero 2017), it may be adaptive and thus rational in the functional sense. From a moral perspective, knowledge of a positive test result might take an emotional toll on Ian's loved ones. Is there a moral obligation to anticipate these negative emotions and prevent emotional harm by not getting tested or by keeping the results secret? Do family members have a moral obligation to get tested as well so that they can respond in the best interest of their families? These are difficult questions, in part because a right not to know has been asserted in the field of genetic testing (Berkman, this volume; Wehling 2019).

Moral Principles

Is deliberate ignorance morally good or bad, neutral, or ambivalent? How might prevalent metatheories of ethics and morality be applied to deliberate ignorance (Waldmann et al. 2012)? As mentioned above, the principal candidate theories here are deontology and consequentialism (specifically, utilitarianism).


In the context of deliberate ignorance, deontology asks whether there is a moral obligation to consider freely available information regardless of the possible consequences, or whether it is permissible to remain ignorant. Any obligation to access such information will likely require the commission of an act, whereas the permission not to access information may result in acts of omission. From the perspective of consequentialism, the morality of retrieving or not retrieving information depends on the totality of the consequences. In all but the simplest contexts, this perspective makes unrealistic psychological demands on a person's ability to foresee future states of the world. Consequentialism must be constrained by empirically sound assumptions about psychological capacities, lest human morality be judged on the basis of unrealistic standards.

Whereas utility theories describe human welfare as resting on subjective preferences and their satisfaction, some moral philosophers propose the existence of objective goods, which individuals may not recognize or use in their decision making (Rice 2013). In addition to health, freedom, and social connectedness, such objective goods may include knowledge, and thus information. This would imply that information should be retrieved whenever possible. It seems that objective list theories of well-being, though grounded in objectively desirable consequences (e.g., having attained knowledge), can be regarded as a variant of deontology. Relatedly, a case can be made for goods that have no direct or measurable consequences for humans. A healthy ecosystem in a remote location, for example, may be considered desirable even if it has no direct effects on human consumption or happiness (Sen 1987). Failing to acquire relevant information (or banning the acquisition thereof) may then be regarded as unethical. "Information consequentialism" can be distinguished from "action consequentialism" and "rule consequentialism." According to information consequentialism, if information can be used to advance human welfare, such information must be acquired.

An interesting problem arises when those who decide on behalf of a collective ignore the collective's preferences. For example, capital punishment may be deemed categorically wrong even if a majority of the population is in favor of it. A deontological analysis must ask not only whether decision makers should ignore popular will, but also whether they are even morally obliged not to find out what that will is. Deliberate ignorance may be warranted because a popular vote may create unwanted pressure to pass laws that undermine human dignity, human rights, or, for example, constitutionally enshrined rights for minorities (Anter 2004; Jellinek 1898).3

3 It is important to note that a consequentialist social planner, who makes decisions for collectives or a society, may exercise deliberate ignorance because efficiency is an attractive benchmark. Thus, a planner may choose a policy solution that satisfies individual preferences better the more important they are for individuals. This benchmark can be invoked even if these preferences are nonstandard (e.g., are decidedly other-regarding rather than self-regarding).


There is no "one size fits all" moral principle or definition of human welfare by which to judge deliberate ignorance. A tractable project would be to explore what deontology and consequentialism have to say about specific cases, thereby gaining a deeper understanding of deliberate ignorance itself as well as the conditions under which it promotes or hinders human welfare. With the geometric growth of genetic data, for example, incidental information provides opportunities for incidental knowledge, some of which has unforeseeable consequences (Berkman, this volume). Do individuals or collectives have the right not to know this information? Cast as a permission, the right not to know is deontological. From the consequentialist perspective, agents may choose to remain ignorant if they perceive negative consequences outweighing positive ones.4

In some contexts, the growing quantities of "potential information" yielded by technological advances present challenges to personal identity. Algorithms that harvest social media data can predict sexual, religious, or financial preferences more accurately than friends and peers (Kosinski et al. 2015). Here, does an individual also have a right not to know, shielding the individual from discovering potentially unsettling and formerly hidden sides of the self? What is the normative force of the Socratic injunction to know thyself? More generally, do people have the right not to know things about themselves that they either do not want to know or that challenge their beliefs? It is unclear whether the greater accuracy of such big-data inferences about personal traits limits the individual right not to know, but it would be odd if it did not. Arguably, the right not to know becomes stronger with the accuracy of algorithmic prediction, thereby bringing morality into greater conflict with certain types of rationality.

Moral Principles and Moral Intuitions

One important and rich research question in the context of deliberate ignorance concerns if, when, and why moral principles and moral folk intuitions conform or fail to conform (see also Heck and Meyer 2019). For instance, some consumers fail to obtain information on the conditions under which goods were produced, knowing that it might reveal practices such as child labor or factory farming. Research has shown that such "willfully ignorant" consumers also denigrate others who seek such information (Zane et al. 2016). This suggests that moral intuitive judgment is shaped by an individual's own choice of deliberate ignorance.


4 Outcomes, however, may become repugnant if this efficiency-based approach permits any kind of preferences. Take, for instance, "happiness." The satisfaction of happiness can justify nearly any kind of intervention. To avoid this problem, policy makers may choose not to find out about specific preferences or turn a blind eye if they happen to know or anticipate them (for more on this "laundering" of policy-relevant preferences by public policy makers, see Bierbrauer, this volume). Interestingly, observers tend to regard those who engage in medical deliberate ignorance as less moral than those who do not, regardless of whether the information is actionable and regardless of information valence (Heck and Meyer 2019).


However, principles and intuitions can also conform while behavior diverges. Serra-Garcia and Szech (2018) experimentally created moral wiggle room by giving participants a choice between $2.50 in cash and a sealed envelope that potentially contained $10 for a worthy cause. Many respondents left the envelope unopened and took the money. This sort of self-serving reasoning is condemned by both normative models and folk judgment (Kunda 1990).

Moral judgment may be sensitive to past behavior that has no differential impact on future consequences or mental states. Let us return to the case of Ian, who has undergone genetic testing but chooses not to pick up the results. Now consider an otherwise identical person, Niall, who never got tested. In this case, Ian has moved closer to obtaining the test results by having had, at some point, the goal to find out his genetic status. He is thus likely to be judged more harshly because it is easier to imagine him obtaining the results (Miller and Kahneman 1986). For Ian, only one event (retrieving the results) needs to be changed (reversed); for Niall, two events (getting tested and retrieving the results) are involved. In other words, reversing a previous decision against deliberate ignorance may seem particularly blameworthy.

More generally, the distinction between acts and omissions is important both in formal deontological models (Callahan 1989) and in folk judgment. A person may achieve a state of deliberate ignorance by acting to block information or by omitting to retrieve it. In general, when the likely (and identical) consequences are negative, acts are evaluated more harshly than omissions, and presumably so when intentions are held constant (Haidt and Baron 1996). Many decisions are made with goals in mind (Higgins 1997). The actors intend to produce certain consequences, and these intentions are also relevant to moral judgment (Malle et al. 2014). Someone who elects not to retrieve medical test results or not to discover hidden sides of their personality (from their social media footprint) in order to avoid emotional distress may be judged less harshly than someone who remains ignorant to keep others in the dark (although informing others may involve a second and separate decision or the anticipation of emotional leakage that would reveal the result).

These questions suggest that there is a rich set of issues that pertain to moral folk intuitions about deliberate ignorance. In the wild, moral judgment may shift from deontological to consequentialist concerns without any change in the consequences, as demonstrated by the trolley problem (Greene 2016). Why these shifts occur is a psychological question. One interesting argument is that the causal framing of the scenario determines the moral lens through which it is seen (Waldmann and Dieterich 2007). As the cases listed in Appendix 14.1 show, both deontological and consequentialist perspectives are instructive for a normative and psychological understanding of when and why normative benchmarks and intuitive judgments consider deliberate ignorance to be ethical or unethical.


Rational Principles

The Standard Model of Rational Choice and Its Shortcomings

The standard model of rational choice5 places Bayesian inference in the service of expected utility maximization. In this model, observations, or data, reduce uncertainty, or at least do not increase it (Good 1950; Oaksford and Chater 2007; Savage 1972). Thus, there is no rationale for not seeking or not using easily available information, particularly information with large potential benefits (Howard 1966; for a critical discussion, see Crupi et al. 2018). Indeed, inferences tend to become more accurate as more information is used, unless that information is systematically biased. Statistical hypothesis testing recognizes the value of observations in that larger samples are more likely to yield "significant" results if there is indeed an effect (Krueger and Heck 2017). Why, then, should people sample less data when the costs of sampling are negligible?

5 Here the "standard model" of rational choice represents the dominant view in economics, psychology, and philosophy. In the humanities, by contrast, there is greater emphasis on the cultural relativity of epistemic practices and thus a greater reluctance to ascribe general normative force to any particular framework.

Bayesian and frequentist (significance testing) models of statistical evaluation often become metaphors of mind (Gigerenzer and Murray 1987). The assumption is that lay people reason (or rather should reason) much like scientists (Nisbett and Ross 1980), preferring more information over less, lest their inferences suffer (Tversky and Kahneman 1971). Descriptive and experimental studies on human reasoning show many departures from this rational ideal. Selectivity in information search is a common finding (Fischer and Greitemeyer 2010), and this selectivity is often interpreted as part of a motivated bias to confirm existing beliefs or expectations (Nickerson 1998). The experience of disappointing initial observations can also prompt people to truncate information search (Denrell 2005; Prager et al. 2018). Cast as a variant of selective information search, deliberate ignorance thus appears to be irrational from the perspective of the standard model. Yet inferences based on limited information can also be superior to inferences based on full information, making simplistic verdicts of irrationality problematic. We turn now to some examples of this counterintuitive finding (from the point of view of the standard model) and consider implications for deliberate ignorance, before discussing other shortcomings of the standard model.

When less information yields better inferences. Kareev et al. (1997) discovered that small samples are particularly sensitive to true correlations because they are likely to amplify them (but see Juslin and Olsson 2005). Perceivers with low working-memory capacity can thus be more accurate in detecting a true correlation than perceivers with a large capacity. The latter might therefore choose deliberate ignorance to do as well as the former. In a context that allows strategic behavior, Kareev and Avrahami (2007) found that when employers rewarded good performance with a bonus without monitoring workers closely (thus exercising deliberate ignorance), both strong and weak workers exerted more effort, which improved the accuracy of assessment and increased productivity. Exploring the conditions under which small samples benefit inference can have far-reaching implications for the rationality of deliberate ignorance and the limitations of standard assumptions (Fiedler and Juslin 2006; Hahn 2014). Note that the sample information must be valid in the sense that it comprises observations from the latent population that the decision maker seeks to understand. In other words, it is critical to distinguish between information that is essentially relevant and information that is not.
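The amplification effect that Kareev and colleagues describe can be illustrated with a short simulation. In the sketch below, the population correlation of 0.5 and the sample sizes are arbitrary choices for illustration, not values from the original studies; the point is that the median sample correlation exceeds the true correlation, with the excess shrinking as samples grow:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.5                                # true population correlation (illustrative)
cov = [[1.0, rho], [rho, 1.0]]

for n in (5, 7, 10, 20, 100):            # sample sizes from tiny to large
    rs = np.empty(20_000)
    for i in range(rs.size):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        rs[i] = np.corrcoef(x, y)[0, 1]  # sample correlation of this small sample
    print(f"n={n:3d}  median r={np.median(rs):.3f}  "
          f"share of samples with r > rho: {np.mean(rs > rho):.2f}")
```

Because the sampling distribution of the correlation coefficient is skewed, a majority of very small samples overstate the association; a perceiver working from a handful of observations is thus more likely to detect (indeed, to overestimate) a real contingency, which is the core of Kareev's argument.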

Negative affect can restore rationality. Affect regulation is a variant of motivated reasoning. Individuals may choose deliberate ignorance if they suspect that retrieved information would be distressing. Many people see keeping up with news and world events as a moral obligation but chafe under the relentlessly negative focus and tone of the coverage. Moreover, most news stories do not call viewers to action (Dobelli 2013). Hence, there is a case for rationally curtailing one's news intake. Foregone knowledge need not worsen a person's epistemic state, and the anticipated negative affect of bad but unactionable news may be factored into their expected and experienced utility. Likewise, individuals who choose not to read the files compiled on them by the Stasi (the secret police of the former East German government) may be engaging in rational affect management (see Appendix 14.1 as well as Ellerbrock and Hertwig, this volume).

Some of the negative affect triggered by avoidable information can have epistemic significance. The information may raise more questions than it answers, thereby deepening the person's unpleasant feeling of ignorance. In other words, actual and experienced ignorance can be inversely related. Bringing this dissociation to light was the devilish charm of the Socratic method. This dynamic has recently seen a renaissance in research on the so-called illusion of explanatory depth (Rozenblit and Keil 2002; Vitriol and Marsh 2018). In learning more, people come to realize how little they know, possibly dampening the mood of at least the epistemically ambitious. Choosing deliberate ignorance early on can protect them from this experience and can thus be a strategy of effective emotion regulation. In the standard model, however, Bayesian expectations about the possible outcomes of obtaining information already factor in all outcomes, including affective ones (Weiss 2005). A person should be neutral toward receiving information that requires no action and should instead seek instrumental information (e.g., medical testing may have actionable implications for getting treatment, making career choices, family planning, and revising savings plans). In short, the possibility that information retrieval will cause negative affect fails to justify deliberate ignorance from the perspective of the standard model of rational choice. In this context, it is interesting to note that humans with lesions to a range of brain regions implicated in the processing of emotions are, counterintuitively, more likely to conform to norms of the standard model than "normal" individuals (Hertwig and Volz 2013).
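The standard-model argument can be made concrete with a toy decision problem. In the sketch below, all numbers (the prior, the utilities, and the "dread" cost) are invented for illustration: acting on the prior alone yields less expected utility than first taking a free, perfectly diagnostic test, so declining the test cannot be rational; only once an anticipated psychic cost of bad news is added (a crude stand-in for the belief-based utilities discussed later in this chapter) does deliberate ignorance come out ahead:

```python
# Two states (healthy/sick), two actions; all utilities are illustrative.
p_sick = 0.1
U = {                                   # U[action][state]
    "ignore": {"healthy": 100, "sick": 0},
    "treat":  {"healthy": 70,  "sick": 60},
}

def eu(action, p):
    """Expected utility of an action given probability p of being sick."""
    return (1 - p) * U[action]["healthy"] + p * U[action]["sick"]

# 1) Decide on the prior alone (deliberate ignorance).
eu_ignorant = max(eu(a, p_sick) for a in U)                      # 90.0

# 2) Take a free, perfectly diagnostic test, then act on the result.
eu_informed = ((1 - p_sick) * max(U[a]["healthy"] for a in U)
               + p_sick * max(U[a]["sick"] for a in U))          # 96.0 >= 90.0

# 3) Add an anticipated psychic cost of learning the bad news.
dread = 80
eu_informed_dread = ((1 - p_sick) * max(U[a]["healthy"] for a in U)
                     + p_sick * (max(U[a]["sick"] for a in U) - dread))  # 88.0 < 90.0

print(eu_ignorant, eu_informed, eu_informed_dread)
```

Within the unamended model, step 2 can never fall below step 1 (free information weakly raises expected utility); rationalizing the choice not to know requires an extra ingredient, as in step 3.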


The role of transformative and disruptive information. In some extreme cases of deliberate ignorance, information is ignored because it might upset or transform a person's set of preferences. This possibility applies to Ian and the genetic test for Huntington disease, or to anyone with reason to fear that their spouse was a Stasi informant (Ellerbrock and Hertwig, this volume). It is difficult to predict or appreciate how much damage receiving unwelcome knowledge of this type might do to a person's inner world. It is thus also difficult to see how adequate decisions can be made on the basis of current preferences alone. Such cases of "transformative experience" have been argued to pose a profound challenge to the standard model of rational choice (Paul 2014), but they are difficult to model quantitatively.

Deliberate ignorance can be an adaptation to prohibitive information costs. In the modern world, deliberate ignorance may occur because environmental change outpaces the evolution of the human mind (Higginson et al. 2012). Ancestral environments afforded certain costly learning opportunities that are no longer pervasive or relevant. An early hominid could learn to discriminate between lethal and harmless species of snakes and spiders by being bitten (i.e., by receiving costly and dangerous information). The relevant information motivating avoidance of all snakes and spiders is now efficiently acquired through cultural transmission (Larrick and Feiler 2015). The modern world offers opportunities to approach snakes and spiders in safe environments (e.g., zoos), yet many people respond like early hominids, showing strong aversion to such creatures even behind glass. Although the cost of obtaining information about the animals is now low, many people opt for deliberate ignorance. In this sense, some modern manifestations of deliberate ignorance can be seen as rooted in adaptations to risky worlds that no longer exist.

The Standard Model of Rational Choice and Deliberate Ignorance: What Gives?

It would seem rash to consider the case for rational deliberate ignorance closed, for at least two reasons. First, some of the points raised above suggest that decision theory is too limited. Second, it seems odd that the standard model of rational choice, by disregarding human constraints, pronounces even behaviors where people "do the best they can" irrational. Constraint-sensitive notions of bounded rationality are unlikely to replace the unbounded standard, but we wish to retain the prospect of normative guidance in an area where it is possible for the agent to act. Potential solutions to such problems may lend themselves to more differentiated treatments of deliberate ignorance once an adequate formal machinery is in place. It therefore appears worthwhile to examine extensions to decision theories as well as their ability to provide a subtler normative treatment of deliberate ignorance.


Bounded and Ecological Rationality and Bounded Optimality

The standard model of rational choice fails to take search and acquisition costs or processing costs into account. Extensions to the model that respond to these limitations have interesting implications for deliberate ignorance. For example, theories of bounded (Simon 1997) or ecological rationality (Gigerenzer and Selten 2001; Hertwig and Herzog 2009) map evolved mental capacities, ask what types of judgment or decision task they will be able to contend with, and posit parsimonious rules or heuristics as a solution. This research paradigm shows that a surplus of information or cues raises the danger of overfitted forecasts. Adding valid information (e.g., parameters) increases the fit between a model and the data used to build it. Future data bring new uncertainty, however, not only from random sources but also from systematic ones, such as features of the environment or agents' preferences. Factors that cannot be foreseen cannot be part of the model. A model that uses just a few valid predictors is likely to be more robust; that is, it will perform better than a fully parameterized model (Dana 2008; Dawes 1979). Being willing and able to deliberately ignore information to avoid overfitting is the ecological decision maker's secret weapon (Katsikopoulos et al. 2010).
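This robustness argument is easy to reproduce. The following sketch (the number of cues, the coefficients, and the sample sizes are all invented for illustration) trains two linear models on a small sample drawn from an environment with three valid cues and fifteen irrelevant ones; the model that deliberately ignores the irrelevant cues predicts new data better, even though the full model fits the training data more closely:

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_noise = 20, 10_000, 15       # small sample, many spurious cues

def make_data(n):
    X = rng.normal(size=(n, 3 + n_noise))       # 3 valid cues + 15 noise cues
    y = X[:, :3] @ np.array([1.0, 0.8, 0.6]) + rng.normal(size=n)
    return X, y

X_train, y_train = make_data(n_train)
X_test, y_test = make_data(n_test)

def out_of_sample_mse(cols):
    """Fit OLS on the training sample using only `cols`, score on fresh data."""
    cols = list(cols)
    b, *_ = np.linalg.lstsq(X_train[:, cols], y_train, rcond=None)
    return np.mean((X_test[:, cols] @ b - y_test) ** 2)

print("full model (18 cues) :", out_of_sample_mse(range(3 + n_noise)))
print("frugal model (3 cues):", out_of_sample_mse(range(3)))   # smaller error
```

With 18 free parameters and only 20 observations, the full model absorbs noise as if it were signal; discarding the spurious cues is what keeps the frugal model's predictions stable.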


Theories of bounded optimality take an alternative approach: the inferential "optimizing" of the standard model is retained but viewed as operating within a set of limitations or bounds. For example, ideal observer analysis (Geisler 1989, 2011) attempts to understand human capacities by comparing them against those of an ideal agent. Where discrepancies are found, the ideal agent is given realistic constraints (e.g., on the nature of the input) until close alignment is achieved. In other words, optimal rationality is reduced to the mind's realistic boundedness. The intended result is an understanding of the mechanisms and processes designed to achieve what is possible given the constraints at hand (Griffiths et al. 2015; Howes et al. 2009; Lieder and Griffiths 2019).

The Attention Economy and the Strategic Rationality of Deliberate Ignorance

Consumers of goods and services, much like consumers of news, must contend with the growing power of online actors to misdirect their attention. Simon (1971:40–41), the father of bounded rationality, anticipated an "information-rich world," a dystopia in which a "wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information that might consume it." Compared with today's reality, Simon's vision seems quaint. As a psychological resource, attention is more precious than ever, and deliberate ignorance can help preserve it. Many agents (e.g., companies, advertisers, media, and policy makers) design "hyperpalatable mental stimuli" to engineer preferences and erode autonomy (Crawford 2015). In the same way that obesogenic environments are replete with food products designed to hit consumers' bliss point (i.e., the concentration of sugar, fat, or salt at which sensory pleasure is maximized), informationally fattening environments reduce consumers' control over their information intake. Many human-interest stories are attention bait masquerading as information. They have opportunity costs that people often fail to notice. In this kind of information ecology, deliberate ignorance can support individuals' agency and autonomy; it may even qualify as a psychological competence of rational decision making. This task remains admittedly difficult as the methods for the cultural production of ignorance evolve, a development studied under the label agnotology (Proctor and Schiebinger 2008). Consider, for instance, the phenomenon of "flooding" in news coverage. On August 4, 2014, an earthquake in China's Yunnan province killed hundreds and injured thousands. Within hours of the earthquake, Chinese media were saturated with coverage of an Internet celebrity's alleged confession of having engaged in gambling and prostitution. News of the earthquake was thus not censored but rather crowded out. The flooding of the media with reports of a trivial scandal reflected a concerted government effort to distract the public from the devastating effects of the earthquake, as objective coverage would have revealed severe weaknesses in the government's readiness for and response to natural disasters (King et al. 2017; Roberts 2018).

In treating attention as an essentially unlimited resource, the standard model of rational choice overlooks the dangers of the attention economy. This blind spot, perhaps ironically, may be seen as an element of deliberate ignorance built into the standard model itself. It remains to be seen how such costs, or indeed errors resulting from junk stimulation, can be modeled. Adding capacity constraints to the model need not be difficult; the question is how limits on attention can best be captured. Woodford's (2009) model, for example, includes entropy-based information costs. Of course, the aim of the standard framework is to be as simple as possible. Thus, the question is: In which contexts are additions to the framework important enough to deserve coverage?


Expanding the Machinery of the Standard Model

Several approaches seek to expand the scope of the standard model itself. Utilities, the currency of the standard model, may not be stable but state-dependent. Beliefs, classically treated as distinct from utilities, may be partially constitutive of them (Loewenstein and Molnar 2018). People often avoid information they suspect will challenge cherished beliefs—beliefs that have high utility for them and their sense of identity (Abelson 1986; Brown and Walasek, this volume; Tetlock 2002). Other revisionist models incorporate strategic unawareness (Golman et al. 2017). Whereas the standard decision model assumes that decision makers can describe all possible contingencies, all possible actions, and all possible consequences, unawareness models relax this assumption by allowing decision makers to describe their world in terms of subsets of the objectively possible contingencies, actions, and consequences, and by allowing that awareness to change over time. If decision makers are unaware of their unawareness, behavioral predictions are not fundamentally different from those of the standard model. In particular, decision makers who are unaware of their unawareness have no incentive to choose deliberate ignorance. The question of how to model decisions under awareness of unawareness awaits further attention (see Trimmer et al., this volume).

Rational Collectives

Thus far we have focused on individual agents, partly because various strategic contexts (e.g., the choice of ignorance) can be rationalized along game-theoretic lines (Schelling 1956), so that the rationality of deliberate ignorance in those contexts prompts less debate. Let us now briefly consider collectives. Do they present specific challenges for an analysis of the rationality of deliberate ignorance? The question of whether organizations can be thought of as having mental states is thorny, reaching beyond the empirical realm and into the metaphysical. Lacking a compelling normative answer, researchers have investigated the conditions under which lay perceivers attribute mental states to groups or organizations (Cooley et al. 2017; Jenkins et al. 2014). It seems prudent to say that organizations or institutions should not be treated holistically or anthropomorphized, nor should they be treated as mere aggregations of individuals, where knowledge, foresight, and intentions can be attributed only to each individual separately. For some purposes, governmental branches may be assumed to have knowledge and intentions, as some philosophers have argued (Pettit 2003). Any evaluative standard, such as correspondence or coherence, can make conflicting demands at the individual versus the collective level, and thus potentially justify deliberate ignorance.

In social choice (as opposed to social agency), the aggregated judgment is more important than the judgments of individuals (Paldam and Nannestad 2000). Here, information beneficial to the individual may harm the performance of the collective, even in nonstrategic contexts. Variants of Condorcet's jury theorem (e.g., Ladha 1992) show that collective accuracy (i.e., the probability that the majority vote on a binary proposition captures the true state of affairs) depends both on the mean individual accuracy of the group members and on their degree of independence (Hastie and Kameda 2005). The same dynamic holds for "wisdom of the crowd" scenarios concerned with estimation (for a discussion of the diversity prediction theorem, see Page 2008). Here, collective error is an increasing function of individual error and a decreasing function of the diversity (variance) of the individual estimates. As a result, additional information given to individuals (e.g., through interagent communication) may be detrimental to collective accuracy even though it potentially increases individual accuracy (Goodin 2004). This is not merely a theoretical possibility; it has been demonstrated in experimental investigations (Lorenz et al. 2011; cf. Becker et al. 2017 and Jonsson et al. 2015) and in simulations (Hahn et al. 2019). Collectives and individuals may thus place conflicting demands on information acquisition and, hence, on the rationality of deliberate ignorance. Where collective accuracy is paramount, avoiding communication (and hence information uptake) may improve performance.
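The diversity prediction theorem behind this claim is an algebraic identity — collective error equals average individual error minus the diversity of the estimates — and the trade-off is easy to simulate. In the sketch below, all quantities (the true value, the error spreads, and the shared bias standing in for social influence) are invented for illustration: the "herded" crowd has the smaller individual errors, yet the independent crowd's average is far more accurate because its errors cancel:

```python
import numpy as np

rng = np.random.default_rng(2)
truth, n = 100.0, 50

def crowd_stats(estimates):
    collective_err = (estimates.mean() - truth) ** 2
    avg_individual_err = ((estimates - truth) ** 2).mean()
    diversity = ((estimates - estimates.mean()) ** 2).mean()
    # Page's diversity prediction theorem holds exactly:
    assert np.isclose(collective_err, avg_individual_err - diversity)
    return collective_err, avg_individual_err, diversity

independent = truth + rng.normal(0, 20, n)    # large but idiosyncratic errors
herded = truth + 15.0 + rng.normal(0, 5, n)   # communication: smaller errors, shared bias

for label, est in (("independent", independent), ("herded", herded)):
    c, a, d = crowd_stats(est)
    print(f"{label:12s} collective error {c:7.1f} | "
          f"individual error {a:6.1f} | diversity {d:6.1f}")
```

In examples like this one, communication installs a shared bias, so diversity collapses faster than individual error improves — which is why a collective can rationally prefer that its members not see one another's estimates.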


Intertemporal Choice and Multiple Selves

In the standard model of rational choice, updated preferences are consistent with initial preferences, a feature known as dynamic consistency (Sprenger 2015). A dynamically consistent decision maker is indifferent between committing to a course of action conditional on receiving a later signal and taking the preferred action once the signal is received. Many experiments have shown systematic violations of dynamic consistency due to temptation, self-control problems, or the updating of multiple priors.

Suppose a decision maker chooses from a set of available actions, each of which is optimal in a different state of the world. Evaluating the utility derived from the chosen action relative to the utilities of the foregone alternatives may cause feelings of regret. Even if a decision maker can expand their awareness of available actions at no cost, this expanded awareness can create more potential regret. Decision makers may thus not aspire to consider a wider range of options.

Intertemporal choices often show dynamic inconsistencies. People making choices often ignore consequences that will occur only in the distant future. This discounting of the future is particularly likely in the presence of temptation, that is, when attractive rewards in the near future carry a high risk of adverse consequences in the more distant future. The pleasure of each cigarette smoked is immediate, whereas the risks of disease or untimely death are faraway and uncertain. The standard model assumes that people exponentially discount streams of utility over time, such that preferences are consistent over time. The relative preference for well-being at an earlier date over a later date is thought to be the same regardless of whether the earlier of the two dates is near or remote. With exponential discounting of the future, such preferences are rational in that they are coherent. The empirical evidence, however, shows that the near future is discounted more steeply than the distant future (Ainslie and Haslam 1992). For example, when presented with a choice between doing seven hours of house cleaning on December 1 or eight hours on December 15, most people (asked on October 1) prefer the seven hours on December 1. When faced with the same choice on December 1, most choose to put off the chore until December 15. This preference reversal is also known as present bias (Jackson and Yariv 2014). When considering trade-offs between two moments in the future, present bias puts greater weight on the earlier date as it draws closer.
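The reversal can be reproduced with a quasi-hyperbolic ("beta-delta") discounting rule, a common formalization of present bias: an immediate payoff is weighted fully, while a payoff k days away is weighted beta * delta**k. The parameter values below are illustrative, not empirical estimates:

```python
beta, delta = 0.5, 0.999   # illustrative present-bias and daily discount factors

def value(payoff, days_away):
    """Quasi-hyperbolic discounted value of a payoff received days_away from now."""
    return payoff if days_away == 0 else beta * delta**days_away * payoff

# Asked on October 1, both chores lie in the future (61 vs. 75 days away):
print(value(-7, 61), value(-8, 75))   # -3.29 > -3.71: clean for 7 hours on Dec 1

# Asked on December 1, the shorter chore is now immediate:
print(value(-7, 0), value(-8, 14))    # -7.00 < -3.94: put it off until Dec 15
```

Because the one-off factor beta applies to every future date but not to "now," the ranking of the two chores flips as soon as the earlier date arrives — exactly the December 1 reversal described above. An exponential discounter (beta = 1) would never reverse.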


In the struggle between short-term and long-term preferences, deliberate ignorance can be detrimental in the long term. Take the example of smoking. By one estimate, each cigarette smoked reduces life expectancy by about 15 minutes, or half a microlife; that is, by about the time it takes to smoke another two cigarettes (Spiegelhalter 2012). Smokers who enjoy the nicotine buzz and would rather not worry about future health risks choose to ignore relevant information and remain trapped in a cycle of self-damaging behavior. One useful psychological perspective on this phenomenon is that of "multiple selves" (Jamison and Wegner 2010). While the present self enjoys the act of smoking or its direct physiological effects, it would nevertheless like its future self to get informed and quit smoking if the long-term, detrimental consequences are probable and severe. Once this future self becomes the present self, however, it too will yield to the temptation of the present and postpone seeking information on health risks. Deliberate ignorance is irrational when it contributes to time-inconsistent preferences and self-destructive behaviors.

In the case of some medical treatments, however, deliberate ignorance may turn out to be the wise choice. Consider the case of a highly effective drug that also happens to have dreadful but highly improbable side effects. People tend to overweight such low probabilities and might therefore forfeit the restoration of their health. Those who elect not to review these side effects have a better prognosis (Carrillo and Mariotti 2000; Mariotti et al. 2018). Deliberate ignorance may also protect a person from the danger of certain medical interventions of low utility and a risk of overdiagnosis. The U.S. Preventive Services Task Force recommends that men 70 years of age and older not submit to the PSA test to screen for prostate cancer: "Many men with prostate cancer never experience symptoms and, without screening, would never know that they have the disease" (Grossman et al. 2018:1901). Choosing not to take the PSA test can protect older men from the psychological harm associated with false-positive results (distress and worry) as well as from the harmful effects of invasive treatment (e.g., incontinence or erectile dysfunction).

We have explored the implications of dynamic inconsistencies for (ir)rationality, but there is also a moral dimension. From a deontological perspective, different temporal selves can lay claim to their own unique rights and obligations. Much as the present generation must grant rights (and obligations) to future generations (Gosseries 2008), present individual selves must be mindful of their own future incarnations. An analysis along consequentialist lines suggests the same conclusion.


The Case of Science

Decisions about which research projects to pursue imply decisions about which areas we, individually and as a society, want to learn about and which we wish to ignore. An exploration of the dynamics of knowledge production in science and scholarship may go beyond the strict definition of deliberate ignorance, but it is instructive with regard to the broader impacts of choosing for or against knowledge. Scientists do not gather information that is lying around ready for the taking; they operate at the interfaces of discovery, knowledge production, and knowledge construction. Hence, the core question driving the exploration of deliberate ignorance remains: How should deliberate ignorance be managed when foregoing knowledge has potentially large, though uncertain, impacts? Here, we focus on the high- and mid-level strata at which deliberate ignorance can affect science and research in ways that may be questioned on normative grounds (for related phenomena, see Proctor and Schiebinger 2008).

As a potent example, consider the decision of the U.S. federal government not to fund research on gun violence or on policies that might mitigate its effects (i.e., the 1996 Dickey Amendment). This decision amounts to an attempt to keep a population of stakeholders ignorant, and it was soon criticized as a strategy to protect the gun industry (Jamieson 2013). Supporters of the policy argue that research would endanger rights guaranteed under the Second Amendment of the U.S. Constitution. Focusing on a trade-off between values, they make a deontological argument in favor of deliberate ignorance. Other examples of policy-based deliberate ignorance have a weaker deontological grounding, such as the historical suppression of research on the harmful effects of tobacco or, more recently, sugar.

There are also cases of a collective demand for deliberate ignorance without reference to commercial interests or competing political values. Research on the hydrogen bomb is such a case, at least in hindsight. Robert Oppenheimer, himself instrumental in the development of the atomic bomb, warned in vain against research on a hydrogen bomb. The future will tell whether contemporary research on artificial intelligence will be judged similarly. Some serious risks are currently being discussed (Tegmark 2017), and it is not clear whether deliberate ignorance will be achievable in this area. The implications of artificial intelligence outstripping human intelligence are by definition unpredictable, and we will not know until it is too late (Hawking et al. 2014). Similarly, there is room for debate on whether deliberate ignorance is advisable, ethical, and feasible in the context of biological research on deadly viruses or human cloning (i.e., not doing such research).

Considering the distinction between individual and collective agents is also instructive in the context of science. The examples presented thus far highlight collective decisions. But individual researchers may also decide not to perform particular types of work. On one hand, choosing one research topic inevitably has opportunity costs: it takes away time from other projects. A scientist may therefore have to choose where to engage in deliberate ignorance, not whether to do so. On the other hand, strategic considerations come into play. A researcher may decide that a particular project cannot be done in good conscience. Yet they may reasonably suspect that others have fewer scruples. The researcher is then caught in a prisoner's dilemma, where opting for deliberate ignorance is the cooperative choice and rejecting it is an act of selfish defection.
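This incentive structure can be sketched as a standard two-player dilemma. In the toy payoff matrix below, the numbers are invented; only their ordering matters (defecting against a cooperator pays best, mutual defection pays worst but one):

```python
# C = abstain from the fraught project (deliberate ignorance), D = pursue it.
# Payoffs are (researcher A, researcher B); the values are illustrative.
payoffs = {
    ("C", "C"): (3, 3),   # both abstain: no harmful knowledge, no one scooped
    ("C", "D"): (0, 5),   # A abstains; B does the work anyway and takes the credit
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # both pursue: the harm occurs and the credit is diluted
}

for b_move in ("C", "D"):
    best_reply = max(("C", "D"), key=lambda a: payoffs[(a, b_move)][0])
    print(f"If B plays {b_move}, A's best reply is {best_reply}")  # always D
```

Because defection dominates, "if I don't do it, someone else will" is not just a rationalization but an equilibrium prediction — which is why individual scruples alone rarely sustain collective deliberate ignorance.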


In their definition of deliberate ignorance, Hertwig and Engel (this volume, 2016) specified that the information-acquisition costs incurred are zero or negligible. Clearly, the financial costs of pursuing research are not negligible, at least for society, yet science may nevertheless be considered in this context. Contemporary Western societies have committed resources to doing science, such that the question of which issues to pursue is not determined primarily by cost. This locates the case of science within the potential space of deliberate ignorance, at least for those cases where costs play a negligible role in the choice of what to study.

Some scientists may reject deliberate ignorance because they seek to enhance their reputations (Falk and Szech 2019; Loewenstein 1999). For example, many physicists and engineers saw involvement in the Manhattan Project to develop the first nuclear weapons as a historic opportunity (Mårtensson-Pendrill 2006). Using the replacement logic of the prisoner's dilemma, these individual scientists could reject deliberate ignorance by arguing that if they did not do the work, someone else would (Falk and Szech 2013). At the same time, involvement in a collaborative research project provides opportunities to diffuse responsibility and blame if outcomes turn out to be more damaging than desired (El Zein et al. 2019; Fischer et al. 2011; Rothenhäusler et al. 2018).

The benefits and harms of scientifically attained knowledge are not always predictable. Whether basic discoveries prove to be relevant or applicable is often a matter of time. This unpredictability is inevitable: if research outcomes and their consequences were already clear, there would be no need to do the research in the first place. Biological research may lead to medical advances, but it may also create new toxins or diseases. Uncertainty extends into the future. It is impossible to tell how future generations will evaluate what now appears to be scientific progress. Just as preferences can change within individuals, they also often change across generations.

This sketch of deliberate ignorance in the context of science points to larger issues beyond the scope of this preliminary exploration. Science—as a personal, group, or social project—represents a wager on an uncertain future. Much of its yield, and the decisions underlying it, will be comprehensible only in hindsight. To say that science is basically rational and morally neutral is perhaps a useful normative starting point. Once deliberate ignorance is recognized as one of the forces shaping the direction of scientific work, this set of assumptions will require continual reevaluation.


Open Questions

This exploration of the normative issues raised by deliberate ignorance began with taxonomic questions, continued with concerns about morality and rationality, and ended with questions about intertemporal choice and scientific work. Some questions found preliminary answers; others need to be addressed in future work. At this stage, it seems that established frameworks for moral and ethical judgment cover most manifestations of deliberate ignorance. The same general principles (deontology, consequentialism) that apply to other types of action (or failures to act) seem sufficient for the normative evaluation of deliberate ignorance. This does not, however, rule out the possibility that applying those principles to deliberate ignorance may raise unique issues. Only continued normative analysis of specific examples will tell. With respect to rationality norms, however, an extension of the standard model of rational choice seems necessary to accommodate deliberate ignorance. Some extensions already exist (see Brown and Walasek, this volume), and they suggest that some forms of deliberate ignorance are irrational only by the lights of the standard model. Continued refinements, or radical alternatives, will contribute to a nuanced assessment of this intriguing phenomenon. One particular challenge for future research is the modeling of collective decisions (see Trimmer et al., this volume).

We have repeatedly seen that a comprehensive understanding of deliberate ignorance in normative terms requires a careful analysis of specific cases, instances, and episodes. In this chapter we have been able to consider only a limited subset of cases. There is no way to tell how badly this sample might be biased. Indeed, there is as yet no sense of whether a true but latent population of deliberate ignorance cases can even be said to exist. To stimulate further debate and analysis, we have assembled a set of cases (see Appendix 14.1) to provide broader coverage of the domain. We now conclude our exploration with a few final observations.

Expanding the Domain

We recommend a graduated consideration of cases of deliberate ignorance, as illustrated in Figure 14.2, beginning with the strict criterion of "no [use of] knowledge," proceeding to "delaying knowledge" and "disregarding knowledge," and ending with "negating knowledge." This nested hierarchy offers progressively more open definitions of deliberate ignorance. It covers instances in which an agent chooses to delay accessing information or to disregard information that is already known. The broadest definition includes instances in which the agent acts on information known to be false. We are not committed to the idea that a conceptual expansion of this type is necessary. The core definition restricts deliberate ignorance to situations in which an agent chooses not to learn a knowable fact that may, in principle, offer large benefits (Hertwig and Engel, this volume, 2016; see also Schwartz et al., this volume).


Figure 14.2 The nested structure of kinds of deliberate ignorance: no use of knowledge, delaying knowledge, disregarding knowledge, negating knowledge.

This definition is attractive because of the sharp demarcation line it draws, which facilitates rigorous analysis; yet it is limited in that it restricts the degree to which analytical or empirical results can be generalized. A tight definition may also run into problems in the context of collective agents, where states of "ignorance" and "knowledge" are differentially distributed across members of the group, making it difficult to assess the ethical legitimacy and consequences of a collective decision to exercise deliberate ignorance.

Individual Autonomy

There are at least two interpretations of the value of autonomy, each with its own assessment of deliberate ignorance. One account views autonomy as the free exercise of an agent's will. From this perspective, instances of deliberate ignorance will be assessed on the basis of their consequences. Some instances may be acceptable, others not. The second account sees autonomy as the free exercise of an agent's will as informed by all relevant reasons. In other words, full autonomy can, by definition, be exercised only if all relevant knowledge is available. This account appears to condemn every instance of deliberate ignorance because the deliberate choice not to know is, by definition, anathema to a fully autonomous individual.


Norm Conflict

In transitional or transformational societies (Ellerbrock and Hertwig, this volume), conflicting and changing norms raise difficult issues. A society recovering from a troubled past can pursue many different but ethically justifiable goals and principles, including repairing the social fabric and fostering justice, transparency, and peace. Such goals individually have pragmatic and moral force, but they may be mutually incompatible. On what normative or empirical grounds can a preference for one goal over others be justified? One empirical approach is to conduct field experiments that implement policies serving different goals and to measure their long-term effects on various outcome indicators (Campbell 1969; Staub 2014; Staub et al. 2005). Although this evidence-based approach is important, it does not provide a full answer to the question of how norm conflicts can be resolved: individuals may place different weights on an outcome's indicators depending on the normative goals they prioritize. It may prove beneficial to take a closer look at how norm conflicts are approached in law (e.g., in human rights cases).

From Analysis to Empirical Testing

Exploration of the normative contexts and subtexts of deliberate ignorance is only just beginning, but some empirically tractable questions suggest themselves. To what extent, for example, would the general public agree with the various ethical principles presented here? How can research probe folk intuitions? To what extent do actual decision makers meet the (new) assumptions made in this chapter? A decision maker may still be described within a rationality framework if assumptions are added or adjusted. How, then, can we test whether these assumptions are descriptively accurate? And to what extent are decision makers rational in their choice of deliberate ignorance? For instance, a decision maker considering getting tested for Huntington disease may think about the consequences, or utilities, of this knowledge in narrow terms (e.g., how it will affect my health decisions) or in much broader terms (e.g., how it will affect my social, financial, and professional decisions and, by extension, my family's well-being).

Deliberate Ignorance in the Context of Political and Normative Change

Our normative analysis did not assume any specific temporal coordinates, thus acting as if norms were stable in time. This is, of course, a simplifying assumption. Norms are subject to variation across time and cultural space. Such normative change is salient in times of political upheaval. Given these temporal dynamics, how might deliberate ignorance be deployed to consolidate sociopolitical change?


Modern democracies tend to legitimize norms by grounding them in human rights and freedoms. The right to know, and the right to have access to all information collected on oneself by governmental agencies, has been central to modern statehood and notions of liberty since the French Revolution. The right not to know has more recently complemented our understanding of modern democratic theory, although the relation between these two types of rights continues to be debated. Future work will need to explore how instrumental deliberate ignorance can be in the development, consolidation, or erosion of social norms. Conversely, how do norms shape the practices, opportunities, and (moral) outcomes of deliberate ignorance? As the analysis of deliberate ignorance in the context of the Stasi files demonstrates (Ellerbrock and Hertwig, this volume), deliberate ignorance has an ambivalent normative quality in transformational societies: it can stabilize norms or erode them. Interestingly, the genesis and formation of normative orders has only recently emerged as a subject of historical study, emphasizing the interdependency of the formation of knowledge and morals (Frevert 2019; Knoch and Möckel 2017). Deliberate ignorance is not (yet) an object of research in this context, but it is clearly relevant. In modern Western societies, normativity is negotiated and legitimized in public discourse. Analysis of the prevalence of deliberate ignorance in public discourse, its structure, and its role in situations of norm conflict should therefore pique the interest of both historians and behavioral scientists. Similarly, the relatively recent reevaluation of deliberate ignorance in the context of political transformation deserves detailed analysis. Such analysis would offer new opportunities to understand when and under which conditions societies treat deliberate ignorance as an ethically legitimate or condemnable practice.

In Lieu of Closure

You are not obliged to complete the work, but neither are you free to desist from it.
—Rabbi Tarfon (Pirkei Avot, 2:16)

We have embarked on a journey toward the demystification of ignorance, and especially of its deliberate variant. In Western societies, ignorance (not knowing) is associated with stigma. Babies are ignorant, and overcoming that ignorance is an essential part of growing up. How, then, can ignorance be a deliberate choice? We hope to have shown that some instances of deliberate ignorance are normatively defensible, but that this depends on the confluence of the type of norm (moral or rational), the type of agent (individual or collective), and the type of person or group bearing the consequences. Choosing deliberate ignorance in a context in which such a choice is normatively defensible may be the mark of wisdom, and continued research efforts are needed to enable people to choose wisely.


Peeling back the veil of ignorance remains a powerful normative mandate: overall, accepting ignorance is normatively less defensible than deliberately choosing it, but sometimes it must be chosen.

Appendix 14.1

To stimulate further debate and analysis, we have assembled ten real-life examples that illustrate the primary functions of deliberate ignorance, the actors affected, and the ethical (consequentialist, deontological) and rationality principles that may be involved (e.g., expected utility maximization, game theory).

1) Huntington disease (also known as Huntington's chorea)

Background: This inherited, autosomal dominant disease progressively breaks down nerve cells in the brain, resulting in severe impairments in a person's ability to move, think, and reason. An affected person eventually requires help with all daily activities, although language comprehension and awareness of family or friends do not diminish. Although most people develop symptoms in their 30s or 40s, the rate of disease progression varies. Genetic testing provides reliable diagnosis at any age. Yet, even with a family history, some people deliberately choose not to take the test.

Function of deliberate ignorance: Since a positive test result augurs an early, agonizing death, individuals may choose to regulate their fear by remaining deliberately ignorant. This choice, however, also affects others: family members will be unable to prepare for the role they may need to assume as the individual's health deteriorates, or for the trauma they will experience in witnessing a loved one's physical demise and early death.

Ethical principles: From a consequentialist perspective, if the test result is negative, the choice to know will undoubtedly bring about the best result for all. If, however, the test result is positive, it is not obvious whether a consequentialist perspective favors deliberate ignorance or knowledge. The emotional consequences are difficult to predict, and people generally are not very good at affective forecasting (e.g., Wilson and Gilbert 2005). Moreover, a positive test result may also have profound consequences for relatives, who learn that they are likewise at risk.

Rationality principles: The choice of deliberate ignorance cannot be accommodated within an expected utility maximization or game theory framework. Additional assumptions, such as belief-based utility, are necessary to model it.


2) Genetic testing (23andMe)

Background: 23andMe is a genetic testing service that provides information on customers' ancestry composition and genetic predisposition to health risks. A person who does not get tested may not know that they are at increased risk for a certain disease (e.g., cancer, cardiovascular disease), which may increase the likelihood of its manifestation (e.g., because no precautions are taken). Moreover, if one family member has their genes sequenced, other family members are able to infer that they likewise have an increased risk of certain diseases.

Function of deliberate ignorance: Results pointing to an increased risk of a certain disease may imply monetary, emotional, and other costs for the individual and for others (e.g., partner, family). Not getting tested helps to regulate these emotions.

Ethical principles: From a consequentialist perspective, which actions bring about the best result will depend on how the actor and their loved ones respond if the actor indeed has an above-average propensity for developing a serious and potentially life-threatening disease. From a deontological perspective, the principle of nonmaleficence can be invoked: "Do not hurt the feelings of your family and loved ones."

Rationality principles: The choice of deliberate ignorance cannot be accommodated within standard rationality frameworks such as game theory or expected utility theory; additional assumptions, such as belief-based utility, would be necessary to model it.

3) Respecting privacy: Reading a family member's e-mails or diary

Background: A person has the opportunity to secretly read a family member's private correspondence (e.g., e-mails, love letters, a diary).

Function of deliberate ignorance: The choice not to breach another's privacy maintains trust in significant social relationships. Accessing another's e-mail account is a breach of trust, irrespective of what the e-mails might contain. Both the immediate "victim" and others are likely to lose trust in the snooper.

Ethical principles: From a consequentialist perspective, the choice to respect privacy seems to bring about the best result, as this choice reduces the risk of collective mistrust and its downstream effects. From a deontological perspective, people are obliged not to betray the trust of others and to respect their privacy.

Rationality principles: Interactions between family members can be understood as repeated games, and the decision not to know (i.e., the decision not to breach another's privacy) can thus be modeled as rational.


4) Bone marrow donation

Background: Bone marrow produces new, healthy blood cells (around 200 billion every day). Healthy people can become bone marrow donors for patients fighting life-threatening illnesses (e.g., some types of cancer). The donation is a surgical procedure in which liquid marrow is drawn from the donor's pelvic bone and transferred to the recipient. The tissue types of donor and recipient must match. Donors may experience side effects such as headaches, dizziness, fatigue, muscle pain, and nausea.

Function of deliberate ignorance: Choosing to remove one's name from a bone marrow donor registry helps to eschew responsibility; the potential donor will never find out whether there is a need for their tissue (Dana et al. 2007).

Ethical principles: From a consequentialist perspective, everybody being registered and getting notified in case of need would bring about the best result. The choice to remove one's name from the registry seems to be in conflict with the deontological principle of beneficence (helping others), because a potential recipient may die. At the same time, the principle of autonomy leaves the choice of whether or not to become a donor to the individual. There seems to be societal consensus that it is undesirable, but not unethical, to "opt out" of being a bone marrow donor or, more generally, an organ donor.

Rationality principles: The decision not to be on a registry can be rationally reconstructed if the choice is understood as one of strategic ignorance that allows the agent to eschew moral responsibility. Dana et al. (2007) refer to such a strategy as exploiting moral wiggle room.

5) Society's sins of the past

Background: Societies that undergo transformations from one political, knowledge, value, and social system to another (e.g., Germany after the defeat of the Third Reich) may decide not to ask, tell, or find out about the sins of their citizens under the old regime.

Function of deliberate ignorance: From a collective perspective, deliberate ignorance may help to maintain social cohesion and peace. From an individual perspective, choosing not to know can help regulate emotions (e.g., not having to grapple with the fact that one's grandparents may have been Nazis).

Ethical principles: From a consequentialist perspective, the choice of deliberate ignorance may bring about the best result in terms of social cohesion and peace, especially if the number of victims is relatively small or the number of perpetrators relatively large. From a deontological perspective, not punishing past sins and failing to compensate victims is unethical.


Rationality principles: A game theoretical view might suggest that social welfare is better served under deliberate ignorance than under a knowledge regime (at least for certain periods of the transformation process).

6) Sexual orientation algorithm

Background: Wang and Kosinski (2018) developed an algorithm that, on the basis of five facial images per person, can detect a person's sexual orientation with 91% accuracy for men and 83% for women. This is higher accuracy than is achieved by human judges. Should policy makers prevent the development of such algorithms?

Function of deliberate ignorance: The key function of preventing the algorithm's use is to maintain impartiality and fairness and to leave the choice of whether or not to disclose one's sexual orientation to the individual. The party exercising deliberate ignorance here is not necessarily the individual but the community (e.g., citizens, regulators).

Ethical principles: From a consequentialist perspective, the regulatory choice to ban such algorithms may bring about the best result given that, relative to the meager benefits (e.g., targeted advertising), people's right to privacy is at risk (e.g., in the case of a person who has chosen not to make their homosexuality public). From a deontological perspective, this regulatory action is not in conflict with any key duty. On the contrary, in a society that respects the human right to freedom from discrimination based on sexual orientation, deontological principles (avoiding harm) would also be consistent with banning such algorithms.

Rationality principles: From a marketing perspective, not creating the algorithm may be irrational, as it means foregoing profit. A choice of deliberate ignorance here thus cannot be explained by rational principles unless ethical principles are invoked or added.

7) Governmental data collection

Background: The collection of any data on race, ethnicity, or religion is prohibited under French law.

Function of deliberate ignorance: The primary goal of the law is to maintain impartiality and fairness.

Ethical principles: France initially applied this form of deliberate ignorance to avoid various forms of discrimination. From a consequentialist perspective, it may indeed bring about the best results; this, however, depends on the measurable effect. A probably unanticipated consequence of not collecting these data is that it is harder to prove discrimination in the workplace based on race and ethnicity, or to design and implement policies to counter its effects (e.g., affirmative action).


From a deontological perspective, this law is not in conflict with any key duty.

Rationality principles: This case lies outside the realm of expected utility theory and game theory.

8) Blind auditioning

Background: In an attempt to overcome bias (e.g., gender, race, affiliation with certain teachers or musicians) in the hiring of musicians, most major U.S. orchestras moved in the 1970s to change their audition policies, both by democratizing the decision-making process and by hiding a musician's identity from the selection committee. Holding auditions behind screens, at least in the early rounds, became the new standard. In 1970, fewer than 5% of the musicians in the top five orchestras in the United States were women; by 1980, the proportion had increased to 10%; by 1997, to 25%. This increase has been attributed, at least in part, to the adoption of blind auditions (Goldin and Rouse 2000:715).

Function of deliberate ignorance: The primary goal of having musicians perform behind a screen is to ensure impartiality and fairness; otherwise, the selection committee may be biased (consciously or unconsciously) toward selecting male candidates, candidates known to them, or individuals recommended by teachers or musicians from major conservatories or other orchestras.

Ethical principles: From a consequentialist perspective, shielding the selection committee from irrelevant but biasing information (gender, ethnicity) fosters the best result. From a deontological perspective, the selection committee's act of deliberate ignorance is not in conflict with any key duty.

Rationality principles: That blind auditioning is even necessary cannot be explained on rational grounds; it entails that biases and discrimination exist in the first place. A rational decision maker should be able to consider only decision-relevant knowledge (i.e., the musical performance) and ignore irrelevant or misleading information (e.g., who studied with whom).

9) Teaching evaluations

Background: Teaching evaluations are systematic procedures for reviewing teacher performance by having, for example, students complete a questionnaire. Ideally, the teacher will use the resulting feedback to improve their teaching practice. Some teachers, however, may decide not to read students' teaching evaluations.

Function of deliberate ignorance: A teacher may avoid reading evaluations in order to avoid the negative emotions elicited by negative feedback. By remaining ignorant of the feedback, however, they forego insights into how their teaching could be improved.


Ethical principles: From a consequentialist perspective, this act of deliberate ignorance does not produce the collectively best outcome. Not reading teaching evaluations makes it harder to improve one's teaching and thus to maximize students' learning outcomes. From a deontological perspective, this choice may risk harming others, with harm interpreted widely as relating to students' learning outcomes.

Rationality principles: Deciding not to read student evaluations can potentially be deemed rational to the extent that acting on them implies the disutility of additional work in the future, but only if we assume that it is impossible for a teacher to simply disregard critical evaluations.

10) Entrepreneurial success

Background: It is thought that 80–90% of all start-ups fail. An entrepreneur may decide not to explore the chances of their company's success (e.g., by analyzing the market or retrieving data on the success rates of start-ups in a comparable domain).

Function of deliberate ignorance: Not knowing that most new businesses fail can safeguard an entrepreneur's motivation to set up a new business and their belief in its success (and thus the motivation to invest significant time and cognitive and financial resources).

Ethical principles: From a consequentialist perspective, individuals exercising this type of deliberate ignorance may dare to take entrepreneurial risks, to innovate, and potentially to reap large rewards. From a deontological perspective, it may harm others to the extent that they invest their capital or labor in a business with a low chance of success. The financial well-being of the new business's employees or of the entrepreneur's family may be adversely affected should it fail.

Rationality principles: From a utility perspective, not knowing cannot be rationalized; the probability of success should be found out and assessed.

Institutions

15 Institutions Promoting or Countering Deliberate Ignorance

Doron Teichman, Eric Talley, Stefanie Egidy, Christoph Engel, Krishna P. Gummadi, Kristin Hagel, Stephan Lewandowsky, Robert J. MacCoun, Sonja Utz, and Eyal Zamir

Abstract

This chapter examines the institutional implications associated with facilitating or combatting deliberate ignorance and explores concrete institutional mechanisms that could serve to limit, distort, or otherwise structure people's informational environment. It examines the basic building block that individuals might use to achieve their goals—contracts—and highlights the advantages and problems associated with consensual mechanisms that could be used in this regard. The chapter further analyzes how organizational structures and mechanisms (e.g., corporations) may be utilized to compartmentalize information and construct the informational environment. Finally, it introduces a new institutional frontier—technology—and shows how developments in the areas of artificial intelligence and machine learning can promote the goals discussed throughout the chapter.

Introduction

Other chapters in this book have established that under certain conditions, individuals might hold a preference to remain ignorant. Such preferences raise tricky normative concerns: whereas in some situations it might be preferable for people to fulfill their preferences for ignorance, in other settings it might be better to discourage or completely prevent people from remaining ignorant.

In this chapter, we evaluate the institutional implications stemming from these insights, exploring which policy tools should be used in conjunction with deliberate ignorance. Given the broad scope of the discussion, we begin by clarifying the types of situations that constitute the focus of this chapter and distinguish between different units of analysis germane to our discussion.

A first category of cases concerns a lone actor: someone who seeks to perpetuate ignorance as to a personal matter by which others are unaffected or to which they are indifferent. A second category consists of actor-versus-actor scenarios, where the decision not to inform oneself entails externalities for other persons. In principal-agent scenarios, for example, one actor (the agent) carries out actions on behalf of another (the principal). In such cases, the principal may arrange for the agent to be kept ignorant (e.g., when an editor blinds the identity of a manuscript author to reviewers), or the agent may act based on information the principal affirmatively does not wish to know (e.g., for efficiency, or in cases of "plausible deniability," as a means of avoiding responsibility). More complex cases involve distributed agency, where individuals operate as members of larger collectives such as corporations or government agencies. Here, it may be meaningful to describe the collective entity as deliberately ignorant even though some members of the collective are not ignorant; alternatively, the collective entity might be said to "know" something even though each individual member is partially blind to the whole. Cutting across all these scenarios, there are also actor-versus-audience issues, where third parties have only an attenuated personal stake in a decision but may nonetheless care about how it is handled (e.g., citizens may demand transparency as a matter of good governance, or officials may opt for transparency to promote a sense of legitimacy).

Three distinct strategies could be deployed in these situations to design the preferred informational environment. First, one could focus on the elimination of unwanted information. Such a goal might be achieved ex ante by preventing the creation of the information in the first instance (e.g., do not collect data on race), or ex post by destroying existing information (e.g., burn the Stasi files; see Ellerbrock and Hertwig, this volume). Second, one could take the existence of information as given and attempt to shield the person from the information. This goal could be attained by quarantining either the person (e.g., software that filters information out of a person's environment) or the information itself (e.g., information escrows). Finally, if individuals have been exposed to the unwanted information, one could still attempt to limit its impact by adopting a decision rule that requires actors to ignore it.1

1 Existing studies in the area of judicial decision making suggest that people often cannot ignore relevant information (see Zamir and Teichman 2018). However, specific case studies do show that this strategy could prove effective (Rachlinski et al. 2011).

That said, to the extent that deliberate ignorance is undesirable, the polar opposite policies are warranted: policy makers could mandate the creation of information by requiring the collection of data, barriers to the free flow of information could be removed, and decision makers could be obligated to incorporate certain information into their choices. In this chapter, we examine these potential institutional responses.

Before delving into the details, however, two preliminary remarks bear emphasis. First, we adopt the definition of deliberate ignorance presented by Hertwig and Engel (this volume, 2016): "the conscious individual or collective choice not to seek or use information." We fully acknowledge that there are borderline cases which test the boundaries of this definition, and we have thus limited our discussion to what could be dubbed the "easy cases" of deliberate ignorance. Second, any institutional reaction to deliberate ignorance presupposes a normative judgment regarding the "all-things-considered" desirability of ignorance in the context under consideration. Whereas in some situations deliberate ignorance might be desirable behavior that should be facilitated, in other settings it might reflect problematic behavior that should be discouraged. In still other settings, the relevant decision makers may not be confident about the normative desirability of ignorance. Here we limit our analysis to mapping the potential institutional tools that are geared toward ignorance, assuming the policy goal is prespecified and clearly understood.

Below, we explore concrete institutional mechanisms that could serve to limit, distort, or otherwise structure people's informational environment. We begin by examining the basic building block that individuals might use to achieve their goals—contracts—highlighting the different consensual mechanisms that could be used and exploring whether they should be regulated. We then discuss the role of organizations by examining how organizational structures and mechanisms (e.g., corporations) construct the informational environment and how different mechanisms might be utilized to further or limit people's ability to compartmentalize information into different organizational units. Our discussion of "deliberate opacity" progresses up the societal ladder, looking at the role of ignorance within the state and highlighting the limits of transparency. We then introduce a new institutional frontier—technology—and explore how developments in artificial intelligence and machine learning might be used to promote the goals discussed throughout the chapter. In closing, we offer a roadmap to chart future research on the topic.

Contracting for Ignorance

Individuals may have a preference to ignore a piece of information or information within a defined category. A key institution that enables individuals to act on such preferences is the making of enforceable promises, or contracts.

Contracts that effect ignorance could, in principle, be narrow agreements limited to the informational assets (e.g., confidentiality/nondisclosure agreements). They might also be bundled into a broader transactional framework in which assent to the ignorance component is not independently elicited (e.g., the terms and conditions of a website). Here we examine the extent to which contracts can effectuate allocations of information and explore some of the potential challenges associated with such contracts.

The Feasibility of Contracting for Ignorance

The core of a contract for ignorance is a promise that enables the party or parties to structure the informational environment they wish to have. At times, the informational preference might be a by-product of a broader underlying contractual relationship. This occurs, for example, when patients have preferences regarding the type of genetic information they wish to receive. Alternatively, the contract might focus on information created by third parties. For instance, an Internet platform might promise its users to shield them from certain types of undesirable information (e.g., violence, pornography, hate speech).

Contractual arrangements can both eliminate unwanted information and quarantine it. As to the former, many commercial agreements include provisions that require the destruction of information. For example, Section 5a of the American Bar Association (ABA) Model Confidentiality Agreement, associated with corporate acquisitions, includes the following provision mandating ex post destruction of information:

If either party to this letter of agreement determines that it does not wish to proceed with a Transaction, it will promptly inform the other party of that determination. In that case, or at any time upon the request of the Discloser for any reason, the Recipient will promptly, and in any event no later than 30 days after the request, deliver to the Discloser or, at the Recipient's option, destroy all Evaluation Material (and all copies, extracts, or other reproductions thereof), whether in paper, electronic, or other form or media, furnished to the Recipient or its Representatives by or on behalf of the Discloser pursuant to this letter agreement. In the event of such a determination or request, all Evaluation Material prepared by the Recipient or its Representatives shall be destroyed within such 30-day period and no copy, extract, or other reproduction thereof shall be retained, whether in paper, electronic, or other form or media.

Similarly, even if a contract does not go as far as to order the destruction of information, many such agreements may instead require that information be held by some sort of escrow agent, such as an external law firm, or limited only to certain divisions within an organization, such as the general counsel's office (ABA Model Confidentiality Agreement 2011, commentary at 356).

To be sure, contracts cannot realistically guarantee individuals the precise informational environment they desire. For one, informational barriers often impede the possibility of specifying all of the information from which the promisee wishes to be shielded.

Consequently, a contract calling for ignorance might not be able to capture all of the benefits associated with trade. For example, a patient might not realize that a benign piece of information obtained from a routine procedure (e.g., blood type) could result in unintended consequences and distress (e.g., misattributed parenthood). In addition, practical constraints limit the extent to which a promisee can be shielded from certain types of information. For instance, healthcare experts exposed to such information may have difficulties disregarding it, even though they are contractually instructed to do so, and in subsequent interactions with a patient, they might unwittingly reveal its content. Moreover, information is also obtained through real-world encounters (e.g., a billboard viewed every day, television ads) and might be difficult for a promisee to overlook. For contracts to serve as an adequate tool through which individuals limit their access to information and achieve deliberate ignorance, these limitations must be understood and addressed.

Regulating the Ignorance Contract

While contracts might help individuals design the informational environment they seek to create, numerous imperfections might create a need to regulate aspects of the contractual relationship. Most immediately, legal institutions can help facilitate the task of ignorance contracting, when it is desirable, via default rules, interpretive conventions, and remedies for breach. In addition, legal regulations may sometimes limit ignorance contracting when it appears undesirable for normative reasons. Reviewing the entire range of possible regulatory interventions is beyond the scope of this chapter. Such interventions could range, however, from procedural rules that help facilitate the transfer of information between parties during the formation of the contract to substantive rules that aim to alter the content of the contract (through both default and mandatory rules). In addition, the legal system could decide not to enforce certain contracts if they conflict with public policy concerns. The precise legal tool depends on both the contractual context and the regulatory environment in each jurisdiction.

An initial set of problems that might merit a regulatory intervention in the ignorance contract stems from internalities, namely, the need to protect the interests of one of the contracting parties given the imperfections of the contracting process. Even perfectly rational parties make suboptimal contracting decisions due to a host of market failures. First among these is the lack of information available to customers about the content of their contractual obligations: most clauses in most current contracts (i.e., in standard form contracts) are practically invisible to customers because they do not (and cannot reasonably be expected to) read them. This, along with a host of other market failures, allows suppliers to include provisions in the contract that appear to reflect the customer's wish to remain ignorant of some information, or to receive other types of information. When such clauses are the product of suppliers' exploitation of customers' lack of information or of other market failures, regulating them might rest on efficiency considerations (and often on non-efficiency grounds as well).

To the extent that contractual parties are not perfectly rational value maximizers, the case for regulatory intervention grows more compelling. The complexity of the ignorance contract, coupled with issues such as bounded rationality, illiteracy, and limited education, might all cause promisees to agree to contracts that do not serve their interests.

The doctrine of informed consent, and the inability of patients to contract around it in certain instances, serves as a case in point. When patients receive medical treatment, the law usually requires medical professionals to explain the potential risks and side effects of the treatment. Only then are patients able to give informed consent. A patient, however, might not want to hear any of this information and might choose instead to remain deliberately ignorant (see also Zamir and Yair, this volume). This might occur when the patient experiences anticipatory anxiety. In addition, knowledge of side effects might increase the probability that they will actually occur. Evidence of the latter has been obtained in randomized clinical trials, where it was shown that if a patient forms a negative expectation regarding certain side effects of a medication, the patient may experience the anticipated effects even when treated with a placebo—a nocebo effect (Häuser et al. 2012). Notwithstanding this potential preference for ignorance, some restrictions, at least in the German legal system, seem to be derived from a paternalistic idea of protecting the patient's autonomy and right not to be informed. As there is a dearth of case law, the exact extent of these restrictions is, of course, subject to debate. In essence, there seems to be agreement that patients are required to have an overall understanding of the medical process and the general risk level in order to be able to consent to a procedure. That is, German law removes from people's choice set the option to remain ignorant. For experimental treatments, the patient's right not to know seems to be even more constrained.

A second set of issues that might merit regulatory intervention in the ignorance contract arises from the effect of ignorance-creating contracts on third parties. Information can often be useful for many people. As a result, a contract that eliminates or quarantines information might entail significant negative externalities, since it prevents parties outside the contract from utilizing and benefiting from the information at hand. Contracting parties may agree to destroy information that could be beneficial to third parties, or they may agree that one of them will be shielded from some kind of information, although such ignorance might adversely affect people interacting with that person. To protect the interests of such third parties, a regulatory response might be required.

One example of this conflict is the case of sperm donation. To facilitate the creation of a child from a sperm donation, different agreements have to be concluded.

The actors involved are the sperm donor, the fertility clinic, and the recipient (e.g., the birth mother). The sperm donor often has an interest in remaining anonymous and ignorant (also in the future) of any biological children created, and might want to protect this interest within the sperm donation contract with the fertility clinic. The contract between the fertility clinic and the sperm donor will thus often include a clause that ensures the sperm donor's privacy, and possibly the deletion of his identifying information. This agreement will also be reflected in the contract between the fertility clinic and the recipients of the sperm donation. This legal construction of the relationship means that any child born through a sperm donation will be unable to gain access to information about the identity of his or her biological father, for any of the following three reasons: the information will already have been deleted by the time the child is old enough to inquire, the child will not know whom exactly to ask, or the fertility clinic will deny access to the information because of its contractual obligations.

In Germany, regulators have decided that the risk of negative externalities merits regulatory intervention. Under the Grundgesetz (the constitutionally mandated Basic Law), individuals have a fundamental right to know their biological origin as part of the allgemeines Persönlichkeitsrecht (provision that delineates general personal rights) enshrined in Articles 1 and 2. The contractual agreement described above concerning sperm donation would, however, limit an individual's ability to exercise this right. One potential solution to this problem could be not to enforce such secrecy agreements. Yet given the possibility that the information could be destroyed before an individual pursues this right, this response may not suffice. Against this backdrop, Germany has recently enacted a statutory regime that mandates the creation and preservation of information. The Samenspenderregistergesetz (law governing the registration of sperm donors) establishes a sperm donor registry. The law obliges fertility clinics to collect and transfer to the registry personal information about sperm donors (name, place and date of birth, nationality, address), recipients (name, place and date of birth, address, number and birth dates of any children born), and the fertilization process (time of use, successful conception, calculated due date). Both donor and recipient must be fully informed about the process and agree to this informational component of sperm donation. The sperm donor can also include a personal message in the register, in which he can state his (unenforceable) wish not to be contacted in the future. The data is kept for 110 years (i.e., the maximal life expectancy according to German legislation). The statute allows persons who suspect they are donor offspring to access any relevant information from the registry. Once a request has been made, the law demands that the sperm donor be informed about the request four weeks before the data is handed out, thus giving him advance notice of any potential future contact initiated by his offspring. This legal mechanism eliminates the possibility of deliberate ignorance among sperm donors.

Finally, it is worthwhile flagging a more recent context in which broad societal considerations might play a role in determining the desired institutional structure: public and political discourse. In recent years, a significant part of public and political discourse has shifted to the digital city square. Platforms such as Facebook, Twitter, and the like have become the central venue of political campaigns and public debates. Although these debates have been subverted by fake news (Allcott and Gentzkow 2017), the business model of platforms such as Facebook currently profits from attention and "likes," thus reducing their incentive to facilitate deliberate ignorance among users (see also Krueger et al., this volume). While constitutional provisions that safeguard freedom of speech generally protect the creation of information, they are imperfect when it comes to enabling individuals to quarantine information that they deliberately wish to ignore. As ever more information reaches us via digital channels, those channels can be designed to further personal preferences regarding information. People could choose, for example, to remove from their digital environment information that relates to opposing political parties, social movements that they find repugnant, and so forth (we discuss this further below in the section on "The Role of Technology"). In fact, social media algorithms often reinforce deliberate ignorance, for example, by not exposing people to opposing opinions and thereby creating filter bubbles and echo chambers (Pariser 2011). Technically, it is possible to program algorithms that expose people to the full range of arguments on an issue or that filter or flag fake news. Nonetheless, the decision to regulate such private platforms hinges on thorny normative questions about the proper shape of public discourse in modern democratic societies.

Beyond Private Ordering

Markets hold promise, as entrepreneurs hope to profit from addressing the preferences of potential clients. Entrepreneurs have an incentive to understand these preferences as precisely as possible and to design solutions that exactly match them. In principle, markets are powerful because buyers are protected by competition: if one provider does not satisfy them, they can stop purchasing its services and trade with a competitor. Yet competition in markets for content is notoriously precarious. Many content markets are in the hands of a very small number of providers, if not a single one. This is also the case for many commercial platforms. The main economic reason is network externalities: the value of the service grows nonlinearly with the number of customers. In such markets, the only competitive pressure results from the possibility that one content provider or platform is superseded by a superior (or merely more popular) new player. The less credible the competitive threat, the more likely it is that desirable information is withheld from a customer. Such market failures can justify regulatory oversight.

In addition, markets by their very nature cater to preferences that are backed by an ability to pay. As a result, the power to design a person’s informational environment might be limited only to those able to purchase this service. To the extent that societies care about an egalitarian distribution of the right not to know, this would imply that the state might need to regulate the provision of this service or turn to providing it via nonmarket mechanisms.

The Role of Organizations

Much of human activity is conducted within organizations. States, public and private firms, labor unions, and the like all play a central role in modern life. Given this crucial function, we focus here on the interplay between organizational structures and deliberate ignorance. Organizational structures add another level of complexity: What does deliberate ignorance mean on a collective level? After a brief overview of the way in which information is produced, transferred, and stored within organizations, we explore how different liability regimes influence the knowledge acquired by firms and by the individuals within them, and highlight the ability of organizations to quarantine information within a defined domain, thus facilitating deliberate ignorance.

Institutional Knowledge

The literature on knowledge within institutions has focused primarily on knowledge sharing (rather than on deliberate ignorance) and on how various knowledge management tools or practices can stimulate it. In this body of work, different perspectives on organizational knowledge can be distinguished. These perspectives influence which knowledge management strategy a company selects, but they can also inform the discussion of deliberate ignorance. According to Wasko and Faraj (2000), knowledge can be viewed as an object, as embedded in the individual, or as embedded in the community. When knowledge is viewed as an object, it is assumed that knowledge can easily be codified and that employees can easily store their knowledge in a repository. Accordingly, organizational knowledge is the aggregate of all knowledge pieces in an organization, and management provides a knowledge repository with search facilities and motivates sharing with financial incentives. Conversely, the knowledge-as-embedded-in-individuals perspective argues that it is not as easy to separate knowledge from people, as not all knowledge can easily be codified. As a consequence, knowledge is often lost when an expert leaves the organization. Knowledge management should thus help identify the relevant experts and motivate them through recognition or status. The knowledge-as-embedded-in-community perspective goes one step further and argues that knowledge emerges through shared practices and routines. It is thus more than the sum of individual pieces of knowledge; it is collectively owned and collectively produced in discussions and routines (for a more detailed discussion, see Wasko and Faraj 2000).

The cognitive psychology literature, on the other hand, has dedicated significant attention to the way in which information is dispersed within organizations. The concept of "executive ignorance" (Turvey 1977) refers to the notion that, as a matter of efficiency and perhaps necessity, the conscious component at the top of a hierarchical cognitive system will not have access to, or knowledge of, the details of lower-level processing. The term has been adopted in the organizational behavior literature to refer to the notion that superiors in a hierarchy should delegate authority to subordinates and should not attempt to "micromanage" them; that is, the executive's time and attention are better occupied by higher-level goals with a longer time horizon.

Liability Rules and Deliberate Ignorance

In many areas of law, corporations are the nominal defendant in either criminal or civil litigation. Nearly all of that litigation is fault based rather than strict liability based. Consequently, to prevail, the plaintiff/state must demonstrate by an appropriate standard of proof that the defendant acted with a requisite state of mind. In the United States, securities fraud, for example, usually requires a type of recklessness associated with a material misstatement or omission, tort cases usually involve showing negligence, and criminal cases usually require either extreme recklessness or willfulness to secure liability. Consequently, courts are routinely required to assess the information flow within a corporation and to examine whether knowledge held within it can be attributed to the corporation itself.

To take a concrete case, suppose a middle-level manager for an oil and gas company (Camile) makes an impromptu public statement about the company's excellent proven reserves and its superb financial condition, and this statement gets investors excited and causes trading markets to respond upward. However, one of the company's on-site oil-field managers (Emiliano) recently discovered that the company's proven reserves are almost fully depleted. Meanwhile, the CFO (Tamika), a member of the company's board, also recently discovered a material weakness in the company's financial records. It is later revealed that Camile's statements are both completely false, and the stock price crashes back to (or even below) its initial level. Given this market correction, private investors or the government (or both) are likely to sue the corporation, alleging securities fraud. Under U.S. law, an element of their case is that the plaintiff/state must prove that "the corporation" acted recklessly in making the false statement; without delving into the nuances of legal doctrine, this requires some type of awareness on the part of the corporation. Consequently, corporations are often incentivized to construct their institutional knowledge so that it limits their legal liability—a task that often involves creating and perpetuating ignorance.

Jurisdictions have developed different legal rules that define the conditions under which one can attribute knowledge to a corporation. By most accounts, there are three prevailing "tests" for examining the corporate state of mind: (a) the common-law "bad actor" test; (b) the "collective scienter" test; and (c) the "puppet-master" test. Let us now review these rules and evaluate the ways in which they influence the incentives of corporate actors to engage in deliberate ignorance.

Under the common-law "bad actor" approach, the fact finder must inquire into the state of mind of the individual corporate official who actually acted or made the false or misleading statement.2 In our running hypothetical, this would require determining whether Camile (the corporate speaker) knew, or was reckless in not knowing, that her statements were false. This approach focuses on the knowledge-as-embedded-in-the-individual perspective described above. In this case, proving Camile's willfulness/recklessness would be difficult based on what is known, since it appears that she was not at the hub of information transmission about the company's proven reserves or financial condition. Indeed, if the oil company knows it is going to be subject to the bad actor rule, it will plausibly organize itself to ensure that Camile does not have that access, since ignorance will shield the company from liability.

According to the collective scienter approach, corporate state of mind boils down to determining whether the totality of the officers', directors', and employees' knowledge—if all of it is aggregated and collected (hypothetically) within the mind of the single person making the statement—meets the level of knowledge sufficient to assign liability.3 Similar to the knowledge-as-an-object perspective, the collective scienter approach treats the aggregate of single pieces of information as the corporate knowledge. In our example, the collective scienter test effectively ascribes any knowledge that Emiliano and Tamika have to Camile (even if she did not, in fact, know it), and then determines whether (according to that ascribed knowledge) Camile acted recklessly. Given the facts outlined above, it seems almost certain that there would be liability.

Finally, the "puppet-master" test asks whether any of the company's senior officers/directors had the requisite state of mind, regardless of whether they were the ones engaging in the assertive act.4 In our running hypothetical, the critical person is Tamika, who clearly knows about the financial weakness of the company but seemingly does not know (yet) about the lack of proven reserves.

2 See, e.g., Southland Securities v. INSpire Ins. Solutions, 365 F.3d 353 (5th Cir. 2004); Phillips v. Scientific-Atlanta, Inc., 374 F.3d 1015 (11th Cir. 2004).
3 See, e.g., U.S. v. Bank of New England, 821 F.2d 844 (1st Cir. 1987, criminal case); Monroe Employees Retirement Sys. v. Bridgestone, 387 F.3d 468 (6th Cir. 2004).
4 See Glazer Capital Management, LP v. Magistri, 549 F.3d 736 (9th Cir. 2008).

Thus, under this test, one might imagine that the plaintiffs/government would be able to prevail only in their fraud claim as to the financial health of the corporation, but not as to the misstatement about the company's proven reserves. Much like the bad actor test, the "puppet-master" test might also create strategic incentives for deliberate ignorance.
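Because the three tests differ mainly in whose knowledge is aggregated, their core logic can be captured in a few lines of code. The sketch below (Python, using the names and facts of the running hypothetical) is a stylized illustration of the aggregation rules only, not a statement of legal doctrine; the knowledge sets and helper functions are our own simplification.

```python
# Stylized illustration of the three knowledge-attribution tests, using the
# running hypothetical. A schematic simplification, not legal doctrine.

knowledge = {  # what each individual actually knows
    "Camile":   set(),                    # the speaker (mid-level manager)
    "Emiliano": {"reserves depleted"},    # on-site oil-field manager
    "Tamika":   {"financial weakness"},   # CFO, a senior officer
}
senior_officers = {"Tamika"}
speaker = "Camile"

def bad_actor(statement_facts):
    # Only the speaker's own state of mind counts.
    return bool(knowledge[speaker] & statement_facts)

def collective_scienter(statement_facts):
    # Everyone's knowledge is hypothetically pooled in a single mind.
    pooled = set().union(*knowledge.values())
    return bool(pooled & statement_facts)

def puppet_master(statement_facts):
    # Any senior officer's knowledge suffices, whoever actually spoke.
    pooled = set().union(*(knowledge[p] for p in senior_officers))
    return bool(pooled & statement_facts)

# Camile's false statement touched both facts:
facts = {"reserves depleted", "financial weakness"}
print(bad_actor(facts))            # False: Camile herself knew neither fact
print(collective_scienter(facts))  # True: pooled knowledge covers both
print(puppet_master(facts))        # True, but only via Tamika's knowledge,
#   i.e., only as to the financial-weakness misstatement
```

The sketch makes the strategic incentive visible: under the bad actor rule, keeping the speaker's knowledge set empty shields the corporation, whereas under collective scienter, no internal partitioning of knowledge helps.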

Organizations as Facilitators of Deliberate Ignorance

Thus far, our analysis has focused on the structure of information within a single firm. Yet since firms and organizational structures create clear boundaries between assets, they can be utilized to partition knowledge as well. Consequently, organizational mechanisms can serve to quarantine knowledge, thus ensuring that certain people remain ignorant. Isolating information within a discrete organizational unit can serve numerous goals: incentivizing the creation of knowledge, addressing conflicts of interest, and strategically avoiding liability. We explore these three functions in turn.

Governments hold a vast amount of information regarding individuals. This information could serve many competing legitimate goals the government might wish to promote. Nonetheless, individuals might be reluctant to transfer information to the government if that information can be freely used to promote any end the government sees fit. For example, while citizens might willingly transfer biometric information to the government to obtain a passport, they might not agree to allow the government to make any other use of this information. Thus, if the government wishes to promote the transfer of information from its citizens, it might wish to credibly commit to limiting the use of the information it receives. To this end, the government may adopt rules that prevent the free flow of useful information within it and partition the information it holds into separate organizational units. Viewed as a whole, the government in this setting can be said to be partially ignorant: while it holds the relevant information, certain sections of the government cannot gain access to this information and consequently remain ignorant. A similar framework holds for scientific institutions, which often rely on the collection of public data to address scientific questions. According to the principles of data privacy protection and good scientific practice, the collection of data that can reveal an individual's identity (including IP addresses in online surveys) is either prohibited or subject to an anonymization process. In this sense, deliberate ignorance is highly desired to promote common knowledge.

Organizational partitioning of information may also help address situations involving conflicts of interest. For example, an accounting firm might operate as both an auditor and a business consultant for the same client. To fulfill both roles faithfully, the accounting firm might wish to conduct the two activities within two separate organizational frameworks. Once separate entities are created, different barriers to the free flow of information can be constructed (i.e., so-called Chinese walls/firewalls).
Finally, it should be noted that organizational structure can also be used strategically to avoid legal liability. As discussed above, legal liability often hinges on the ability to attribute knowledge to the organization. By adopting a complex organizational structure in which corporate knowledge is dispersed among numerous subsidiaries, a firm might be able to insulate itself from liability. Numerous real-world cases suggest that although many corporations are relatively flat, others maintain a highly segmented corporate structure involving dozens (if not hundreds) of subsidiaries and just as many lower-generation subsidiaries. To take a particularly extreme example, at the time of the Deepwater Horizon oil spill in 2010, British Petroleum (BP plc) had 75 immediate subsidiaries, which in turn collectively had 90 second-generation subsidiaries, which in turn collectively had 54 third-generation subsidiaries, which in turn had 25 fourth-generation subsidiaries, which in turn had two fifth-generation subsidiaries (E. Talley, pers. comm.). Most of BP's subsidiaries were identified with either a unique subindustry (e.g., chemicals), a unique geographic region (e.g., Africa), or both.

Multiple reasons contribute to such segmented business structures. For many oil and gas companies, national regulatory requirements (both for energy and tax) often mandate that their in-country operations be separately incorporated. In the event of radical forms of regulation (such as nationalization), the subsidiary structure likely reduces the uncertainty associated with expropriation. In addition, the multi-subsidiary structure confines other forms of liability risk, usually containing it within the operating subsidiary (so long as the corporate structure adheres to the formalities of its own separate structure), making it possible for a multinational to operate at scale without similarly magnifying its exposure. Yet because such corporate structures mandate formal governance "separateness" between parent and subsidiary firms, segmented business structures such as BP's also create potential organizational barriers within companies, particularly those horizontally connected to one another in the corporate hierarchy. Although informational separation is not a requirement for maintaining limited liability, when it is convenient or desirable for the organization to cultivate such separation, the subsidiary structure is amenable to it.

Deliberate Opacity

Democratic societies place a high value on transparency for well-known reasons: open and transparent procedures discourage corruption, facilitate improvement, and promote a well-informed citizenry. As Justice Louis D. Brandeis famously noted: "Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman." Nonetheless, there is value in opacity for many decision-making processes. Otto von Bismarck is reputed to have suggested that "[l]aws are like sausages. It's better not to see them being made."

More recently, MacCoun (2006) demonstrated that transparency itself is not unequivocally positive in its effects (see also Lewandowsky and Bishop 2016). To a certain degree, transparency is the opposite of deliberate ignorance, as it focuses on making information easily available to those who wish to use it. In this section, we elaborate on the connection between deliberate ignorance and transparency, and explore why and how the public might decide to shield itself from information regarding the activity of the state. It should be noted, however, that while this discussion focuses on the transparency of the state, many of the arguments apply to private entities as well.

As a descriptive matter, much of the operation of governments and organizations is not transparent. The government routinely classifies documents and restricts public access to information. A collective decision of a community to shield itself from certain information could reflect the limited value this information has for group members and the potential adverse effects of publicizing it. Citizens might rationally choose not to be aware of the intricate details associated with the national security of their country, simply because this information is inconsequential from their perspective and highly valuable to hostile powers.

A perhaps less obvious reason for limiting governmental transparency arises from situations in which the desirable decision requires conducting taboo trade-offs. For example, a community might want governmental decision makers to conduct cost-benefit analysis with respect to investment in safety, a process that would require decision makers to put an explicit price tag on human life. However, such cost-benefit analysis might run up against the taboo that human life should not be evaluated in monetary terms. To sustain the taboo, it might be valuable to shield the public from the cost-benefit analysis. Thus, limiting transparency can allow communities to have their incommensurable cake and eat it too. Against this backdrop, Fiske and Tetlock (1997) argue that societies often develop norms to avoid openly acknowledging necessary choices that inevitably trade off two or more deeply held values (for an elaborate discussion, see Calabresi and Bobbitt 1978).

Governments might also choose to limit transparency to aid complex collective decisions that require balancing the interests of different constituencies (Brunsson 1989). By acting in a stealthy manner, the government might avoid some of the political friction that would have been caused had its actions been conducted transparently. For example, if mere knowledge of a government activity causes distress to a subset of the population (e.g., orthodox Jews who strongly believe that the state of Israel should not engage in public works such as road construction on the Jewish day of rest, Shabbat), then adopting a "don't ask, don't tell" type of policy vis-à-vis the distressed community can enable the government to act in a way that reflects the preferences of the majority (i.e., pave roads and lay rail lines over the weekends), while avoiding inflicting psychic harm on some of the population.

Another aspect of the institutional design of deliberate opacity policies relates to the temporal availability of information. Democratic societies often make a conscious decision to shield themselves from current information while committing to reveal this information in the future through a process of declassification. For instance, it has been suggested that members of the European Union consciously chose to shield information relating to the trade-offs involved in the creation of the Union, since revealing them might have undermined the process at the time (for additional examples, see Zamir and Yair, this volume). Declassification could reflect a careful balance between transparency and deliberate opacity. If deliberations take place in private yet are recorded, then internal monitors (e.g., legal advisors, the state comptroller's office) can still provide some oversight. Furthermore, the prospect of future publicity might serve as a check on governmental power and ensure that at least some of the benefits of transparency are realized over the long term.

Finally, limiting transparency might also reflect a conscious choice to enable a decision-making process that is insulated from external pressures. Recent experience shows that transparency comes at a considerable cost: when private emails between climate scientists were hacked and distributed on the Internet a decade ago, political operatives were able to construct a narrative about alleged corruption and misconduct among climate scientists that arguably delayed policy action and reduced public commitment to environmental policies. In fact, the scientists involved were exonerated in nine independent investigations in the United States and the United Kingdom (Lewandowsky 2014). This is not an isolated incident; it illustrates the unavoidable implication of unlimited transparency when it is not counterbalanced by privacy considerations. Freedom-of-information requests for scientists' emails have become a common weapon in the arsenal of political operatives who seek to undermine scientific findings they oppose, which in turn has arguably led to self-censorship among scientists in their exchanges with colleagues, thereby compromising the rigor of scientific debate.

In conclusion, our point is not to minimize the risks and costs of limiting transparency, but simply to suggest that transparency should not automatically block the option of deliberate opacity. The relative costs and benefits of deliberate opacity and transparency need to be evaluated on a case-by-case basis.

The Role of Technology

Here we turn to a topic that lies at the frontier of research on deliberate ignorance and examine the way in which emerging technologies such as artificial intelligence and machine learning affect the design of people's informational environment. As the discussion will show, computerized decision making interacts with many of the institutional questions reviewed throughout this chapter.

To a large degree, our ability to control the decision-making environment in which computers operate, including the possibility of instructing computers to disregard information to which they were exposed, suggests that computers could be an effective tool through which information can be deliberately ignored. We begin with a brief introduction to key terms in the area of computer science, before exploring whether machines can be blinded. Thereafter, we compare human and machine decision processes and highlight their comparative advantages.

Principles of Machine Learning

Algorithms are increasingly being used in decision-making tasks that affect human lives. These tasks vary from predicting the risk that a defendant will recidivate to estimating the creditworthiness of a person seeking a loan. By algorithms we mean computer programs that outline precise and detailed step-by-step instructions for how information given as input to the programs (e.g., features of a defendant or a loan applicant) should be processed to obtain decision outcomes (e.g., recidivism or credit-risk predictions). Intuitively, algorithms can be thought of as detailed recipes for turning information inputs (ingredients) into decision outcomes (dishes).

Traditionally, algorithms were designed (programmed or coded) by humans with considerable planning, effort, and time. When outlining an algorithm (i.e., the precise step-by-step plan for processing input information), human programmers would carefully consider what information would be accepted as input, and in what form or representation. In contrast to traditional human-driven algorithm design, recent years have seen a rise in machine-driven algorithm design, more commonly referred to as machine learning. The key idea behind machine learning is to learn the decision-making algorithm automatically from a set of example decisions. For example, to learn the algorithm that decides who is creditworthy, a bank could simply compile a data set of past credit assessments made by its employees and derive the algorithm from this sample. The data used to learn algorithms are referred to as training data, and the increasing availability of large training data sets in decision-making scenarios ranging from banking to predictive policing has catalyzed the adoption of machine learning in the design of algorithmic decision-making systems.
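To make this contrast concrete, the following minimal sketch (in Python, using the widely available scikit-learn library) learns a toy credit decision rule from a handful of example decisions rather than from hand-coded rules. The feature names and data are invented for illustration; a real credit-scoring system would involve vastly more data, features, and validation.

```python
# Machine-driven algorithm design in miniature: the decision rule is
# learned from example decisions instead of being hand-coded.
# All features and data below are hypothetical illustrations.
from sklearn.linear_model import LogisticRegression

# Training data: past credit assessments made by bank employees.
# Each row: [annual income (thousands), years employed, missed payments]
X_train = [
    [55, 10, 0],
    [23,  1, 4],
    [80, 15, 0],
    [30,  2, 3],
    [47,  7, 1],
    [19,  0, 5],
]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = credit granted, 0 = credit denied

# "Learning" replaces explicit programming: the model infers how the
# recorded decisions relate to the input features.
model = LogisticRegression().fit(X_train, y_train)

# The learned algorithm can now assess a new applicant.
applicant = [[40, 5, 1]]
print(model.predict(applicant))        # predicted decision
print(model.predict_proba(applicant))  # predicted probabilities
```

Note that nothing in this pipeline asks the programmer to articulate a decision rule; the rule is whatever pattern the training data supports, which is precisely why bias in the training data carries over into the learned algorithm.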

There are many significant pros and cons to machine-driven algorithm design compared to human-driven algorithm design. On one hand, machine learning saves considerable human effort in designing algorithms; utilizing advances in computer hardware and learning techniques (e.g., deep learning), machine learning can potentially identify complex yet useful patterns in large-scale training data that are beyond human cognitive abilities. On the other hand, the lack of human supervision over the algorithm design raises concerns about the ability to explain the algorithm's outcomes and points to the vulnerability of the algorithm design to any form of bias in the training data. In particular, many recent studies have raised concerns about the fairness of learning-based algorithm designs, and fair machine learning has emerged as a topic of considerable interest within the machine-learning research community. The approaches to fair machine learning explored to date center around deliberately ignoring certain types of information at different stages of the learning and decision-making process. In the next section, we describe the different approaches and explore whether they are normatively equivalent.

Blinding Machines

One can distinguish three different stages in the design of learning-based algorithmic decision-making systems:

• Stage 1: Prepare the training data
• Stage 2: Learn from the training data
• Stage 3: Deploy the learned algorithm in practice

Let us rst focus on Stage 2 to understand how discrimination might arise when learning algorithms from data. The traditional goal of learning models is to design an algorithm that achieves maximum accuracy in its predictions (e.g., risk assessments) over the entire population. Unfortunately, this goal does not guarantee that different socially salient groups of users in the population (e.g., based on race or gender) would achieve similar prediction accuracy. Suppose you have a population where 90% of people belong to race 1 and 10% belong to race 2. Suppose further that you have two predictors, p1 and p2, where p1 achieves 100% accuracy for people of race 1 and 0% accuracy for people of race 2, and p2 achieves 85% accuracy for people of both races. When learning, traditional learning models would prioritize learning p1 over p2 because p1 has a higher overall accuracy (90% across both races) compared to p2 (85% across both races). However, human designers of algorithms might justify selecting p2 over p1 due to considerations of nondiscrimination; that is, p2 has considerably lower disparity (inequality) in accuracy across both races in the population compared to p1. In response to concerns about discrimination, a number of recent works in machine learning have explored new methods to train nondiscriminatory algorithms. At a high level, the methods can be categorized into three main classes, corresponding to the three different stages of the design of algorithmic decision systems at which they are applied. The class of methods that are applied at the learning stage (Stage 2) are referred to as in-processing methods. These methods modify the traditional goal of obtaining an algorithm that maximizes prediction accuracy across the entire population. Specically, the goal of nondiscriminatory learning is to nd an algorithm that maximizes prediction accuracy across the entire population, but subject to the constraint (i.e., bounded by the requirement) that the inequality

In response to concerns about discrimination, a number of recent works in machine learning have explored new methods to train nondiscriminatory algorithms. At a high level, these methods can be categorized into three main classes, corresponding to the three stages of the design of algorithmic decision systems at which they are applied.

The class of methods applied at the learning stage (Stage 2) are referred to as in-processing methods. These methods modify the traditional goal of obtaining an algorithm that maximizes prediction accuracy across the entire population. Specifically, the goal of nondiscriminatory learning is to find an algorithm that maximizes prediction accuracy across the entire population, subject to the constraint (i.e., bounded by the requirement) that the inequality in accuracy across salient social groups in the population is kept low (ideally zero). To achieve these goals, in-processing methods need to use information about people's social group membership during the learning stage (Stage 2).

The class of methods applied at the deployment stage (Stage 3) are referred to as post-processing methods. These methods do not alter the traditional goal of designing an algorithm that maximizes prediction accuracy across the entire population. Instead, they track the decision outcomes of the potentially discriminatory algorithm in practice (i.e., measure inequality in its prediction accuracy across different salient social groups) and alter the predicted outcomes in a manner that lowers (ideally equalizes) the inequality in prediction accuracy across social groups. To achieve this goal, post-processing methods need information about social group membership at the deployment stage (Stage 3), but not during the learning stage (Stage 2).

The final class of methods, applied at the training data preparation stage (Stage 1), are referred to as preprocessing methods. These methods alter neither the traditional goal of creating an algorithm that maximizes prediction accuracy across the entire population nor the predicted outcomes of the algorithm in deployment. Instead, they transform the training data in a manner that lowers the chance of generating algorithms that yield disparate prediction accuracies across different social groups. Examples of such transformations include sampling training data so that people belonging to different races are equally represented, or learning new representations of the training data (i.e., via dimensionality reduction or expansion techniques) such that people belonging to different races are desegregated in the new feature space. To carry out such data transformations, preprocessing methods need information about people's social group membership only during the data preparation stage (Stage 1) and can ignore this information in the learning and deployment stages (Stages 2 and 3).
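Where in the pipeline each class of methods intervenes can be summarized in a schematic, self-contained sketch. In the toy code below, the "models" are simple score thresholds, and the data, groups, and numbers are all invented; real systems rely on constrained optimization and far richer models, so this sketch is meant only to locate the three interventions at Stages 1, 2, and 3.

```python
# Schematic contrast of the three classes of fairness methods.
# Data, groups, and thresholds are invented for illustration.

data = [  # each person: a model score, a group label, and a true outcome
    {"score": 0.9, "group": "A", "label": 1},
    {"score": 0.4, "group": "A", "label": 0},
    {"score": 0.8, "group": "A", "label": 1},
    {"score": 0.7, "group": "B", "label": 1},
    {"score": 0.3, "group": "B", "label": 0},
    {"score": 0.6, "group": "B", "label": 0},
]

def accuracy(rows, threshold):
    """Share of rows where 'score >= threshold' matches the true label."""
    return sum((r["score"] >= threshold) == bool(r["label"]) for r in rows) / len(rows)

def accuracy_gap(rows, threshold):
    """Absolute difference in accuracy between the two groups."""
    a = accuracy([r for r in rows if r["group"] == "A"], threshold)
    b = accuracy([r for r in rows if r["group"] == "B"], threshold)
    return abs(a - b)

# Stage 1 (preprocessing): deliberately ignore group membership in the
# training data itself; subsequent learning then proceeds as usual.
blinded_data = [{k: v for k, v in r.items() if k != "group"} for r in data]

# Stage 2 (in-processing): among candidate decision thresholds, pick the
# most accurate one whose accuracy gap stays within a fairness bound.
candidates = [0.35, 0.5, 0.65, 0.75]
feasible = [t for t in candidates if accuracy_gap(data, t) <= 0.1]
fair_threshold = max(feasible, key=lambda t: accuracy(data, t))

# Stage 3 (post-processing): keep the learned model untouched, but adjust
# the decision threshold per group to narrow the measured accuracy gap.
group_thresholds = {"A": 0.75, "B": 0.55}
decisions = [r["score"] >= group_thresholds[r["group"]] for r in data]

print(fair_threshold, decisions)
```

Note how group membership is consulted at different points: the preprocessing step strips it before learning, the in-processing step uses it inside the learning objective, and the post-processing step uses it only at decision time.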

To date, the machine-learning community has compared the different approaches based on the performance trade-offs they offer between overall and group-level accuracy, as well as on practical considerations, such as the privacy concerns raised by needing information about social group membership at the deployment stage (for post-processing) versus the learning stage (for in-processing) versus the data preparation stage (for preprocessing). However, it is an open question whether the three classes of approaches are normatively equivalent. While we cannot present a definitive answer to this question, there are several indications that, as a positive matter, people might view preprocessing methods as more palatable, even if their end results are similar to those of the other methods. Specifically, Ritov and Zamir (2014) examined people's assessment of various processes for recruiting students and employees with a view to achieving affirmative action or other goals regarding the composition of the recruited group. They found that processes in which it was theoretically possible to identify people who were tentatively selected but ultimately excluded from the group of selected students/employees were judged considerably less acceptable than processes in which no such theoretical possibility existed. The authors attributed these results to the psychological phenomena of identifiability (people's different responses to identified, or even identifiable, people compared to unidentified ones) and loss aversion (the phenomenon that losses loom much larger than unattained gains). Although the study did not address artificial intelligence or machine learning, the same factors may well drive people to prefer the use of preprocessing measures in machine learning.

Considerations of public acceptance and trust might also favor preprocessing methods. While in-processing and post-processing measures are inherently unclear and opaque to most people, some of the available preprocessing measures are relatively simple and straightforward. Removing certain types of data from the machine's information set can be easily grasped, even by those who are not trained in computer science. As a result, it could very well be the case that public acceptance of preprocessing measures will be relatively higher. One should note, however, that once more complex preprocessing methods are used, it is unclear whether this hypothesis holds.

Machines versus People

Having discussed the nature of machine-learning processes and how they can be interfered with to achieve desirable goals in contexts such as the nondiscriminatory selection of people, let us now examine the pros and cons of using human versus machine-learning processes to make decisions. At the outset, we note that the dichotomy between machine and human decision making is to some extent false, for two reasons. First, humans are inevitably involved in practically all stages of machine-learning processes (and make the normative choices about whether and how to use the products of those processes). Second, there is a continuum between primarily human and primarily machine-based decision processes. Thus, for example, whenever people use the Internet to gather information that feeds into their decisions, the information they get is determined to some extent by the machine-learning process used by the search engine. For the sake of the discussion, we nevertheless consider prototypical human versus machine-based decision processes.

Reviewing all of the considerations relevant to the comparison between human and machine-based decision processes lies well beyond the scope of this chapter. Our aim here is to map some of the considerations that should be addressed when deciding whether to entrust humans or machines with various types of decisions, with a focus on issues related to deliberate ignorance.

A first consideration stems from the discretion afforded to the decision maker. Decision processes may use precise algorithms, employ a well-defined set of variables, give predetermined weight to each variable, and so on. Decision processes may also set more or less abstract goals or values and leave the decision maker with broader or narrower discretion as to what weight to give to which factors in each particular case, paying heed to all the particular circumstances.

294

D. Teichman et al.

In the realm of human decision processes, both possibilities (and any intermediate or hybrid forms) are commonly used in judicial and other contexts (in the legal context, the choice is often framed as a choice between rules and standards). Machine-based decision processes are intuitively associated with precise, algorithmic processes. But is this characteristic necessary? Is it possible now, or is it expected to be possible in the future, to create fuzzier, probabilistic machine-based decision processes that would be more flexible, indeterminate, and standard-like?

A second dimension of comparison relates to explainability, transparency, and accountability. Machine-based processes are in principle more amenable to ex post examination. Such examination (or reverse engineering) could reveal which factors were actually addressed, what weight was given to each of them, and so forth. Human decision making—primarily when made by a single decision maker, but often also when the process is collective—is frequently less tractable and explainable. At times we are interested in maximizing the transparency and accountability of decision making; however, as noted in our discussion of intentional opacity, society might find it preferable to make decisions in less explicit and transparent ways. While it appears that both human and machine-based decisions may be more or less transparent and explainable, there may also be characteristic differences between the two (and any subcategories thereof) that one might want to consider. One advantage machine-based processes might have is that they may facilitate "acoustic separation" (see Dan-Cohen 1984) between the public and decision makers, and perhaps even among different parts of the decision-making process.

A concrete example that might illuminate this abstract discussion can be found in the context of antidiscrimination and affirmative action. When zooming in on the issues of antidiscrimination and affirmative action—as major fairness concerns in selection processes—the potential use of machine-based processes raises several noteworthy issues. First, the underlying motivation of discrimination might be relevant to the analysis. There are those who argue that only prejudicial, animus-based discrimination is unacceptable, whereas statistical discrimination is not. Accordingly, as long as the decision maker (e.g., an employer, an educational institution, or a landlord) strives for the most accurate decision and uses attributes such as gender and ethnicity merely as indicators of legitimate variables (assuming that the alleged correlation or association does exist, and that making decisions without them would be much costlier), there is nothing wrong with such statistical discrimination. If this is so, it is easy to design machine-based selection processes that avoid animus-based discrimination. However, if one believes that statistical discrimination is unacceptable, then machine-based processes that rely on correlations and associations may not only fail to avoid discrimination, but may actually perpetuate it, because they reflect associations established in the past. Careful measures (of the type discussed above) may then be necessary to avoid discriminatory results.

Second, the introduction of computerized decision making could influence the degree to which the analysis focuses on the issue of disparate impact. Some criteria may appear to be perfectly benign, yet using them in selection processes may result in extreme underrepresentation of some groups in the selected body of students, employees, and so forth. While the issue is highly controversial, some believe that as long as the motivations and processes used by decision makers are fair, the end results are fair as well. In contrast, others believe that criteria and processes that lead to socially unacceptable results should be avoided. Here again, the use of machine-based processes may help to overcome some difficulties, but may also exacerbate others.

In conclusion, machine-based decision making offers opportunities to operationalize people's preference for ignorance, but raises serious concerns at the same time. Many interesting questions still loom in this context. At the institutional level, who should make the above choices, and through which processes? What role should public discourse, academic discourse, and legislative, judicial, and administrative deliberation play? On the practical regulatory level, assuming that regulation of machine-based decision processes is legitimate and may be needed, at what point in time are regulations necessary? Should computer experts be free to design whichever tools they wish, with only their use being regulated, or should the development of new tools be subject to regulation ex ante? Finally, in the long run, to what extent should decisions be made according to the current state of development of machine-based decision processes? While the evolution of human faculties is very slow, technological advancement proceeds at a very fast pace. It may therefore be advisable to make any choice contingent on present or future developments of the technologies.

Conclusion

In reviewing a variety of institutional implications of deliberate ignorance, we have mapped several mechanisms that could be used to design the information environment consistent with a presumed informational goal: preventing the creation of information, destroying information, quarantining information, quarantining users of information, and limiting the use of acquired information. We have also explored several instantiations of such mechanisms throughout society, from private contracts and organizational structures to the state itself.

Choosing among the different institutional responses to deliberate ignorance is a herculean task, and constitutes to a large extent a critical research question for future study. For almost any institutional design task, there is more than one option. As a result, selecting the ideal institutional mechanism requires incorporating all relevant factors: consequences, fairness, political constraints, legitimacy, as well as social norms and practices, to name but a few. Throughout this chapter, we have endeavored to delineate the most important dimensions that should be assessed.

A primary concern involves effectiveness. How well does the institution achieve its intended goal? Threatening a decision maker with a sanction for using protected information is less safe than making sure that the information is never generated. Destroying the information is a riskier option, but then it suffices to check whether those who have ever seen the information can be trusted not to report on it. Effectiveness, however, is not confined to the immediate outcome (the information is withheld). An institution is also more effective if it can work reliably or be relatively easily adjusted when a task is incorrectly specified or the environment changes in unpredicted ways. An institution can also be regarded as more effective if it enables institutional learning, be it for the case at hand or analogous cases that present themselves in the future. Another dimension of effectiveness results from the relationship between the institution (and those running it, if it is purposefully steered by individuals) and its addressees. Whether it is easy to understand and observe the channels through which the institution achieves its intended goal may be either good or bad for this relationship. Such institutional transparency can create trust, but it can also make normative conflicts patent, which would make it difficult for those whose lives are being governed to accept the intervention.

A secondary concern involves externalities. If they are not part of a hidden agenda, one could refer to this concern as unintended negative consequences: on other tasks that this same decision maker expects to face, on other (private or public) parties, on the social fabric, or on the evolution of the community. The spillover may also originate from the fact that the solution, in the guise of an institutional transplant, is picked up in different areas of life, or by different communities. What is good for one context can be detrimental for another. Even if the institution does not have any adverse effects on third parties, it can amplify heterogeneity among its addressees. When this heterogeneity is pecuniary, it is usually referred to as distributional effects. Given the wide body of distributional theories along with the different legal responses (i.e., regulation vs. taxation), we limit ourselves to highlighting this issue as one in need of future study.

Numerous specific paths for future research emerge from the issues discussed in this chapter. For instance, our discussion of the firm focused on the relations within the firm, yet an entirely separate set of questions relates to the relationship between the firm and its shareholders. On this front, shareholders might wish to shield themselves from financial information that might cause them to make biased decisions that run against their long-term interests.

In addition, shareholders might wish to shield themselves from information alluding to the firm's actual activities to avoid tension between those activities and their views on issues such as protecting the environment, distributive justice, and so forth. (Of course, it is debatable whether shareholders should be allowed to remain ignorant in such cases.)

Similarly, our analysis of technology focused on a relatively small set of questions associated with machine learning. There are numerous other technological frontiers, such as the emerging use of blockchain technology, which could be used to design the informational environment. There are also many open questions concerning the role of machine learning in fostering or mitigating deliberate ignorance. When it comes to fairness judgments (where deliberate ignorance of attributes such as sex or race is often desirable), research in the machine-learning community has compared preprocessing, in-processing, and post-processing approaches for accuracy and group disparity. These three classes of strategies are roughly equivalent, but research has not yet examined whether they differ in terms of trust or acceptance. Post-processing approaches might, for example, be perceived as unfair because a human at the end of the process changes the machine-learning output. People may prefer the preprocessing approaches since the variable that is protected by antidiscrimination laws is not used, or its use is penalized. One could also argue that in-processing approaches might be favored since reducing disparity is an explicit goal (next to accuracy) of this approach. Which approach is ultimately favored might depend on group membership as well as on beliefs regarding the perceived truth of a stereotype. Although the accuracy rates of the three approaches are similar, decisions on the individual level are not necessarily so. A particular woman might be hired when a preprocessing approach is used, but not when an in-processing approach is employed. That people's preference for equality versus equity, in distributive fairness, depends on whether they are better off is a well-known phenomenon (Messick and Sentis 1983); when it comes to machine learning, which is less easily understood, beliefs about one's chances under the different approaches come into play. Future studies should therefore not only compare the acceptance of the different approaches, but also examine the role of explainability. Closer cooperation between social scientists and machine learners may be useful to answer these questions.
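The possibility of such individual-level divergence can be made concrete with a deliberately simplified sketch. The features, weights, and thresholds below are invented, and the post-processing intervention is reduced to group-specific cutoffs; still, it shows how interventions that look comparable in the aggregate can flip a given applicant's outcome.

# Toy scoring model for a hiring decision; all numbers are illustrative.
def score(applicant, use_gap_feature=True):
    # "career_gap" stands in for a feature correlated with group membership.
    penalty = 0.8 * applicant["career_gap"] if use_gap_feature else 0.0
    return 0.6 * applicant["experience"] - penalty

applicant = {"experience": 6, "career_gap": 2, "group": "B"}

# Preprocessing-style intervention: the correlated feature is removed
# before scoring; one global threshold applies to everyone.
hired_pre = score(applicant, use_gap_feature=False) >= 3.0  # 3.6 >= 3.0 -> True

# Post-processing-style intervention: the full model is kept, but
# group-specific thresholds are set afterward to reduce group disparity.
thresholds = {"A": 3.0, "B": 2.5}
hired_post = score(applicant) >= thresholds[applicant["group"]]  # 2.0 < 2.5 -> False

print(hired_pre, hired_post)  # True False: same applicant, different outcomes

Which of these outcomes the applicant herself would judge fairer plausibly depends on which intervention benefits her, in line with the point made by Messick and Sentis (1983) for distributive fairness generally.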

Finally, it should be acknowledged that the discussion in this chapter does not purport to be a comprehensive overview of the institutional responses to deliberate ignorance. By its nature, this discussion was defined by the expertise of the participants in the research group. Thus, additional perspectives need to be incorporated into the analysis. For instance, the education system might be able to help train students to deliberately ignore information. To this end, familiarizing students with the tools that facilitate deliberate ignorance could become part of media literacy programs. By teaching young people when to shield themselves from too much trivial or untrue information, policy makers could foster a more informed public discourse. Hopefully, this void will be filled over time.

16 Deliberate Ignorance and the Law

Eyal Zamir and Roi Yair

Abstract

This chapter offers a bird's-eye view of existing legal doctrines and institutions that overcome or foster deliberate ignorance, critically assesses these doctrines and institutions, and considers extensions thereof. It begins by focusing on three legal means of discouraging deliberate ignorance: subjecting people who could have acquired the relevant information to the same treatment as those who acted knowingly, imposing positive duties to acquire information, and rendering information more conspicuous, thereby making it more difficult to ignore. It also touches upon the issue of collective ignorance. Thereafter it discusses instances in which the law encourages deliberate ignorance to facilitate better decision making and promote other values. It starts from the basic issue of designing the system of government and the constitutional protection of human rights using veils of ignorance, and then moves on to more specific legal topics: inadmissibility and other evidence rules, anonymity and omitted details of candidates to overcome the biases and prejudices of decision makers, expungement of criminal records, and the right to be forgotten.

Introduction

In this chapter, we present a bird's-eye view of existing legal doctrines and institutions that foster or overcome deliberate ignorance, assess these doctrines and institutions, and consider extensions thereof. As part of a series of studies on deliberate ignorance, we do not discuss the conceptual, philosophical, psychological, or economic aspects of deliberate ignorance, as these are considered by others in this volume. In particular, we do not delve into the question of whether knowing something is an all-or-nothing matter or (more realistically) a matter of degree of belief (Buchak 2014). For the purpose of our discussion, we liberally use Hertwig and Engel's working definition of deliberate ignorance (this volume, 2016): "the conscious individual or collective choice not to seek or use information." Without delving into the definitional question, a preliminary comment about the boundary between so-called rational and irrational ignorance is in order.

After offering the above definition, Hertwig and Engel note that their focus is on "situations where the marginal acquisition costs are negligible and the potential benefits potentially large." But what about instances in which the gap between the costs and benefits is narrow or nil? While Hertwig and Engel do not squarely exclude such cases (sometimes dubbed rational ignorance or rational inattention) from their discussion, they implicitly distinguish between various motivations for not seeking or using information. Specifically, one may or may not wish to include instances in which the individual or societal costs of attaining and processing information exceed their benefits, as well as instances in which disregarding certain types of information can improve decision making. The first group includes the rational choice of citizens to remain uninformed about governmental and political issues that do not significantly affect their lives. It is also closely connected to the debates about disclosure duties that pervade the law (e.g., in the spheres of consumer transactions and medical malpractice law). Inasmuch as the costs of grasping, understanding, and using the disclosed information by the disclosees are prohibitive, these duties are arguably futile (Ben-Shahar and Schneider 2014; Zamir and Teichman 2018:171–177, 314–318). The issues of a citizen's choice to remain uninformed about political issues, as well as the relationship between deliberate ignorance and mandated disclosure, are both worth exploring, but due to space limitations we will not discuss them in any detail. In terms of the second group, it may be rational (in the sense of cost-benefit analysis) to disregard certain types of information when this will likely increase the accuracy of fact finding and decision making. This is, for example, the common rationale for some (but not all) inadmissibility rules in the law of evidence, as will be discussed below.

This chapter consists of two sections. The first discusses cases in which the law strives to discourage deliberate ignorance, and the second turns to instances in which the law strives to ensure or encourage such ignorance.

Overcoming Deliberate Ignorance

Both retributive justice and deterrence presumably require distinguishing between behavior that is done with full knowledge of the relevant facts and its expected consequences, and the same behavior that occurs without such awareness. From a retributivist perspective, knowingly harming other people (or not coming to their aid) is more culpable than harming through negligence or by accident. From a deterrence standpoint, subjecting negligent or accidental (as opposed to knowing) conduct to criminal liability creates an incentive to refrain from lawful activities to avoid the risk of erroneous punishment, and therefore warrants a cautious approach (Posner 2014:279). Laws based on these distinctions, however, create an incentive for deliberate ignorance as a way to avoid fault-based liability.

The primary means by which the law can eliminate or weaken this undesirable incentive is by subjecting people who could have acquired the relevant information to the same treatment as those who acted (or failed to act) knowingly. A second technique is to impose positive duties to acquire information, while a third is to render information more conspicuous and salient, thereby making it more difficult to ignore. We discuss each of these techniques in turn and then briefly address the issue of collective ignorance.

Willful Blindness, Constructive Knowledge, and Strict Liability

Equalizing the treatment of deliberately ignorant and fully cognizant actors may be attained in two primary ways (Turner 2009:360–362):

1. Treat deliberately ignorant actors as though they were aware of the relevant information.
2. Lower the threshold of liability by dispensing with the requirement of actual knowledge in favor of an objective standard of conduct in light of the attainable information, such as negligence, or even by imposing strict liability.

The most direct technique the law uses to overcome deliberate ignorance is to treat it as a substitute for actual knowledge. Thus, the common-law doctrine of willful blindness considers the requirement of knowledge to be satisfied when the person in question suspects the existence of a fact but deliberately avoids looking into it. For example, many states criminalize the knowing exposure of sexual partners to the risk of HIV transmission. Absent the doctrine of willful blindness, individuals who suspect that they are at risk of carrying the virus would have an incentive not to get tested, to avoid such liability (Ruby 1999:330–331). Sometimes, the law explicitly treats even reckless disregard of information as tantamount to deliberate ignorance.1

While the willful blindness doctrine is primarily known for its application in criminal law2—as a means of satisfying the requirement of a mens rea of knowledge (Robbins 1990)—it may also be used in civil proceedings where an element of actual (as opposed to constructive) knowledge is required. For instance, in some legal systems and under certain conditions, the doctrine of market overt protects the bona fide purchaser of stolen goods. When this protection is granted only to a buyer who is unaware that the goods were stolen, willful blindness amounts to actual awareness (Zamir 1990:109–112). From the standpoint of deterrence, the justification for the doctrine of willful blindness is straightforward: it deters people from circumventing criminal (or civil) liability through deliberate ignorance (Kozlov-Davis 2001:483).

1 See, e.g., U.S. False Claims Act, 31 U.S.C. § 3729(b)(1)(A).
2 See, e.g., § 2.02(7) of the U.S. Model Penal Code.


Arguably, the doctrine may also enhance deterrence by serving as a means to overcome the evidentiary challenge of proving that a person actually knew something (Charlow 1992:1359–1360). However, for the remainder of our discussion, we assume that the defendant was deliberately ignorant, rather than actually knew the relevant facts.

From a retributivist perspective, the picture is more nuanced. The point of departure is that someone who acts in ignorance is less blameworthy than one who acts knowingly—but when the ignorance is deliberate, the acts are equally culpable. This equal culpability thesis has been subject to some criticism (Charlow 1992; Husak and Callender 1994; Sarch 2014:1052–1071). Specifically, it has been argued that different kinds of deliberate ignorance may involve different levels of culpability. For example, someone who buys stolen goods from his/her friend at a suspiciously low price is more culpable if the reason for not asking any questions about the provenance of the goods was to avoid criminal liability, rather than not to embarrass the friend by questioning the legality and morality of the latter's conduct (Charlow 1992:1413). People who contrive their ignorance so as to allow themselves to act in an unlawful manner are possibly even more culpable than mere knowing actors and are similar to purposeful offenders (Luban 1999:968–969; see also Aquinas 1265–1274, q. 76 art. 4). Another question that may be of importance in determining culpability is: What would the person have done had s/he known the incriminating facts? David Luban has argued that while deliberately ignorant people who would have acted in the same way had they known the incriminating fact are as culpable as knowing offenders, such equivalence does not hold for people who would have refrained from the same action had they been informed (Luban 1999:973–976). One could, however, argue that such counterfactual thinking may be of relevance to the evaluation of the moral status of the agent, but not the culpability of the action under the actual circumstances. Furthermore, it is possible that people who would have refrained from acting, had they known, are more blameworthy, since in a sense they could have easily prevented the crime by confirming their suspicion.

Another doctrine that appears to be relevant is that of collective knowledge in corporate criminal law. This doctrine allows courts to find that a corporation acted knowingly if the aggregate of its employees' knowledge was sufficient for the purpose, even if no single individual within the corporation possessed that knowledge. Thus, for example, if one employee makes a report, and another knows that some of the reported facts are false but is unaware that a report has been made, the corporation may be convicted of knowingly making a false report, even though no single employee had such knowledge (Colvin 1995:18–19). This doctrine is especially useful in preventing corporations from avoiding criminal liability by cultivating a regime whereby no single employee is aware of the incriminating facts (for further discussion, see Teichman et al., this volume). To illustrate, consider the above scenario, but now add the fact that the manager of both employees suspects that the first employee may know that some of the details in the report are false, so he makes certain that the latter is unaware that the report has been submitted.


While the doctrine does not criminalize the conduct of the manager (although he may sometimes bear personal criminal liability, possibly under the willful blindness doctrine discussed above), it does deter the use of deliberate ignorance within the corporation. In fact, two commentators have suggested that the doctrine only applies when there is an element of deliberate ignorance in play (Hagemann and Grinstein 1997).

Unlike criminal law, private law (including contract, tort, property, and unjust enrichment law) is generally more interested in facilitating fair and efficient behavior than in moral culpability. Consequently, it may well treat those who should have known something similarly to those who actually knew it, even though the latter are often more blameworthy. The determination as to whether or not someone should have known a given fact is usually made by asking whether refraining from obtaining the information was reasonable under the circumstances. A nice example may be found in the U.S. law of contractual duress, as encapsulated in Section 175 of the Restatement (Second) of Contracts. When a person enters a contract as a result of an improper threat that leaves him or her no reasonable alternative, s/he is entitled to annul the contract. Typically, the threat is made by the other contracting party, but occasionally by someone else. For example, a husband may threaten his wife that he will leave her unless she sells her jewels and gives him the proceeds, in which case he is not a party to the contract of sale between his wife and the jewelry buyer. In such cases, the victim (wife) can annul the contract, unless the other party (the jeweler) made the transaction "in good faith and without reason to know of the duress" (Section 175(2)). Thus, to remove the jeweler's temptation to go ahead with a profitable transaction under suspicious circumstances, the law treats a person who had reason to know of the duress as one who actually knew about it. Similar rules apply to transactions where one party had reason to know (a) that the other party's consent was induced by misrepresentation by a third party, (b) that the other party made the contract due to a fundamental mistake, (c) that the latter was unable to act in a reasonable manner in relation to the transaction due to mental illness, or (d) that s/he was unable to reasonably understand the meaning of the transaction due to intoxication (Regan 1999).3 While such rules cover both deliberate and negligent ignorance, they do meet the challenge posed by the former (and overcome the difficulties of proving that one's ignorance was willful).

3 See Restatement (Second) of Contracts §§ 164, 153(b), 15(1)(b), and 16, respectively.


Imposing strict liability similarly discourages deliberate ignorance. Under such a regime, a defendant is subject to (criminal or civil) liability for his/her harmful conduct regardless of whether s/he knew, or even ought to have known, the relevant facts. Consequently, such a regime creates an incentive to avoid the harmful conduct and to acquire the necessary information to that end (as long as the cost of acquiring the information and using it to avoid the harm is smaller than the expected sanction). In fact, Hamdani (2007) has argued that strict liability in criminal law is best explained by this rationale and is thus prevalent mostly in instances where there are strong market incentives to remain ignorant, such as the sale of liquor to minors. Similarly, the general principle that ignorantia juris non excusat (ignorance of the law is not an excuse) is, in a sense, a standard of strict liability with respect to knowledge of the law and serves a similar function. Indeed, absent this principle, there would be virtually no incentive for anyone to acquire information about the law (Hamdani 2007:448–449). Again, doing away with any requirement of awareness serves goals that are much broader than merely coping with deliberate ignorance, yet it serves the latter as well.

The rules concerning willful blindness, constructive knowledge, and strict liability may be used not only to deter harmful conduct, but also to encourage beneficial conduct, assuming one accepts the viability of this distinction (see Zamir 2015:177–199). However, given that the law is much more hesitant to impose duties to actively help others than to prohibit the active infliction of harm, the willingness to equate negligence or even willful blindness with actual knowledge is more limited in this context. For example, with respect to the crime of failure to prevent a felony, the Israeli Supreme Court has declined to apply the willful blindness doctrine as a substitute for actual knowledge of a plan to commit a crime.4

4 Har-She v. State of Israel (2001) 55(ii) P.D. 735, 756–68.

Specific Duties to Acquire Information

Rather than, or in addition to, imposing liability for the harm caused by the actions of the deliberately ignorant, the law occasionally focuses on the information-acquisition phase by imposing a specific duty to acquire information, and even dictating the procedure for doing so. For example, the Financial Action Task Force's recommendations on anti-money-laundering laws require financial institutions and some nonfinancial business professionals to complete a process of customer due diligence. This includes acquiring information about the customer's identity (and, in the case of corporations, its ownership structure) as well as various details about the transaction or the nature of the business in which the customer seeks to engage (FATF 2012/2019). If this information raises suspicions about the nature of the transaction, an obligation to report is triggered. This protocol prevents institutions from skirting the obligation to report by remaining ignorant about the nature of the transaction. Similarly, both international and national public law may require an environmental impact assessment to be conducted before approval is given to any enterprise that may adversely affect the environment. Meeting this procedural obligation—which involves gathering information about the impact of the proposed project, including explicit descriptions of potential alternatives, and identifying potential risks and uncertainties—ensures that a decision will, at the very least, be informed, if not necessarily wise or ethical (Craik 2008).


Although specific information-acquisition duties may impose significant costs (Gill and Taylor 2004), they may be desirable for several reasons. First, they may reframe the context of the process: when a banker asks a customer for additional information about a given transaction, it is not because s/he suspects misconduct, but because the law requires her/him to do so. This type of reframing makes the process more comfortable for both parties. Second, such duties, if accompanied by criminal or administrative sanctions, create additional deterrence against deliberate ignorance by capturing cases where it has not resulted in harm. This issue relates to a broader debate about liability that is risk based, rather than harm based. Third, setting a clear procedure makes it psychologically more difficult to avoid information. In particular, it reduces the vulnerability to motivated reasoning (see below), which in part is facilitated by distorting the information-acquisition process so as to promote a desired conclusion.

Provision of Information

Policy makers may reasonably conclude that, all things considered, there is insufficient basis for imposing legal sanctions against a given undesirable act or omission, yet wish to encourage desirable (and discourage undesirable) behavior noncoercively. When people engage in undesirable conduct (or fail to engage in desirable conduct) because they are (possibly deliberately) ignorant of the significance and ramifications of their behavior, providing them with the relevant information may induce them to change their practices. The state itself may provide such information or mandate others to do so. A case in point is the provision of salient information about the hazards of cigarettes (including graphic warnings that seek to evoke visceral negative reactions) to discourage smoking (Hammond 2011). In a different sphere, where the goal is to reduce employment discrimination, institutions may inform recruiters about the existing underrepresentation of women and minority groups in the workforce and educate decision makers about unconscious biases that affect hiring decisions, although the efficacy of these measures, in and of themselves, is rather limited (Kalev et al. 2006). Finally, it has been suggested that a reliable and uniform labeling system that grades products in terms of the impact of their production process on human welfare (e.g., employees' working conditions) and the environment, and whether they involve animal abuse, may encourage ethical consumption (Assaf 2016). This suggestion is supported by findings that while consumers deliberately avoid information about the ethical attributes of products, once it is forced upon them, they do use it (Ehrich and Irwin 2005).

While these and comparable instances of information provision do not target deliberate ignorance, the line between deliberate and nondeliberate ignorance is often unclear, both conceptually and empirically. Psychological mechanisms such as motivated reasoning (the acquisition and processing of information in a manner that leads to the sought-for conclusion; Kunda 1990) and confirmation bias (the tendency to seek and process information in ways that are partial to one's interests, beliefs, and expectations; Nickerson 1998) blur this line. People consciously and unconsciously look for confirmatory evidence and tend to ignore disproving evidence, deny its relevance, or give it less weight. Providing clear and conspicuous information about the medical hazards of smoking, the unequal representation of women and minorities in a given organization, or the conditions in which products are manufactured makes it more difficult for people to ignore the troubling information, and might induce them to change their behavior accordingly. It has also been shown that when people are faced with a self-benefiting choice that might potentially harm someone else, they prefer not to know whether such harm would indeed ensue, so that they can make their choice in good conscience (Dana et al. 2007). Compelling offenders to meet with their victims and realize the consequences of their actions makes it impossible for them to ignore the harm they have caused. Fostering such accountability and responsibility taking is a key element in restorative justice programs (Van Ness and Strong 2015:81–96).

Collective Ignorance

Thus far, we have discussed instances in which the law strives to overcome the deliberate ignorance of individuals, but ignorance (and prevalent misconceptions) sometimes characterizes entire societies, or segments thereof. In such cases, its ramifications may be greater than those of individual ignorance. Relevant examples include slavery in ancient Greece and the eighteenth-century United States; racial and gender discrimination in many societies; and even active participation in genocide, or turning a blind eye to it, in Nazi Germany. Such practices are typically accompanied by deliberate ignorance or misconceptions of associated facts (e.g., about gender differences) and a blindness to their profound immorality (at least according to current moral values). In fact, customary practices, such as eating meat, may be condemned in the future just as much as the aforementioned practices are condemned by us (as, indeed, some critics already do). We cannot delve into the issue of whether members of such societies are unable to see the wrongness of such practices (and are therefore blameless); or should be held fully accountable for their active or passive involvement in them; or perhaps held accountable, subject to some kind of cultural mitigating circumstances. Importantly, those who support either of the latter two views tend to think that collective ignorance is, at least in part, deliberate (Moody-Adams 1994; on the related issue of collective forgetting of past atrocities, see Ellerbrock and Hertwig, this volume).

Assuming that collective ignorance facilitates objectionable or even abhorrent practices, what legal means may be taken to fight it? In principle, all three types of measures discussed in this part could be used, mutatis mutandis, to counteract not only individual, but also collective ignorance. The difficulty, of course, is that when the government or hegemonic segments of society benefit from the collective ignorance, they are unlikely to try to counteract it, and may even try to frustrate attempts by minority groups to do so. Constitutional safeguards of free speech (in democratic countries) and supranational legal norms (that apply to all countries) may have some beneficial impact in this respect. In fact, issues of willful blindness have recurrently been discussed in international criminal proceedings (Van Der Vyver 2004:75–76). Lastly, transitional justice processes are sometimes designed to shed light on the truth of past atrocities and eradicate surrounding ignorance (be it deliberate or otherwise). Such processes may or may not be accompanied by legal sanctions against those who committed such atrocities. For example, truth and reconciliation commissions, like those established in postapartheid South Africa, may offer amnesty in exchange for testimony about the extent of the crimes committed, thereby creating an extensive historical record, and possibly preventing their recurrence (Simonovich 2004:351–352).

Facilitating Deliberate Ignorance

While deliberate ignorance may have negative consequences, it may also bring about positive outcomes. It may, for example, facilitate better decision making and promote other values in instances where full information might be detrimental. In this section, we review some such instances, starting from the general and basic issue of designing the system of government and the constitutional protection of human rights, and then moving on to more specific legal topics.

Veils of Ignorance

People often make decisions that affect not only, or even primarily, their own interests, but those of others as well. These include decision and policy making by public officials (such as legislators and judges), by CEOs and other office holders in corporations, and by attorneys and other fiduciaries. Such principal-agent relationships often involve a conflict of interest. More generally, people's interests are often misaligned with the overall social good (e.g., due to externalities). One technique, or rather group of techniques, for handling such conflicts of interest and misalignment is to keep decision makers ignorant of how their decisions may affect their interests. In Rawls's (1979) famous thought experiment, this is achieved by decision makers not knowing what their abilities, tastes, and positions will be in the society whose social order they design.

In the real world, people know their current characteristics and position, but a veil of ignorance may be approximated by creating uncertainty about how their decisions might affect them (Vermeule 2001). One path the law takes is to structure decision making such that decision makers do not know, or do not know precisely, where their interests will lie when their decisions are implemented. Thus, the separation between a constitution and ordinary legislation—and the supremacy of constitutional over ordinary norms—may be viewed as a means of establishing the fundamental norms of government and the protection of human rights before it is apparent who would benefit from those norms and who would be adversely affected by them (Vermeule 2001). The same holds true for the ideal of separation between the enactment of general norms by the legislature and their implementation in specific cases by the law courts or the executive (Nzelibe 2011; Vermeule 2001). In the same vein, public and private bodies (including the legislature, the Cabinet, and faculty councils) may adopt two-stage decision procedures with the purpose of achieving similar goals. For example, the Cabinet might first decide on the size of a critical budgetary cut and only then allocate that reduction among the various ministries and agencies. Similarly, an academic department may adopt a long-term development program, in which its needs and aspirations are defined in the abstract, and only then make specific decisions about new recruits. Another technique that may be viewed through this prism is the deferred implementation of legislation and other decisions. It is easier to overcome sectorial opposition to socially desirable reforms if their implementation is postponed, because such postponement creates uncertainty about the reform's gainers and losers, and because some of the future losers are not party to the present decision process (Porat and Yadlin 2006). In keeping with the same logic, it may be desirable to expedite negotiations about the design of general norms so as to conclude them before more information becomes available to the negotiators. A case in point is the negotiation of international instruments for addressing global warming and other climate changes, before it is known how each country would be affected by those changes.

Evidence Law: Admissibility Rules, Privileges, and Presumptions

Similar to the conventions of scientific research, the law of evidence sets considerable limits on the determination of facts. Among other things, the rules of evidence (and sometimes substantive legal rules) dictate that certain pieces of information are not made available to judicial fact finders. Some inadmissibility rules are based on the premise that the prejudicial effects of certain types of evidence outweigh their probative value. For example, information about a defendant's past convictions may be relevant to a determination of liability in a given case, but may also skew decisions toward a finding of guilt.

Other types of evidence, such as hearsay testimony, may be excluded due to their allegedly limited probative value (or the difficulty of assessing their probative value). Finally, some inadmissibility rules stem from policy considerations that are unrelated to the probative weight of the evidence. For example, evidence obtained through illegal police practices might be deemed inadmissible in order to incentivize the police to behave appropriately in future cases and to protect the fairness of the judicial process (for a general discussion of these rules in U.S. law, see Broun et al. 2013, vol. 1:897–991, 1013–1187 and vol. 2:175–257).

Relevant information may also be kept out of the reach of judicial fact finders when the people who possess the information, or the people to whom it refers, enjoy the legal privilege to refuse to disclose this information, as in the cases of attorney–client and physician–patient relationships (Broun et al. 2013, vol. 1:527–642). Clearly, these privileges promote values that compete with the primary goals of evidence law; namely, the accuracy of judicial fact-finding and the optimal allocation of the risks of error between litigants. Rules of burden of proof (including rebuttable legal presumptions that shift the burden from one party to the other) do not ordinarily exclude any information from the reach of the court. However, the stronger the presumption and the stricter the limitations on contradicting it, the more its effect resembles that of an exclusionary rule. For example, the marital paternity presumption (the presumption that the mother's husband is the father of a child) used to be conclusive, and even today is difficult to contradict in some jurisdictions (Glennon 2000). Thus, under California law,5 a motion for a paternity test can only be filed within two years of the child's birth.

Inasmuch as these doctrines effectively minimize the total sum of adjudicatory errors (or weighted errors, if some types of errors are considered more harmful than others), they are perfectly rational from a cost-benefit standpoint. Even if the exclusion of a piece of evidence or the adoption of a conclusive presumption dramatically increases the risk of judicial error in a particular case, it may still be true that adhering to such rules would minimize the total sum of errors (Berman 2004). This argument does not necessarily apply to rules of evidence that serve other purposes, each of which requires a delicate balancing of the pertinent considerations, which we cannot offer here. Finally, it should be noted that the efficacy of inadmissibility is challenged when judicial fact finders are exposed to inadmissible evidence. This can happen during adjudication when witnesses and attorneys intentionally or inadvertently reveal the inadmissible evidence. Inadmissible information may also reach fact finders from external sources, such as the media. Subject to certain nuances, the picture emerging from numerous empirical studies is that judicial decision makers are unable to completely disregard inadmissible evidence, so it affects their decisions (Steblay et al. 2006; Wistrich et al. 2005).

5 See California Family Code §§ 7540, 7541.


Evidence Law: Enhancing Evidence Credibility through Blinding

Evidence law can enhance accurate fact-finding not only by depriving judicial fact finders of certain information, but also by using blinding techniques in the process of gathering and preparing the evidence (for a discussion of comparable uses of blinding in other contexts, see MacCoun, this volume). Two primary examples are double-blind lineups and depriving experts of information that may bias their opinions.

Eyewitness identification often plays a key role in criminal (and sometimes civil) trials. However, reliable identification hinges on accurate encoding, retention, and retrieval of information, all of which are imperfect and prone to biases. Dozens of studies have demonstrated (a) that people are not very good at encoding strangers' faces and are particularly bad at identifying members of other races, (b) that memories tend to fade over time and may be contaminated (e.g., by exposure to media reports), and (c) that during the retrieval phase—often involving the use of lineups—witnesses are over-inclined to choose someone in the lineup and are influenced by (conscious or unconscious) cues given by the lineup administrator (Simon 2012:50–80; Zamir and Teichman 2018:568–572). Thus, there is a growing consensus in the forensic literature that double-blind lineups, in which the administrator does not know the identity of the suspect, can significantly increase the reliability of identification (Wells et al. 1998). However, since there is generally a trade-off between type-I and type-II errors, and since the adoption of this and comparable recommendations requires the allocation of more resources, this recommendation is far from being universally accepted (Steblay and Loftus 2013).

Comparable concerns are expressed about the expert opinions regularly used by litigants. When experts are hired by one of the litigants or are otherwise motivated to arrive at a particular conclusion, the reliability of their opinion is compromised due (at least) to confirmation bias and motivated reasoning. Several strategies, which we cannot explore here, have been proposed to mitigate these concerns by blinding experts to the identity of the party who hires them and by depriving them, at least initially, of information that might bias their investigation (for a collection of contributions on this issue, see Robertson and Kesselheim 2016:129–220).

Anonymity and Omitted Details

The notion that some types of information may adversely affect the impartiality of decision makers extends beyond the courtroom. For example, employers who recruit new employees, scholars who review their peers' manuscripts for publication, and professors who grade their students' papers are all vulnerable to all sorts of biases and prejudices that may lead their decisions astray (see also MacCoun, this volume). As previously noted, even well-intentioned people may be unable to overcome automatic and possibly unconscious cognitive biases.

Unidirectional—and sometimes even bidirectional—anonymity and the exclusion of certain bits of information (e.g., an applicant's religion or sexual orientation) may therefore facilitate unbiased decision making. Thus, for example, the British governmental online guidelines for interviewing new employees provide a list of protected characteristics (e.g., age, gender reassignment, and marital status) that an employer must not ask candidates about during job interviews (GOV.UK 2018). In the same spirit, Airbnb, the global company that offers an online marketplace for lodging, changed its policy (October 22, 2018) such that potential hosts can only view the photos of potential guests after accepting a booking request, thereby reducing race-based and other forms of discrimination. While it may be difficult to hide some of these characteristics from human decision makers, the advent of computerized decision making provides a unique opportunity to implement such measures (see also Teichman et al., this volume).

At times, even information that conveys positive probative value may be excluded from the decision process, as illustrated by the various prohibitions on discrimination in the insurance industry. In the United States, for instance, health insurers are prohibited from considering genetic information when setting premium rates or rules of eligibility.6 Turner (2009:316) notes that this law may be understood as a method of discouraging deliberate ignorance on the part of the individual, who would otherwise have an incentive to remain ignorant of his or her genetic information. Similarly, the European Court of Justice has invalidated an exemption in the equal treatment in goods and services directive which allowed for statistically based gender discrimination in insurance premiums, ruling that it is incompatible with the objectives of the directive and with articles 21 and 23 of the European Charter of Fundamental Rights.7 While such prohibitions may curtail the efficiency of the market, they are arguably justified on the basis of considerations such as the protection of susceptible populations or the protection of individual privacy (Avraham et al. 2014:201–221).

The exclusion of information from decision makers should, however, be considered with caution, as it can have unintended and even counterproductive consequences. For example, many jurisdictions across the United States have recently adopted "ban the box" policies, prohibiting employers from enquiring about applicants' criminal records, at least until late stages of the hiring process. These policies are meant to help people with criminal records find employment. However, employers who believe that ex-offenders are less qualified may turn to indirect ways of statistical discrimination to avoid hiring them. Indeed, both Agan and Starr (2018) and Doleac and Hansen (2016) find that these policies have harmed the job prospects of young low-skilled black and Hispanic men, who are more likely than other demographic groups to have a criminal record. Similar concerns arise even when the decision process is computerized (Kleinberg et al. 2018).

6 42 U.S. Code § 300gg–53(a), Prohibition of health discrimination on the basis of genetic information.
7 Association belge des Consommateurs Test-Achats (ASBL) v. Conseil des ministres 2011. ECJ Case C-236/09.


Ignoring and Forgetting

Forgetting is important at both the individual and the social level (see Schooler, this volume). Our ability to forget past events helps us overcome traumatic events, allows us to start over with a clean slate, and promotes our autonomy by liberating us from the shackles of the past (Mayer-Schönberger 2009:16–49). A society that never forgets limits the opportunity for a second chance and creates a chilling effect on its members, who know that the impact of every mistake they make will be perpetuated.

One notable way in which the law recognizes the importance of forgetting is the expungement of criminal records. Criminal records impose a significant burden on convicts attempting to reintegrate into society, as they hinder the ability to find employment, receive credit, or rent an apartment. These effects are not limited to those who have been found guilty, but extend to those who have been charged and acquitted, and even those who have only been arrested (Jacobs and Crepet 2008). To mitigate the effect of this mark of Cain, many jurisdictions allow for a process of expungement, which may mean that a criminal record is sealed, vacated, or completely destroyed. The conditions under which expungement is attainable vary, and most jurisdictions allow for a more lenient treatment of juvenile records in light of the greater emphasis on rehabilitation. In some cases, the process is automatically initiated after a certain amount of time has passed, whereas in others it may be granted only upon petition. The implications of expungement also vary. For example, Section 651:5 of the New Hampshire Criminal Code states that "the person whose record is annulled shall be treated in all respects as if he or she had never been arrested, convicted or sentenced…." Other jurisdictions state that such a person is entitled, even under oath, to deny that the expunged incident ever occurred.8

8 See Connecticut General Statutes § 54-142a(e), Erasure of criminal records; see also section 4 of the U.K. 1974 Rehabilitation of Offenders Act.

The notion of forgetting and starting over with a clean slate is similarly evident in the regulation of reports about people's financial history. Credit reports are used by creditors to determine whether, and at what interest rate, they will offer credit to a consumer. Reports may also be used by insurance agencies, property owners who wish to rent out their property, and sometimes even prospective employers. The report includes information about previous loans, debts, defaults, and bankruptcies. While this information is certainly useful, many countries set a time limit, after which the information may no longer be included in the report. For example, in the United States, bankruptcies are omitted from the report after ten years,9 and many countries in the European Union impose even stricter standards (Feretti 2008:103–121). These rules balance the utility of the data in predicting default rates against the interest of allowing individuals a clean slate.

9 15 U.S. Code § 1681c(a)(1), Requirements relating to information contained in consumer reports.


omitted from the report after ten years,9 and many countries in the European Union impose even stricter standards (Feretti 2008:103–121). These rules balance the utility of the data in predicting default rates against the interest of allowing individuals a clean slate. The advent of the digital age poses a particularly difficult challenge to legal attempts to promote forgetting. While in the past forgetting was the norm and remembering the exception, digitization has reversed this state of affairs (Mayer-Schönberger 2009). Digitization—and the concomitant ability to index, store, and search huge amounts of information—has made forgetting a lot more difficult than it used to be. Particularly challenging to the goal of permitting a clean slate is the pervasive use of search engines that have perfected the capacities of indexation, search, and retrieval, thus creating an eternal memory. The repercussions of eternal memory have led to an intense debate about the potential legal right to be forgotten (Leta Jones 2016). The intent of such a right is to grant people a certain amount of control over the online circulation of details about them. Depending on its scope and exact denition, such a right could possibly allow one to de-index certain results from Google, erase posts from Facebook and other social media outlets, and request the revision or removal of other online references. Underpinned by a desire to protect reputation and privacy and to allow one to shape one’s own identity free from the burdens of the past, such legally induced amnesia also has serious implications for free speech and the free ow of information online. For this reason, detractors of the right to be forgotten are concerned about the creation of “black holes” of information and attempts to rewrite history (Rosen 2012). Thus, while the European Union has recently introduced its new General Data Protection Regulation, which explicitly recognizes a right to be forgotten, in the United States the First Amendment would most likely prevent similar initiatives (Larson 2013). Without delving into the normative debate, we note that while it may restrict certain forms of speech, the right to be forgotten may also facilitate expression and prevent the chilling effect of knowing that every Facebook post or tweet we make may come back to haunt us in the future. Attorney–Client Relationships and Perjury Sometimes the law does not necessarily encourage deliberate ignorance but nonetheless tolerates it by not equating deliberate ignorance with actual knowledge. For example, under the model rules for professional conduct provided by the American Bar Association (ABA), attorneys have a duty of candor toward the court that prohibits them from knowingly deceiving the court (ABA Rule 3.3). In this context, “knowingly” is construed as pertaining to actual knowledge only (ABA Rule 1.0(f); Roiphe 2011:190). This regime has cultivated a practice of deliberate ignorance amongst defense lawyers. While they 9

9. 15 U.S. Code § 1681c(a)(1), Requirements relating to information contained in consumer reports.


While they are not permitted to knowingly allow a witness to commit perjury, they may deliberately avoid information, thereby enabling their clients to present false testimony (Roiphe 2011:197). In an effort to defend this regime, it has been argued that requiring lawyers to investigate their clients’ statements would undermine the attorney–client relationship and induce clients to hide information from their lawyers (Luban 1999:976–980). Arguably, however, it follows that lawyers should simply be allowed to introduce false testimony under certain conditions, as the deliberate ignorance route impairs communication, which is at the core of the attorney–client relationship (Roiphe 2011). At any rate, outside the context of perjury, the willingness to accept deliberate ignorance on the part of lawyers is limited. Notably, the role of deliberately ignorant lawyers in the Enron scandal has led to stricter regulation of corporate lawyers. Such lawyers now have a duty to investigate suspicions of client misconduct, report it within the corporation, and in certain cases even withdraw representation and inform the Securities and Exchange Commission (Cramton et al. 2004).

The Downside of Informed Consent

The informed consent doctrine serves several interrelated purposes in tort law, chief of which are (a) facilitating patients’ compensation for injuries they have suffered as a result of medical treatment, in instances where, but for the lack of informed consent, the treatment was adequate and involved no negligence (Peck 1984; Raab 2004); and (b) protecting patients’ autonomy (Jones 1990). While contemporary discussions highlight patient autonomy, the former, traditional purpose still plays a major role in practice, as courts strive to provide relief to unfortunate patients who have incurred bodily injury, irrespective of physician negligence (Brennan et al. 1996). To fulfill the former purpose, but not the latter, patients must establish that had they received the relevant information, they would have refused to undergo the treatment, thereby avoiding the resulting injury (Maclean 2009:183–188). Patients’ autonomy is compromised whether or not they would have consented to the treatment, so it does not hinge on such causality (although the magnitude of the harm to autonomy may hinge on it).

Neither of the two purposes necessarily assumes that patients actually wish to be fully informed about their condition and the potential prospects and risks of the proposed treatment. Arguably, respect for patients’ autonomy requires that they receive all the relevant information, irrespective of whether they wish to receive it (Ost 1984; Turner 2009:347), and once patients are fully informed, they might refuse to undergo a given treatment even if they initially preferred not to receive the relevant information (for further discussion of the right not to know, see Berkman, this volume).


Nonetheless, the doctrine of informed consent is associated with the assumption that people wish to be informed about their medical condition, the available treatments, and the attendant prospects and risks involved. If patients typically prefer that their physician make the decision for them, and agree to follow the physician’s advice whatever information they receive, then the causal link between not getting the relevant information and consenting to undergo the relevant treatment is severed. Arguably, this would be the case even if a minority of patients do not share this typical preference, as long as the plaintiff fails to prove that s/he belongs to that minority. As for patient autonomy, it plausibly requires that the patient’s (explicit and perhaps even implicit) choice not to receive information is respected as well (Andorno 2004; but see Harris and Keywood 2001).

Contrary to much of the legal discourse, the available empirical data shows that patients typically prefer to follow their physician’s advice rather than make the necessary decisions by themselves (Schneider 1998:35–46). And while patients’ desire for information is much stronger than their desire to make decisions (e.g., Ende et al. 1989), many also prefer not to know at least some aspects of their medical condition, prognosis, and the risks involved in a given treatment (e.g., on the preferences of elderly cancer patients, Elkin et al. 2007; on patients’ desire to get information about various aspects of a surgery, Asehnoune et al. 2000; on the aversion of some cancer patients to receiving technical information before major cancer surgeries, McNair et al. 2016:261 and Schneider 1998:110–111). Insofar as this is true, there is a tension between the legal norms that strongly incentivize physicians to provide as much information as the patient can reasonably comprehend and patients’ common preference (which is not necessarily irrational) to remain uninformed.

This tension might be eliminated by entitling patients to damages for injuries arising from medical treatment on a no-fault basis. Under such a regime, the law would not have to frustrate patients’ option to remain ignorant in order to compensate them for their loss and suffering in appropriate cases. Physicians would still be liable for violating the duty to provide patients with relevant information, but only in instances where patients do wish to be informed (or at least do not wish to remain uninformed). Of course, moving from fault-based to no-fault liability for medical injuries, and delineating the scope of such liability, would require careful consideration of a multitude of factors that lie beyond the scope of the present discussion (Studdert and Brennan 2001; Weiler 1993).

Fostering Settlements and Plea Bargains

People’s decisions are influenced by the anticipated feeling of regret; that is, by the expectation that if it transpires that they have made the wrong choice, they will experience regret. The anticipation of regret depends on what one expects to know ex post. The decision maker may expect to know the outcomes of both the chosen option and the forgone one(s) (full knowledge), or only those of the chosen option (partial knowledge) (Ritov and Baron 1995).


Sometimes, one option entails full knowledge while the other involves only partial knowledge. In such cases, the latter option is typically more attractive to regret-averse people, because not knowing the outcome of the forgone option largely shields them from anticipated regret. This is a common explanation for the pervasiveness of settlements in civil proceedings and of plea bargaining in criminal ones (Zamir and Teichman 2018:505–507, 522). Inasmuch as the legal system strives to encourage settlements and plea bargains, courts should make sure not to reveal how they would have decided the case in the absence of a settlement or a plea bargain.
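To make the logic of anticipated regret concrete, the following minimal sketch (in Python) compares a sure settlement with a risky trial for a regret-averse litigant. The payoffs, probability, and regret weight are invented for illustration; the sketch is not a model from the settlement literature, merely one way to operationalize the full- versus partial-knowledge asymmetry described above.

```python
# A minimal sketch (invented numbers): a regret-averse litigant compares a
# sure settlement with a risky trial. Going to trial reveals the verdict, so
# regret can attach to it (full knowledge); settling leaves the counterfactual
# verdict unknown (partial knowledge), which largely shields against regret.

SETTLEMENT = 50_000            # sure payoff if the case settles
TRIAL_WIN, TRIAL_LOSS = 100_000, 0
P_WIN = 0.5                    # assumed probability of winning at trial
REGRET_WEIGHT = 0.5            # assumed strength of regret aversion

def expected_value_trial() -> float:
    return P_WIN * TRIAL_WIN + (1 - P_WIN) * TRIAL_LOSS

def anticipated_regret_trial() -> float:
    # After a lost trial, the litigant knows the settlement they passed up.
    return (1 - P_WIN) * (SETTLEMENT - TRIAL_LOSS)

utility_trial = expected_value_trial() - REGRET_WEIGHT * anticipated_regret_trial()
utility_settle = SETTLEMENT  # forgone verdict stays unknown: regret term ~ 0

print(f"trial:  EV = {expected_value_trial():,.0f}, "
      f"regret-adjusted = {utility_trial:,.0f}")
print(f"settle: regret-adjusted = {utility_settle:,.0f}")
# Both options have the same expected value (50,000), but anticipated regret
# tips the regret-averse litigant toward settling.
```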

Conclusion

In this chapter, we have surveyed ways in which the law overcomes some instances of deliberate ignorance and fosters others. Along with major issues, such as the doctrine of willful blindness and institutional veils of ignorance, we touched upon more specific and even peculiar examples. In addition to describing existing legal norms, we highlighted cases in which the law should arguably combat deliberate ignorance more effectively or, conversely, facilitate more of it. Given its limited scope, this chapter should primarily be taken as an invitation to discuss these theoretically fascinating and practically important issues further.

Acknowledgments

We are grateful to Christoph Engel, David Hagmann, Ori Herstein, Daphna Lewinsohn-Zamir, Ofer Malcai, Hanan Raviv, Nora Szech, Sonja Utz, and Limor Zer-Gutman for helpful comments, and to Inbal Elbaz for excellent research assistance. This research was supported by the I-CORE Program of the Planning and Budgeting Committee and the Israel Science Foundation (Grant No. 1821/12).

17 Deliberate Ignorance: Present and Future

Christoph Engel and Ralph Hertwig

In 2016, as we penned our conceptual and explorative article on the phenomenon of deliberate ignorance (Hertwig and Engel 2016), we felt a bit like explorers setting sail for an unknown destination. Our spirits were high and we were ready for an intellectual adventure. Fascinated by the richness of the phenomenon, we soon noticed that others in the fields of economics, sociology, law, and medicine had been travelling in a similar direction, guided by terms such as “information avoidance,” “willful blindness,” and even deliberate ignorance (e.g., Robbins 1990). Still, we found the vast territory of deliberate ignorance to be mostly uncharted and hoped that our article would serve as an inspiring travelogue, describing our attempt to survey the lay of the land and inviting others to join us in exploring the phenomenon further. To our delight, our excitement proved contagious, as demonstrated by the lively discussions that emerged at this Ernst Strüngmann Forum. In this final chapter, we reflect on specific areas that have left their mark on us both. We begin with an observation that ran throughout all discussions, and then present our thoughts, organized around the four thematic areas of the Forum.

The Power and Perils of Interdisciplinarity

Statements calling for interdisciplinary analysis of a research topic are ten a penny. Yet, to comprehend the phenomenon of deliberate ignorance and its implications requires exactly that. Deliberate ignorance is a human behavior with strong normative and institutional implications: it plays out individually and collectively, is subject to temporal dynamics and changes in norms, and can be investigated by means of experiments, surveys, interviews, modeling, or archival work. No single discipline has a full command of these tools, concepts, and dimensions. Against this background, we submitted a proposal to the Ernst Strüngmann Forum, which has a reputation for promoting truly interdisciplinary discourse, and were delighted when our proposal was accepted.


Interdisciplinary discourse is often hard. Typically, the humanities and social sciences do not share the behavioral sciences’ commitment to logical positivism, behavioral experimentation, modeling, and quantification. The paradigmatic theories of individual and collective behaviors (e.g., expected utility theory and game theory) adopted by many economists and psychologists are not necessarily compatible with, say, the terminology and explanatory concepts used by social scientists, or with historians’ foci on the dynamics of change across time and the intricate interdependency of individual and collective processes. Relatedly, the descriptive concept of deliberate ignorance may feel like an intellectual affront to a historian for whom the study of history is at its core enlightenment, understood as a necessary condition of human liberty (Nipperdey 1980). Finding common ground amidst all this was not easy, as many discovered at the Forum. Equally, though, such intellectual provocation can also be the starting point for something new. We hope that this volume will be viewed as an attempt to submit the polymorphous phenomenon of deliberate ignorance to an analysis without borders. Others are necessary and should follow.

What Exactly Is Deliberate Ignorance?

Definitions are simplistic constructions with a purpose. They draw boundaries because classification has instrumental value. They can stimulate thought, guide investigation, enable understanding, suggest evaluations, and even inform the design of policy interventions. Whether or not a case belongs to the territory of deliberate ignorance is therefore not an ontological question. We do not mean to propose a cut-and-dried test analogous to that for blood types, where only, say, rhesus positive counts as the phenomenon in question. The definition must fit the intended research purposes. Admittedly, these purposes are diverse and need not be fully aligned in their definitional implications. If the aim is to catalog ever more instances of deliberate ignorance, a wider definition may be preferable to ensure that no interesting case is overlooked. If the coverage is too wide, however, the concept risks losing its bite. In addition, it becomes increasingly difficult to understand the phenomenon, let alone to model it rigorously. Narrowing the scope may be necessary to detect the fine-grained structure of the phenomenon. Yet too narrow a definition may hinder normative appraisal of instances at the margin that are normatively no less troublesome than the prototypical ones. Finally, institutions are lumpy responses to lumpy perceived problems (North 1990). An institutional designer may therefore choose a different criterion for the trade-off between precision and breadth than would be used by a modeler or surveyor of deliberate ignorance. In some contexts, a narrow and precise definition may be necessary to design an institutional intervention and convince policy makers to implement it.


narrow denition of the cause for intervention may miss the target and prove counterproductive. Like many other concepts, deliberate ignorance consists of a hard core— represented by a repository of paradigmatic examples (see Appendix 14.1 in Krueger et al., this volume)—and a somewhat fuzzy periphery. Where to draw the line of demarcation will depend on the research question and its implied decision criterion. Below, we illustrate this point by reference to a few cases discussed at the Forum. Heuristics and Deliberate Ignorance Brown and Walasek as well as Kornhauser (both this volume) have asked whether the nonuse of information, which is not only known to exist but has been encoded in memory, implies deliberate ignorance. Dened thus broadly, the concept would also encompass any heuristic (e.g., Gigerenzer et al. 2011) that is used deliberately. Indeed, one denition of a heuristic is “a strategy that ignores part of the information, with the goal of making decisions more quickly, frugally, and/or accurately than more complex methods” (Gigerenzer and Gaissmaier 2011:454). There is a vast literature on heuristics, to which both of us have contributed. The debate remains controversial, with one key point of contention being the rationality of heuristics: Does heuristic decision making lead to more—or to more serious—errors than “rational” procedures, as dened by logic or statistical models, or can it perhaps outperform more complex strategies when applied in the right environments (the “ecological rationality” of heuristics; Gigerenzer and Gaissmaier 2011; Hogarth and Karelaia 2007; Spiliopoulos and Hertwig 2019)? Is heuristic decision making an instance of deliberate ignorance? In our view, the answer must be no. Dening any act of ignoring information as deliberate ignorance would miss the novelty of the concept and overlook what makes it psychologically so interesting. A person may rely on a heuristic to offset cognitive limitations, as suggested by the heuristics-and-biases view of heuristic decision making (Kahneman 2011), or because they have learned that doing so often leads to better outcomes, as suggested by the ecological rationality view of heuristic decision making (Gigerenzer et al. 2011; Hertwig et al. 2019). Yet both reasons fail to produce the sense of perplexity often generated by instances of deliberate ignorance. As we wrote (Hertwig and Engel 2016:360): We are particularly interested in situations where the marginal acquisition costs are negligible and the potential benets potentially large, such that—from the perspective of the economics of information […]—acquiring information would seem to be rational.

In our view, the tension resides precisely here: in the individual or collective choice not to consult information that could be acquired at negligible costs with potentially substantial benefits. Heuristic decision making is not typically characterized by this tension. In fact, it is the nonuse of additional information that has potentially substantial benefits in heuristic decision making, as it helps to escape, for instance, the curse of overfitting (Gigerenzer and Brighton 2009).

Yet, the usefulness of a definition depends on its purpose and the research question under consideration. Those interested in normative reasons for not generating or retrieving available information from memory or the external world may benefit from a normative discussion of heuristic decision making and its relationship to deliberate ignorance. From some normative perspectives, knowing the content of possibly troublesome information but ignoring it (heuristic decision making) may be even more problematic than knowing about the existence of possibly troublesome information but not exposing oneself to its content (deliberate ignorance).
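For a concrete instance of the kind of information nonuse at issue in this subsection, consider the following minimal sketch of a lexicographic strategy in the spirit of take-the-best (Gigerenzer et al. 2011). The cues, their values, and their validity ordering are invented for illustration; the point is simply that the first discriminating cue decides, and all remaining information goes deliberately unused.

```python
# A minimal sketch of a lexicographic heuristic in the spirit of take-the-best
# (Gigerenzer et al. 2011). Cues are checked in order of (assumed) validity;
# the first cue that discriminates decides, and all remaining cues are ignored.
# Cue names, values, and the validity ordering are invented for illustration.

from typing import Optional

# Cues ordered from most to least valid (an assumption for this example).
CUE_ORDER = ["capital_city", "has_airport", "has_university"]

CITIES = {
    "City A": {"capital_city": 1, "has_airport": 1, "has_university": 1},
    "City B": {"capital_city": 0, "has_airport": 1, "has_university": 1},
}

def take_the_best(option_a: str, option_b: str) -> Optional[str]:
    """Pick the option favored by the first discriminating cue."""
    for cue in CUE_ORDER:
        a, b = CITIES[option_a][cue], CITIES[option_b][cue]
        if a != b:
            # Decision made: every cue after this one goes unused.
            return option_a if a > b else option_b
    return None  # no cue discriminates; guess or fall back to another rule

if __name__ == "__main__":
    print(take_the_best("City A", "City B"))  # -> City A, decided by one cue
```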


Forgetting, Expungement, and Deliberate Ignorance

The deliberately ignorant individual has reason to expect that decision-relevant information is available, and yet chooses not to access that information. A functionally similar effect is achieved if the individual has been in possession of the information but through some means—such as (directed) forgetting—successfully removes it from memory before facing the need to act on it (see Schooler, this volume, for a discussion of the family resemblance between forgetting and deliberate ignorance). Institutionalized deliberate ignorance via “purposeful forgetting” also plays a role in a legal context. A classic example is expungement, common in juvenile criminal court proceedings. It entails erasing or removing from state or federal records the information that a minor has been convicted of a crime, by sealing or destroying the record. Some jurisdictions also make it illegal for private parties such as future employers to request this information (e.g., New Hampshire Criminal Code Section 651:5 X (f)).

Someone who has forgotten something cannot use that information in the context of a given decision, but they may be able to retrieve the semantic or biographical fact on another occasion. Expungement is more radical: the information is completely removed from the record. The effect of this institutional intervention thus transcends the concrete instance. The focus is no longer on a specific decision, but on any decision that might be affected by the information in question. Individual forgetting and institutional expungement are processes that occur after the fact. Both reset an individual, collective, or institutional information status to a state of ignorance. In this sense, one may argue that forgetting and expungement are not identical to deliberate ignorance, which stops the information in question from entering the system in the first place. Yet, phenomenologically, these processes appear to belong to the same functional family.


Deliberate Ignorance Is More Than Information Avoidance

The term “information avoidance”1 has been used in the health domain to describe behaviors such as parents-to-be avoiding genetic testing on an unborn child, gay and bisexual men declining to learn their HIV status, or women avoiding regular pelvic checkups (Howell and Shepperd 2012; Sweeny et al. 2010). The term is also used in Golman et al.’s highly informative review article (Golman et al. 2017). The pivotal reason behind our alternative terminological choice is that “information avoidance” suggests what Howell and Shepperd (2012) characterized as “defensive responding” (p. 259), turning the act of not seeking or using available information into a form of psychological reactance, possibly even a “public health concern” (p. 262) in need of therapy. Yet the choice not to know, as many of the examples discussed in this book attest, is not invariably dysfunctional. Indeed, there are numerous individual and institutional contexts in which deliberate ignorance affords a strategic advantage (e.g., Auster and Dana, this volume), attenuates the impact of selection biases (MacCoun, this volume), constitutes a legal right (Berkman, this volume), or seems imperative as a means of keeping transitional societies together (Ellerbrock and Hertwig, this volume).

Consider, for illustration, the choice of someone who has been diagnosed with a serious illness but decides not to ask about their prognosis. In one framing, this choice can be seen as a form of denial, the irrational avoidance of information fueled by fears of a bleak future. In another framing (Miller and Berger 2019), it is a positive choice:

When faced with serious illness, being able to make decisions about the flow of information is one of the most life-affirming things you can do. It’s a way to declare: I am alive and it’s still my right to choose what’s best for me.

Patients may legitimately want to shield themselves from a menacing, and not necessarily accurate, timeline against which each day is ticked off. Calling this behavior “deliberate ignorance” does not negate the associated detrimental effects that actively avoiding information may bring (see Krueger et al. and Teichman et al., this volume). It is this inescapable ambiguity that makes the phenomenon of deliberate ignorance so interesting and, in our view, takes it beyond the normatively charged concept of information avoidance.

The example of not wanting to know one’s medical prognosis raises another issue. As a patient, the choice not to know does not necessarily mean that the information should be concealed from everybody. Often it means that the patient needs or wants somebody else (e.g., a physician or partner) to process it.

1. To the best of our knowledge, Frey (1982) was the first to use the term “information avoidance,” but others had previously referred to the avoidance of dissonant information (e.g., Mills 1965).


A tool not (yet) implemented in standard medical practice enables patients to communicate to their physicians their preferences to know or not to know, ranging from “Tell me everything” to “I don’t wish to know any information about my prognosis but I authorize you to speak with [blank] about my case and for you to answer any question that this person may have about my likely prognosis and treatment” (see Miller and Berger 2019). Is this kind of delegation a way of deliberately ignoring information, or is it a way of deliberately using information?

Obfuscating Information

Obscuring access to information is a strategy used to overcome bias, increase impartiality in selection processes, and improve the quality of the decisions reached. Many orchestras, for example, utilize “blind auditions” in the preliminary rounds of the selection process for new members. Briefly, an auditioning musician plays behind a screen, so that visual cues are removed from consideration. The intent is to force the selection committee to focus on a candidate’s musical performance, not on gender, race, or a person’s affiliation with certain teachers or musicians2 (see MacCoun as well as Krueger et al., this volume). In academia, the peer review process constitutes another example. Here, a double-blind procedure is used to increase fairness in the evaluation of scientific performance (MacCoun, this volume). Conceptually, both examples shield the identity of individuals, so that decision makers must rely on content.

A related strategy deliberately adds noise to information, as is standard practice in some scientific disciplines. Here, empirical research aspires to make causal statements about a population, along the lines of “whenever process A is observed, phenomenon B will happen.” For the most part, though, scientists are unable to observe an entire population; they can only observe a sample of it, and this sample may not be representative of the population. There is thus a risk of overinterpreting the sample and wrongly inferring a causal relationship from a random co-occurrence in the sample. This problem is known as overfitting. To safeguard against it, data are deliberately perturbed (e.g., random noise is added to each data point) before they are analyzed. Scientists will not report an observed effect unless it stands the test of this deliberate obfuscation (MacCoun, this volume).
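As a minimal illustration of such deliberate perturbation, the following sketch adds random noise to two made-up samples and reports an effect only if it survives the noise. The data, noise level, and reporting threshold are all invented; actual blind-analysis procedures (MacCoun, this volume) are considerably more sophisticated.

```python
# A minimal sketch of deliberately perturbing data before analysis, in the
# spirit of the blind-analysis idea described above. The data, noise level,
# and decision threshold are all invented for illustration.

import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

def perturb(sample, noise_sd=1.0):
    """Return a copy of the sample with random noise added to each point."""
    return [x + random.gauss(0.0, noise_sd) for x in sample]

# Two made-up samples whose means differ only slightly.
group_a = [5.1, 5.3, 4.9, 5.2, 5.0]
group_b = [4.8, 5.0, 4.7, 5.1, 4.9]

THRESHOLD = 0.5  # report an effect only beyond this margin (assumed)

raw_diff = mean(group_a) - mean(group_b)
blinded_diff = mean(perturb(group_a)) - mean(perturb(group_b))

print(f"raw difference:     {raw_diff:+.2f}")
print(f"blinded difference: {blinded_diff:+.2f}")
# Only an effect that survives the added noise would be taken seriously;
# here the fragile raw difference would not be reported.
print("report effect?", abs(blinded_diff) > THRESHOLD)
```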

2. How good a hiring committee’s judgments about musical performance are without visual information is another question. Although auditory information is commonly assumed to be the most important information in the evaluation of music, experimental studies suggest that people “depend primarily on visual information when making judgments about music performance” (Tsay 2013:14580).


Whether obfuscation qualifies as deliberate ignorance depends on the research question being asked. If one focuses solely on obfuscation, it is clear that the critical information is, by definition, available. This speaks against broadening the concept of deliberate ignorance. On the other hand, obfuscation increases both the cost of information retrieval and the risk that critical information will be missed. If obfuscation is, at least in principle, viewed as deliberate ignorance, its boundaries must be defined. Does what counts as deliberate ignorance depend on the horizon of the intended recipient? A mere nuisance for the savvy user may prove an insurmountable obstacle for the novice. Should self-obfuscation be treated differently from inducing a third party to hide information, or from benefiting from an outsider making it more difficult to access information?

The Construction of Reality

Berger and Luckmann (1991) hold that reality does not simply exist; it is socially constructed. Take, for instance, the construction of reality in the U.S. impeachment trial that is ongoing as we write this chapter. Did President Trump press legitimately for an investigation into a political rival’s son in an effort to fight a corrupt elite? That is one socially constructed reality. Another is that he solicited the interference of a foreign government to help him win the 2020 election. Deliberately accepting one of the two constructed realities as the full and objective truth may have multiple effects. For instance, it allows people to remain comfortably ignorant of any other information and/or revelations that may emerge in the future that are more consistent with the other reality. Should constructing and/or adopting one narrative in this way be subsumed under the heading of deliberate ignorance? Again, it depends on the research question. From an individualistic perspective, it may be important to distinguish between deliberate ignorance and the production and dissemination of a false narrative. The distinction between omission and commission may also cast a different normative light on such constructive efforts. Yet communication theorists are likely to argue that all forms of information processing are constructive. From this perspective, individuals do not mechanically integrate multiple pieces of information; instead, they make sense of communicative acts. Accordingly, drawing a strict boundary between looking the other way and telling an alternative narrative would be fallacious.

Summary

The concept of deliberate ignorance consists of a hard core of meaning and a fuzzy periphery. How the boundary is drawn depends on the research question as well as on the decision criterion. Many fascinating phenomena, some of which we have raised here, are located at the fuzzy periphery. It is important to note that the term “deliberate ignorance” suggests a simple dichotomy: the decision maker either knows or chooses to remain ignorant. In many contexts, however, knowledge and ignorance are matters of degree. For example, a newspaper reader who reads the first paragraph of an article that graphically describes the impact of industrial-scale beef farming on animal welfare may choose not to read the whole article, not wanting to know any more about the provenance of their affordable supermarket beef. They thus know something but not everything, replacing complete ignorance with ambiguity.


How to Model Deliberate Ignorance?

Behavior defined as deliberate ignorance can be analyzed and modeled in more than one way.

Individualistic versus Holistic Models

The most fundamental conceptual divide lies between the individualistic and holistic perspectives. From an individualistic perspective, the focus is on an agent’s decision to ignore information. Here, ignorance is a deliberate choice: the agent had the freedom to generate, retrieve, or use a defined piece of information. The question becomes: Why did they choose not to do so? A holistic perspective, by contrast, focuses not primarily on agency but on why and how information remains unused that otherwise, in a counterfactual world, could have been available. The object of investigation is the social process by which a common understanding of social reality is forged. Why could a community not see an alternative interpretation of a set of facts? From this perspective, the construction of one understanding—and the non-construction of an alternative understanding—is a political act.

Individuals versus Higher-Order Agents

If an individualistic perspective is taken, the first step is to define the agent of interest. Is it the individual deciding in isolation, say, not to get tested for a genetic risk? Or is the agent a group of people, such as the individual’s family, who would potentially also be affected by the test results? If the latter, does deliberate ignorance require that each member of the group not know? Is the agent an institution, such as a firm? If so, whose knowledge is attributed to this legal entity: that of the board members only, or that of any employee? Is it a sufficient condition for deliberate ignorance that the agent has not taken the necessary steps to ensure that decision-relevant knowledge becomes available?

Consider product liability: a manufacturer or seller of a defective product can be held liable for injuries arising from its use. Many legal orders define liability in such a way that it does not suffice for the manufacturer simply not to know about the defect. Rather, they must ensure that product development and production are organized in such a way that they would be alerted to any sign of a defect. Should it count as deliberate ignorance if, for instance, the safety analysis for a new flight control system is organized in such a way that critical flaws are not detected? Is a professional deliberately ignorant if they do not run tests that are standard in the profession? Is a group deliberately ignorant if it excludes from its membership an individual who would very likely have known critical information?


they fail to organize the ow of information within the corporation in such a way that relevant information is brought to the attention of the board? Utility versus Strategic Interaction Again, assuming an individualistic perspective, should the focus be placed on the motives of an individual who decides in isolation, or on understanding the strategic advantage of not knowing? The strategic perspective is technically more involved, as the conditions for equilibria need to be dened. Higher-order effects need to be considered. If, for instance, deliberate ignorance affords a strategic advantage, that advantage presupposes that the counterpart knows or believes that they are alone in having the relevant knowledge. The “game of chicken” is a paradigmatic example. In this model of conict in game theory, two players head toward each other; if both stay on track, they will collide. The logic of the game is that the player who yields rst loses the game. Yet if neither gives in, both will perish. Consider two vehicles: one self-driving, the other driven by a human. The vehicles approach an unmarked intersection and need to negotiate for priority. If the human driver believes that the self-driving car does not know that a collision could be fatal and will therefore press for priority, the onus is on the human driver to stop and (in game theoretic terms) lose. Deontological Motives Why might an individual in a nonstrategic situation prefer not to know? Individualistic modelers need assumptions about people’s motives to generate predictions. These motives may be utilitarian. The individual expects to be better off, in whatever sense, if they do not acquire a piece of information. They may, for instance, be concerned that they will be unable to not use the information and feel obliged to make choices they do not want to make. Alternatively, the motives may be deontological. An individual who treasures enlightenment values may feel morally obliged to access and use the information. At the same time, they may also hold privacy or secrecy in high regard and balk at the idea of invading another individual’s legally protected private sphere, even if they could exploit the knowledge gained to their own benet.

What Are the Normative Implications of Deliberate Ignorance?

Deliberate ignorance eludes categorical normative conclusions and recommendations. Its manifestations are neither always normatively suspect nor always in accord with principles of ethics and rationality. Recently, it has been asked whether deliberate ignorance calls for interventions to protect the interests both of those who desire to ignore information and of those who do not (Sharot and Sunstein 2020).


We suggest that the high degree of context specificity requires case-by-case analysis, thus making normative investigations of deliberate ignorance intriguing and regulatory interventions challenging.

Deliberate Ignorance: The Problem or the Answer?

Recent years have seen alarming developments in the form of deepening ideological divides and rising political polarization. In many countries, politicians, activists, and indeed voters appear to be deeply divided on issues such as inequality and immigration, with the divisions falling increasingly along party lines (e.g., DellaPosta et al. 2015; Iyengar and Westwood 2015; Sides and Hopkins 2015). One obvious fear is that this dynamic of polarization is intimately connected with deliberate ignorance. If asked to name a single problematic aspect of human reasoning that overrides all others, many psychologists will probably cite confirmation bias (see, e.g., Evans 1989): the tendency to seek or interpret evidence “in ways that are partial to existing beliefs, expectations, or a hypothesis in hand” (Nickerson 1998:175). It is easy to see how this tendency insulates people from views that contradict their preexisting beliefs, creating fertile ground for political polarization.3 As one possible cause and motive of deliberate ignorance, confirmation bias can foster undesirable behaviors.

At the same time, disinformation, propaganda, and fakery—particularly, but not only, in the digital ecosystem—are matters of growing concern worldwide (see Lewandowsky, this volume). According to a large-scale analysis of Twitter data, the “amount of false news online is clearly increasing” (Vosoughi et al. 2018:1150). The persuasive power of a falsity resides, among other factors, in the insidious fact that false information tends to be novel, and novelty elicits what is, under normal circumstances, an adaptive response: it grabs people’s attention. Analyzing all 126,000 major news stories distributed on Twitter from 2006 to 2017 and verified to be true or false, Vosoughi et al. found that the truth simply cannot compete with hoax and rumor. Falsehood consistently dominates the truth on Twitter: it reaches more people, penetrates deeper into social networks, and spreads much faster. When “falsehood flies, and the truth comes limping after it,” as Jonathan Swift so elegantly wrote three centuries ago, the competence to discern true from false news becomes essential. By the same token, the competence to exercise deliberate ignorance is becoming a critical civic skill. Once a person, a news source, a website, or an organization has been identified as regularly communicating falsity, users need to resist it. They need to withstand the temptation to fall for novelty, surprise, and the deceptive promise of relevance.

3. Interestingly, a recent study found initial evidence of possible backfire effects of exposing people to opposing views on social media. Attempts to introduce users to a broad range of opposing political views on social media sites such as Twitter might not only be ineffective but also counterproductive: they may actually increase political polarization (Bail et al. 2018).


Here, deliberate ignorance is anything but the sign of an intellectually incurious and lethargic cognitive system that yearns for comfort and consistency; it requires executive control and a system that strives for veracity rather than consistency. This is one type of deliberate ignorance that we had in mind when referring to its function as a “cognitive sustainability and information-management device” (Hertwig and Engel 2016). Deliberate ignorance, therefore, can be both the problem and the solution.

Who Decides and the Problem of Externalities

Another key normative aspect of deliberate ignorance is that the choice not to know often affects the well-being of others (and, to use the terminology of economics, leads to externalities), delegates responsibility, and ultimately raises the issue of who has the (political) power to decide. Consider, for illustration, Berkman’s (this volume) discussion of the right not to know one’s genetic makeup and the debate that has broken out in the medical community over this right. In response to the wider availability and improved utility of large-scale genomic sequencing (e.g., relating an increasing number of genetic variants to clinical phenotypes), the American College of Medical Genetics and Genomics (ACMG) in 2013 issued a recommendation for the handling of “incidental findings” (Green et al. 2013). These are pieces of “information (typically clinically significant and medically actionable) that is generated during a test or procedure but which does not relate to the original purpose for which the test or procedure was conducted” (Berkman, this volume, p. 200). The recommendation was that labs should actively search “for a ‘minimum list’ of variants that predispose patients to risk for disorders that ‘would likely have medical benefit for the patients and families of patients undergoing clinical sequencing’ ” (Berkman, this volume, p. 202). But how could such an active search be aligned with a right not to know? As Berkman describes, the ACMG Working Group controversially argued against soliciting patient preferences on receiving (or not receiving) incidental findings. In other words, patients would no longer be given the choice not to learn about clinically important and actionable findings. Berkman (this volume, p. 213) lays out the grounds for this recommendation as follows:

It is a vexing problem to possess genetic information that one deems to be clinically important, but to be precluded from disclosing it because a patient has exercised their RNTK. These medical professionals are apt to experience what we can colloquially call the “I-can’t-sleep-at-night” problem. More technically, they are experiencing a phenomenon known as moral distress.

Without wanting to downplay the physicians’ distress, we see an irony in the Working Group’s hard-nosed paternalistic recommendation: namely, the privilege not to know was to be transferred from one stakeholder to another.


Under the status quo, it was the patient’s right not to be informed about incidental findings; the physician had to carry the potentially distressing burden of knowing a patient’s genetic predispositions and risks. Had the new recommendation been put into effect, physicians would have been granted the right not to know patients’ preferences. The ACMG eventually retreated from the Working Group’s recommendation in response to criticism within the research ethics community. As this complex negotiation over who should be accorded the privilege not to know illustrates, deliberate ignorance is frequently and intricately intertwined with power (see also Ellerbrock and Hertwig, this volume) as well as with the delegation of responsibility. Knowledge is power—but so can be the right not to know.

To conclude, an individual’s choice to ignore information rarely affects just their own well-being. Externalities are a powerful justification for third-party intervention, yet caution is warranted here. In societies that have a universal health care system, for example, even decisions about individual health will eventually impact everybody else, as everyone shoulders the costs of the health service. However, if such externalities, which only come into being in the first place due to institutional intervention, are seen as sufficient grounds to condemn others’ information preferences and behaviors, the right to intervene will become pervasive. The more severely third parties are affected, the more consideration may be given to interventions that make it harder for an individual (not) to generate, retrieve, or use an influential piece of information.

Should Certain Preferences Be Ignored?

The concern about externalities is utilitarian. The policy makers’ concern is that individuals might increase their personal well-being at the expense of inflicting disutility on others. This would be inefficient, as total welfare would be smaller than it could be. Critically, from this normative perspective, the goal is to fulfill as many individual wishes as is feasible, given the resources the economy can muster. In the textbook version of the argument, preferences are defined narrowly as the willingness to pay for goods or services. The global optimum is reached if the wishes of those with the highest willingness to pay are fulfilled. Conceptually, the apparatus of welfare theory can also be applied if utility is not equated with profit. A subbranch of economics works on such extensions and calls for social preferences (e.g., care for the well-being of others) to be integrated into the otherwise narrow willingness-to-pay preference functions. Bierbrauer (this volume) convincingly argues that applying welfare theory to such more broadly defined preferences can lead to repugnant outcomes. For instance, if a person cares about the material well-being of another, but not vice versa, both will be better off if the government takes money from the caring person and gives it to the other one. Thus, the most socially minded get the worst deal. This result can be avoided only if policy makers deliberately ignore the possibility that some members of society might hold social preferences and respond to motives other than self-interest when making decisions. Deliberate ignorance, therefore, has a place at the heart of normative utilitarian theory.


The concern about preferences that normative theory should ignore reaches even further. Happiness research has produced a body of evidence suggesting that a person’s state of affective well-being is highly adaptive. Individuals adjust surprisingly quickly when circumstances deteriorate, often bouncing back to the same happiness level as before (Frederick and Loewenstein 1999). Hedonic adaptation is a good thing. It allows people to lead a meaningful life even under dire circumstances. Yet if policy makers were to strive simply to maintain the level of happiness, they would have carte blanche. As happiness normally reverts quickly to its original level, policy makers could ignore the harms that their interventions inflict on citizens and focus instead on enriching themselves or furthering the political goals of their clientele. To produce normatively acceptable decisions, welfare theory must deliberately ignore some preferences (Bierbrauer, this volume).

Should Deliberate Ignorance Be Assessed Solely in Terms of Its Consequences?

Society gives academics freedom and finances the scientific enterprise because it embraces the ideals of enlightenment. But is uncovering the secrets of life invariably good under all circumstances? There have always been conflicting normative claims. Creationists argue that it is a sin to investigate the Darwinian origins of life, contending that only the Bible holds such answers. The current debate over using CRISPR technology to edit the human genome centers on human dignity as a limitation on scientific investigation (see, e.g., Brokowski and Adli 2019). These concerns are deontological.

Deontological and consequentialist theories are disconnected as a matter of principle. Deontological theories argue from first normative principles, such as Kant’s categorical imperative: “Act only according to that maxim whereby you can at the same time will that it should become a universal law” (Kant 1785/1993, 4:421). By contrast, consequentialist theories hold that only the consequences of one’s acts are the basis for judging their moral rightness. Utilitarian theories are a subgroup of consequentialist theories in that they define welfare to be the ultimate goal. Scarce resources are to be used in the most productive way. One important bridge has, however, been built in the debate between deontology and utilitarianism. On deontological grounds, it can be argued that rules should be followed at all times (“rules are rules”): it would be immoral to break a rule that has been legitimately established. Rule utilitarians argue that the same norm can also be established on utilitarian grounds if a multi-period framework is adopted: over time, everybody is best off if legitimate rules are followed. This creates order and saves the inefficient transaction costs of sanctioning rule violations (Hooker 2016). By analogy, one might ask whether there is such a thing as information utilitarianism or, more broadly, information consequentialism.


To the extent that society is better off, at least in the long run, if information is generated, retrieved, and used, utilitarian or consequentialist theorists could require that information always be generated, retrieved, and used, irrespective of the immediate benefit. Of course, there could also be rule-consequentialist grounds not to generate, retrieve, and use information. Blind auditions, for example, may fall under this category.

Institutional Implications: How to Avert or Foster Deliberate Ignorance

From a normative perspective, preventing undesirable deliberate ignorance is no less relevant than is enabling desirable deliberate ignorance. The central goal of institutional design, however, is to prevent socially undesirable behavior. Most (formal and informal) institutions have been set up to combat pervasive undesirable behaviors rather than to facilitate desirable ones. From command-and-control regulation to disincentives, from moral suasion to nudges, there is a whole panoply of tried-and-tested tools for discouraging individuals from engaging in undesirable behaviors. Combatting unwanted deliberate ignorance is not in any principled way different from standard normative concerns, such as fighting pollution or speeding.

By contrast, the enabling function of institutional intervention is well understood only for the core of a market economy. Specifically, within the market economy, property rights define and standardize the object of trade, and contracts make trade possible. Could these standard techniques also enable socially desirable deliberate ignorance? This is not obvious. If a Homo ignorans desires to remain ignorant of something, it is crucial that nobody relays relevant information to him. If more than one person becomes aware of the information of which a person desires to remain ignorant, this preference will only be protected as long as all who do or might know refrain from informing him. Homo ignorans’s preference thus creates a one-to-many relationship. This makes it difficult for Homo ignorans to turn his interest not to know into an object of trade. He would have to strike a deal with all potential informants.

Formal versus Informal Institutions

Legal institutions are formal, in the sense that they are explicitly designed and, if needed, explicitly enforced. For some instances of deliberate ignorance, this formality may be desirable. If policy makers want to prevent a firm from remaining deliberately ignorant about the harmful effects of a production technology on the environment, they may want to force the firm to clear the procedure with an environmental agency before it starts producing. If the agent of deliberate ignorance is an individual consumer, however, legal intervention may at most be an institutional backstop. Unless the legal rule is mirrored by a sufficiently powerful social norm, protection is likely to be imperfect.


The Role of Education

An educational approach is particularly appealing when the goal is to enable behavior. The normatively desirable reaction will rarely consist of never or always generating, retrieving, or using information. Rather, individuals should ideally be empowered to discriminate between contexts and issues in which having more information is better and those in which it is better to refrain from accessing information. In Hertwig and Engel (2016), we gave the example that (social) media and Internet platforms have become experts in designing mental stimulants that usurp users’ attention. In an informationally obesogenic environment, citizens are threatened with loss of agency over how much of their attention they allocate, and to what. One of the most important goals for future school and adult education may be to equip students with the competence to discern good from worthless information, and to detect and reject the relentless attempts to hijack their limited attentional resources.

The Role of Digital Technology

Digital technology may also offer an intriguing solution precisely because it is embedded in (computer) code. As Lessig (2009) noted, code is law. Social norms and legal rules are never perfectly enforced; there is always an implementation gap resulting from neglect, resistance, or a lack of enforcement. By contrast, computer code is self-enforcing. If a piece of information is not to be accessible (and provided that it has not yet been duplicated), a single line of code can make it disappear. Likewise, if there is concern that people might avoid a piece of information they ought to see, a few lines of code can not only make sure that they receive the information but also document exactly when they received it.

When deliberate ignorance is implemented by code, normative conflicts that can otherwise be kept hidden become manifest. For example, antidiscrimination law prohibits discrimination on grounds of race or gender, but as long as decision makers do not openly justify their choices based on either category, it is difficult to prove that they have engaged in discrimination. An electronic decision tool can be programmed to purge a data set of informative correlations with gender or race. Although highly effective in preventing discrimination, this intervention may also reduce prediction accuracy; the more it does, the more the normatively undesirable behavior is actually correlated with gender or race. Intervention by code thus forces an open discussion of this trade-off: society must decide how high a price it is willing to pay not to discriminate.
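A minimal sketch of what such purging could look like follows: each feature is regressed on the protected attribute and replaced by its residual, which carries no linear information about that attribute. The data and variable names are invented, and real systems would use far more careful, validated methods.

```python
# A minimal sketch of purging a data set of linear correlations with a
# protected attribute, as described above. Data and variable names are
# invented; production systems use far more careful, validated methods.

def mean(xs):
    return sum(xs) / len(xs)

def residualize(feature, protected):
    """Remove the part of `feature` linearly explained by `protected`."""
    f_bar, p_bar = mean(feature), mean(protected)
    cov = sum((f - f_bar) * (p - p_bar) for f, p in zip(feature, protected))
    var = sum((p - p_bar) ** 2 for p in protected)
    beta = cov / var  # slope of the feature on the protected attribute
    # Residuals: the feature minus its linear prediction from `protected`.
    return [f - (f_bar + beta * (p - p_bar)) for f, p in zip(feature, protected)]

# Made-up example: a feature that happens to correlate with a binary
# protected attribute.
protected = [0, 0, 0, 1, 1, 1]
feature   = [2.0, 3.0, 2.5, 5.0, 6.0, 5.5]

cleaned = residualize(feature, protected)

def covariance_with_protected(xs):
    x_bar, p_bar = mean(xs), mean(protected)
    return sum((x - x_bar) * (p - p_bar) for x, p in zip(xs, protected))

print(f"covariance before: {covariance_with_protected(feature):+.3f}")
print(f"covariance after:  {covariance_with_protected(cleaned):+.3f}")  # ~ 0
# The cleaned feature carries no linear information about the protected
# attribute, but it may also predict the outcome less accurately, which is
# exactly the trade-off discussed above.
```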

Deliberate Ignorance: A Wisdom Call

According to Kant (1784), “Enlightenment is man’s emergence from his self-imposed immaturity.”


A self-determined life, grounded in knowledge and understanding, is certainly desirable. Quite often, making the best use of the available knowledge enables people to live a meaningful life. Yet, as the contributions to this volume demonstrate, more knowledge and information are not always desirable, and deliberate ignorance cannot simply be equated with self-imposed immaturity. Individuals and societies may have good reason not to generate, acquire, access, disseminate, or use knowledge and information, even if doing so would be feasible and affordable. There are contexts and conditions under which it is better to remain deliberately ignorant. Striking a balance between the liberating and enlightening effects of knowledge and the beneficial effects of self-imposed ignorance requires individual, collective, and institutional wisdom.

Bibliography

Note: Numbers in square brackets denote the chapter in which an entry is cited.

Abbott, A. 2010. Varieties of Ignorance. Am. Sociol. 41:174–189. [1]
Abele, A. E., and B. Wojciszke. 2014. Communal and Agentic Content in Social Cognition: A Dual Perspective Model. Adv. Exp. Soc. Psychol. 50:195–255. [14]
Abelson, R. P. 1986. Beliefs Are Like Possessions. J. Theory Soc. Behav. 16:223–250. [14]
Adorno, T. W. 1977. Was bedeutet: Aufarbeitung der Vergangenheit. In: Kulturkritik und Gesellschaft II, pp. 555–572. Frankfurt: Suhrkamp. [2]
Adorno, T. W., and M. Horkheimer. 2002. Dialectic of Enlightenment: Philosophical Fragments (Originally Published in 1944). Cultural Memory in the Present, M. Bal and H. de Vries, series ed. Stanford: Stanford Univ. Press. [14]
Agan, A., and S. Starr. 2018. Ban the Box, Criminal Records, and Racial Discrimination: A Field Experiment. Q. J. Econ. 133:191–235. [4, 16]
Ainslie, G., and N. Haslam. 1992. Hyperbolic Discounting. In: Choice over Time, ed. G. Loewenstein and J. Elster, pp. 57–92. New York: Russell Sage Foundation. [14]
Akerlof, G. A. 1970. The Market for “Lemons”: Quality Uncertainty and the Market Mechanism. Q. J. Econ. 84:488–500. [3]
Akerlof, G. A., and W. T. Dickens. 1982. The Economic Consequences of Cognitive Dissonance. Am. Econ. Rev. 72:307–319. [1, 8]
Alesina, A., S. Stantcheva, and E. Teso. 2018. Intergenerational Mobility and Preferences for Redistribution. Am. Econ. Rev. 108:521–554. [11]
Allcott, H., and M. Gentzkow. 2017. Social Media and Fake News in the 2016 Election. J. Econ. Perspect. 31:211–236. [7, 15]
Allcott, H., B. B. Lockwood, and D. Taubinsky. 2019. Regressive Sin Taxes, with an Application to the Optimal Soda Tax. Q. J. Econ. 134:1557–1626. [11]
Allport, F. H. 1924. The Group Fallacy in Relation to Social Science. Am. J. Sociol. 29:688–706. [14]
Altheide, D. L., and J. N. Grimes. 2005. War Programming: The Propaganda Project and the Iraq War. Sociol. Q. 46:617–643. [7]
Anderson, A. A., D. Brossard, D. A. Scheufele, M. A. Xenos, and P. Ladwig. 2013. The “Nasty Effect”: Online Incivility and Risk Perceptions of Emerging Technologies. J. Comput. Mediat. Commun. 19:373–387. [7]
Anderson, J. R., M. Matessa, and C. Lebiere. 1997. ACT-R: A Theory of Higher Level Cognition and Its Relation to Visual Attention. Hum. Comp. Interac. 12:439–462. [6]
Anderson, J. R., and R. Milson. 1989. Human Memory: An Adaptive Perspective. Psychol. Rev. 96:703–719. [6, 10]
Anderson, K. 2018. Collective Crimes, Collective Memory, and Transitional Justice in Bangladesh. In: Understanding the Age of Transitional Justice: Crimes, Courts, Commissions, and Chronicling, ed. N. Adler, pp. 213–236. New Brunswick: Rutgers Univ. Press. [2]
Anderson, M. C., and C. Green. 2001. Suppressing Unwanted Memories by Executive Control. Nature 410:366–369. [1]
Andorno, R. 2004. The Right Not to Know: An Autonomy Based Approach. J. Med. Ethics 30:435–439. [12, 16]
Andreoni, J., J. M. Rao, and H. Trachtman. 2017. Avoiding the Ask: A Field Experiment on Altruism, Empathy, and Charitable Giving. J. Polit. Economy 125:625–653. [10]
Andries, M., and V. Haddad. 2017. Information Aversion. NBER Work Pap. Ser. 17: [8]


Anter, A., ed. 2004. Die normative Kraft des Faktischen: Das Staatsverständnis Georg Jellineks. Baden-Baden: Nomos Verlag. [14]
Aquinas, T. 1265–1274. Summa Theologica I–II (transl. by Fathers of the English Dominican Province, 1942). London: Burns Oates & Washbourne. [16]
Arendt, H. 1951. The Origins of Totalitarianism. New York: Schocken Books. [7]
Arkes, H. R., G. Gigerenzer, and R. Hertwig. 2016. How Bad Is Incoherence? Decision 3:20–39. [1, 13, 14]
Arnold, A. 2018. Trump Let Kavanaugh Know You’re Sorry for All He’s Been Through. The Cut, Oct. 9, 2018. [7]
Arrow, K. J. 1963. Social Choice and Individual Values. New Haven: Yale Univ. Press. [13]
Arsenault, A., and M. Castells. 2006. Conquering the Minds, Conquering Iraq: The Social Production of Misinformation in the United States: A Case Study. Inform. Commun. Soc. 9:284–307. [7]
Asehnoune, K., P. Albaladejo, N. Smail, et al. 2000. Information and Anesthesia: What Does the Patient Desire? Ann. Fr. Anesth. Reanim. 19:577–581. [16]
Ash, T. G. 1997. The File: A Personal History. London: HarperCollins. [2]
Aslund, O., and O. N. Skans. 2012. Do Anonymous Job Application Procedures Level the Playing Field? Ind. Lab. Relat. Rev. 65:82–107. [4]
Assaf, K. 2016. Buying Goods and Doing Good: Trademarks and Social Competition. Alabama Law Rev. 67:980–1016. [16]
Assmann, A. 2008. Canon and Archive. In: Cultural Memory Studies: An International and Interdisciplinary Handbook, ed. A. Erll and A. Nünning, pp. 97–108, Media and Cultural Memory [Medien und Kulturelle Erinnerung]. Berlin: de Gruyter. [2]
———. 2016a. Formen des Vergessens [Forms of Forgetting]. Historische Geisteswissenschaften, Frankfurter Vorträge, B. Jussen and S. Scholz, series eds. Göttingen: Wallstein Verlag. [2]
———. 2016b. Shadows of Trauma: Memory and the Politics of Postwar Identity (transl. S. Clift). New York: Fordham Univ. Press. [2]
Assmann, A., and S. Conrad, eds. 2010. Memory in a Global Age: Discourses, Practices and Trajectories. Palgrave Macmillan Memory Studies. Basingstoke: Palgrave Macmillan. [2]
Assmann, A., and U. Frevert. 1999. Geschichtsvergessenheit, Geschichtsversessenheit: Vom Umgang mit deutschen Vergangenheiten nach 1945 [Historical Oblivion, Historical Obsession: Dealing with Germany’s Past after 1945]. Stuttgart: Deutsche Verlags-Anstalt. [2]
Atkinson, A., and J. Stiglitz. 1976. The Design of Tax Structure: Direct versus Indirect Taxation. J. Public Econ. 6:55–75. [11]
Austad, T. 1996. The Right Not to Know: Worthy of Preservation Any Longer? An Ethical Perspective. Clin. Genet. 50:85–88. [12]
Avraham, R., K. D. Logue, and D. Schwarcz. 2014. Understanding Insurance Anti-Discrimination Laws. S. California Law Rev. 87:195–274. [16]
Axelrod, R. 1997. The Dissemination of Culture: A Model with Local Convergence and Global Polarization. J. Conflict Res. 41:203–226. [10]
Babcock, L., G. Loewenstein, S. Issacharoff, and C. Camerer. 1995. Biased Judgments of Fairness in Bargaining. Am. Econ. Rev. 85:1337–1343. [3]
Bail, C. A., L. P. Argyle, T. W. Brown, et al. 2018. Exposure to Opposing Views on Social Media Can Increase Political Polarization. PNAS 115:9216–9221. [17]
Bakir, V., E. Herring, D. Miller, and P. Robinson. 2018. Organized Persuasive Communication: A New Conceptual Framework for Research on Public Relations, Propaganda and Promotional Culture. Crit. Sociol. 54:311–328. [7]


Bar-On, D. 1993. Die Last des Schweigens: Gespräche mit Kindern von Nazi-Tätern [The Burden of Silence: Conversations with Children of Nazi Perpetrators]. Frankfurt: Campus Verlag. [2]
Batterman, R. W. 2006. The Devil Is in the Details: Asymptotic Reasoning in Explanation, Reduction and Emergence. Oxford Studies in Philosophy of Science, P. Humphreys, series ed. New York: Oxford Univ. Press. [13]
Baum, M. A., and T. Groeling. 2010. Reality Asserts Itself: Public Opinion on Iraq and the Elasticity of Reality. Intl. Organ. 64:443–479. [7]
Bear, A., and D. G. Rand. 2016. Intuition, Deliberation, and the Evolution of Cooperation. PNAS 113:936–941. [9]
———. 2019. Can Strategic Ignorance Explain the Evolution of Love? Top. Cogn. Sci. 11:393–408. [9]
Beck, U., A. Giddens, and S. Lash. 1996. Reflexive Modernisierung: Eine Kontroverse [Reflexive Modernization: A Controversy]. Frankfurt: Suhrkamp. [2]
Becker, J., D. Brackbill, and D. Centola. 2017. Network Dynamics of Social Influence in the Wisdom of Crowds. PNAS 114:E5070–E5076. [14]
Behaghel, L., B. Crepon, and T. Le Barbanchon. 2015. Unintended Effects of Anonymous Résumés. Am. Econ. J. App. Econ. 7:1–27. [4]
Behr, H. 2017. The Populist Obstruction of Reality: Analysis and Response. Glob. Aff. 3:73–80. [7]
Bell, V. 2010. Don’t Touch That Dial! A History of Media Technology Scares, from the Printing Press to Facebook. https://slate.com/technology/2010/02/a-history-of-media-technology-scares-from-the-printing-press-to-facebook.html. (accessed Jan. 13, 2020). [1]
Bellarmine, R. 1989. The Art of Dying Well. In: Spiritual Writings, ed. J. D. Donnelly and R. J. Teske, pp. 235–386. Mahwah, NJ: Paulist Press. [1]
Bem, D. J. 1967. Self-Perception: An Alternative Interpretation of Cognitive Dissonance Phenomena. Psychol. Rev. 74:183–200. [8]
Bénabou, R. 2013. Groupthink: Collective Delusions in Organizations and Markets. Rev. Econ. Stud. 80:429–462. [8]
Bénabou, R., and J. Tirole. 2002. Self-Confidence and Personal Motivation. Q. J. Econ. 117:871–915. [10]
———. 2006a. Belief in a Just World and Redistributive Politics. Q. J. Econ. 121:699–746. [11]
———. 2006b. Incentives and Prosocial Behavior. Am. Econ. Rev. 96:1652–1678. [9]
———. 2011. Identity, Morals, and Taboos: Beliefs as Assets. Q. J. Econ. 126:805–855. [8]
Benartzi, S., and R. H. Thaler. 1995. Myopic Loss Aversion and the Equity Premium Puzzle. Q. J. Econ. 110:73–92. [5]
Bender, B., and A. Hanna. 2016. Flynn under Fire for Fake News. Politico. [7]
Bengson, J., and M. A. Moffett. 2012. Two Conceptions of Mind and Action. In: Knowing How: Essays on Knowledge, Mind and Action, ed. J. Bengson and M. A. Moffett. New York: Oxford Univ. Press. [13]
Benjamin, W. 2004. The Work of Art in the Age of Mechanical Reproduction. In: Literary Theory: An Anthology, ed. J. Rivkin and M. Ryan, pp. 1235–1241. Malden: Blackwell. [7]
Ben-Shahar, O., and C. E. Schneider. 2014. More Than You Wanted to Know: The Failure of Mandated Disclosure. Princeton: Princeton Univ. Press. [16]
Berger, P. L., and T. Luckmann. 1991. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. Penguin Social Sciences. London: Penguin. [7, 17]


Berkman, B. E., and S. C. Hull. 2014. The “Right Not to Know” in the Genomic Era: Time to Break from Tradition? Am. J. Bioethics 14:28–31. [12]
Berman, M. N. 2004. Constitutional Decision Rules. VA Law Rev. 90:1–168. [16]
Berns, G. S. 2006. Neurobiological Substrates of Dread. Science 312:754–758. [1]
Birthler, M. 2014. Halbes Land: ganzes Land: ganzes Leben: Erinnerungen [Half Land: Whole Land: Whole Life: Memories]. Munich: Hanser Berlin. [2]
Blackwell, D. 1953. Equivalent Comparisons of Experiments. Ann. Mathemat. Stat. 24:265–272. [1]
Blank, R. M. 1991. The Effects of Double-Blind versus Single-Blind Reviewing: Experimental Evidence from the American Economic Review. Am. Econ. Rev. 81:1041–1067. [4]
Bloch, M., and M. R. Hayden. 1990. Predictive Testing for Huntington Disease in Childhood: Challenges and Implications. Am. J. Hum. Genet. 46:1–4. [1]
Bock, P. 1999. Vergangenheitspolitik in der Revolution von 1989 [Politics of the Past in the Revolution of 1989]. In: Umkämpfte Vergangenheit [A Contested Past], ed. P. Bock and E. Wolfrum, pp. 82–100. Göttingen: Vandenhoeck and Ruprecht. [2]
Boddie, E. C. 2018. A Damaging Bid to Censor Applications at Harvard. New York Times, Oct. 10, 2018. [4]
Boghossian, P. A. 2006. Fear of Knowledge: Against Relativism and Constructivism. Oxford: Oxford Univ. Press. [7]
Bohn, R. E., and J. E. Short. 2009. How Much Information? 2009 Report on American Consumers. Global Information Industry Center, San Diego. http://hmi.ucsd.edu/howmuchinfo.php. (accessed Jan. 13, 2020). [1]
Bolton, G. E., and A. Ockenfels. 2000. ERC: A Theory of Equity, Reciprocity, and Competition. Am. Econ. Rev. 90:166–193. [9]
Booß, C., and H. Müller-Enbergs. 2014. Die indiskrete Gesellschaft: Studien zum Denunziationskomplex und zu inoffiziellen Mitarbeitern [The Indiscreet Society: Studies on the Denunciation Complex and Unofficial Employees]. Frankfurt: Verlag für Polizeiwissenschaft. [2]
Borges, B., D. G. Goldstein, A. Ortmann, and G. Gigerenzer. 1999. Can Ignorance Beat the Stock Market? In: Simple Heuristics That Make Us Smart, ed. G. Gigerenzer et al., pp. 59–72. New York: Oxford Univ. Press. [10]
Bortolotti, L. 2012. The Relative Importance of Undesirable Truths. Med. Health Care Phil. 16:683–690. [1]
Boskin, M. J., and E. Sheshinski. 1983. Optimal Tax Treatment of the Family: Married Couples. J. Public Econ. 20:281–297. [11]
Bottis, M. C. 2000. Comment on a View Favoring Ignorance of Genetic Information: Confidentiality, Autonomy, Beneficence and the Right Not to Know. Eur. J. Health Law 7:173–183. [12]
Boyd, R., and P. J. Richerson. 1985. Culture and the Evolutionary Process. Chicago: Univ. of Chicago Press. [5]
Braithwaite, J. 1992. Crime, Shame and Reintegration. Cambridge: Cambridge Univ. Press. [2]
Brennan, T. A., C. M. Sox, and H. R. Burstein. 1996. Relation between Negligent Adverse Events and the Outcomes of Medical-Malpractice Litigation. N. Engl. J. Med. 335:1963–1967. [16]
Brink, D. 2014. Mill’s Moral and Political Philosophy. Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/fall2014/entries/mill-moral-political/. (accessed Mar. 18, 2020). [1]


Broadstock, M., S. Michie, and T. Marteau. 2000. Psychological Consequences of Predictive Genetics Testing: A Systematic Review. Eur. J. Hum. Genet. 8:731–738. [12]
Brokowski, C., and M. Adli. 2019. CRISPR Ethics: Moral Considerations for Applications of a Powerful Tool. J. Molec. Biol. 431:88–101. [17]
Broniatowski, D. A., A. M. Jamison, S. Qi, et al. 2018. Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate. Am. J. Public Health 108:1378–1384. [7]
Broun, K. S., G. E. Dix, E. J. Imwinkelried, et al., eds. 2013. McCormick on Evidence, 7th ed. St. Paul: West Publishing. [16]
Bruni, F. 2019. Will the Media Be Trump’s Accomplice Again in 2020? We Have a Second Chance. Let’s Not Blow It. New York Times, Jan. 11, 2019. [7]
Brunnermeier, M. K., F. Papakonstantinou, and J. A. Parker. 2017. Optimal Time-Inconsistent Beliefs: Misplanning, Procrastination, and Commitment. Manage. Sci. 63:1271–1656. [8]
Brunnermeier, M. K., and J. A. Parker. 2005. Optimal Expectations. Am. Econ. Rev. 95:1092–1118. [8, 10]
Brunsson, N. 1989. The Organization of Hypocrisy: Talk, Decisions, and Actions in Organizations. New York: Wiley. [15]
Buchak, L. 2014. Belief, Credence, and Norms. Phil. Stud. 169:1–27. [16]
Buckley-Zistel, S. 2014. Narrative Truths: On the Construction of the Past in Truth Commissions. In: Transitional Justice Theories, ed. S. Buckley-Zistel et al., pp. 144–162, Transitional Justice, K. McEvoy, series ed. Abingdon: Routledge. [2]
Buckley-Zistel, S., and S. Schäfer, eds. 2014. Memorials in Times of Transition, Series on Transitional Justice, vol. 16. Cambridge: Intersentia. [2]
Budden, A. E., T. Tregenza, L. W. Aarssen, et al. 2008. Double-Blind Review Favours Increased Representation of Female Authors. Trend. Ecol. Evol. 23:4–6. [4]
Buddensiek, L. 2017. Ein Recht auf Einsicht? Die Debatte um den Zugang zur “Eigenen” Stasi-Akte [A Right to View? The Debate over Access to “One’s Own” Stasi File]. In: Deutschland seit 1990: Wege in die Vereinigungsgesellschaft [Germany Since 1990: Paths to a Reunified Society], ed. T. Großbölting and C. Lorke, pp. 225–241, Nassauer Gespräche der Freiherr-Vom-Stein-Gesellschaft, vol. 10. Stuttgart: Franz Steiner Verlag. [2]
Bulman, M. 2016. Donald Trump “Using Hamilton Controversy to Distract from $25m Fraud Settlement and Other Scandals.” The Independent, November 25. [7]
Burke, W., A. H. M. Antommaria, R. Bennett, et al. 2013. Recommendations for Returning Genomic Incidental Findings? We Need to Talk! Genet. Med. 15:854–859. [12]
Burnett, J. 2010. Generations: The Time Machine in Theory and Practice. Surrey: Ashgate Press. [2]
Cain, D. M., G. Loewenstein, and D. A. Moore. 2005. Coming Clean but Playing Dirtier: The Shortcomings of Disclosure as a Solution to Conflicts of Interest. In: Conflicts of Interest: Challenges and Solutions in Business, Law, Medicine and Public Policy, ed. D. A. Moore et al., pp. 104–125. Cambridge: Cambridge Univ. Press. [4]
Calabresi, G., and P. Bobbitt. 1978. Tragic Choices: The Conflicts Society Confronts in the Allocation of Tragically Scarce Resources. New York: W.W. Norton. [15]
Callahan, D. 1989. Can We Return Death to Disease? Hastings Center Report 19:4–6. [14]
Campbell, D. T. 1969. Reforms as Experiments. Am. Psychol. 24:409–429. [14]
Caplin, A., and J. Leahy. 2001. Psychological Expected Utility Theory and Anticipatory Feelings. Q. J. Econ. 116:55–79. [8, 10]


Caplin, A., and J. Leahy. 2004. The Supply of Information by a Concerned Expert. Econ. J. 114:487–505. [8, 10]
Carnap, R. 1947. On the Application of Inductive Logic. Philos. Phenomenol. Res. 8:133–148. [1]
Carrillo, J. D. 2005. To Be Consumed with Moderation. Eur. Econ. Rev. 49:99–111. [8]
Carrillo, J. D., and T. Mariotti. 2000. Strategic Ignorance as a Self-Disciplining Device. Rev. Econ. Stud. 67:529–544. [1, 5, 8, 10, 14]
Carroll, A. E. 2018. Peer Review: The Worst Way to Judge Research, except for All the Others: A Look at the System’s Weaknesses, and Possible Ways to Combat Them. New York Times. [4]
Carstensen, L. L. 2006. The Influence of a Sense of Time on Human Development. Science 312:1913–1915. [1]
Carstensen, L. L., D. M. Isaacowitz, and S. T. Charles. 1999. Taking Time Seriously: A Theory of Socioemotional Selectivity. Am. Psychol. 54:165–181. [1]
Carter, J. A., A. Clark, J. Kallestrup, S. O. Palermos, and D. Pritchard, eds. 2018. Extended Epistemology. New York: Oxford Univ. Press. [13]
Carter, J. R., and M. D. Irons. 1991. Are Economists Different, and If So, Why? J. Econ. Perspect. 5:171–177. [3]
Cartwright, N. 1983. How the Laws of Physics Lie. Oxford: Oxford Univ. Press. [13]
Case, D. O., J. E. Andrews, J. D. Johnson, and S. L. Allard. 2005. Avoiding versus Seeking: The Relationship of Information Seeking to Avoidance, Blunting, Coping, Dissonance, and Related Concepts. J. Med. Lib. Ass. 93:353–362. [1]
Ceci, S. J., and M. L. Huffman. 1997. How Suggestible Are Preschoolers? Cognitive and Social Factors. J. Am. Acad. Child Adolesc. Psychiatry 36:948–958. [6]
Centola, D., J. C. Gonzales-Avella, V. M. Eguiluz, and M. San Miguel. 2007. Homophily, Cultural Drift and the Coevolution of Cultural Groups. J. Conflict Res. 51:905–929. [10]
Cerabino, F. 2018. Trump Calls Palm Beach Post “Fake News” in Gas-Fueled Grievance. Palm Beach Post, Nov. 28. [7]
Chambers, S. 2018. Human Life Is Group Life: Deliberative Democracy for Realists. Crit. Rev. 30:36–48. [7]
Charité, J., R. Fisman, and I. Kuziemko. 2015. Reference Points and Redistributive Preferences: Experimental Evidence. National Bureau of Economic Research. https://ideas.repec.org/p/nbr/nberwo/21009.html. (accessed Sep. 5, 2019). [11]
Charlow, R. 1992. Wilful Ignorance and Criminal Culpability. Texas Law Rev. 70:1351–1429. [16]
Charman, P. 2017. Mail Online’s Katie Hopkins: “There Is No Such Thing as Fact Any More... There Is No Truth.” PressGazette, October 12, 2017. [7]
Chetty, R., A. Looney, and K. Kroft. 2009. Salience and Taxation: Theory and Evidence. Am. Econ. Rev. 99:1145–1177. [11]
Chew, S. H., and J. L. Ho. 1994. Hope: An Empirical Study of Attitude toward the Timing of Uncertainty Resolution. J. Risk Uncertainty 8:267–288. [8]
Chomsky, N. 2016. Who Rules the World? New York: Henry Holt. [2]
Cicero, M. T. 1913. The Orations of Marcus Tullius Cicero (transl. C. D. Yonge), vol. IV. London: G. Bell and Sons. [2]
Clark, A., and D. Chalmers. 1998. The Extended Mind. Analysis 58:7–19. [13]
Coase, R. H. 1960. The Problem of Social Cost. J. Law Econ. 3:1–44. [11]
Cohen, L. J. 1977. The Probable and the Provable. Oxford: Oxford Univ. Press. [13]
———. 1981. Can Human Irrationality Be Experimentally Demonstrated? Behav. Brain Sci. 4:317–370. [13]


Cohen, S. 2001. States of Denial: Knowing About Atrocities and Suffering. Cambridge: Polity Press. [1]
Colasacco, B. 2018. Before Trump: On Comparing Fascism and Trumpism. J. Stud. Radical. 12:27–53. [7]
Colvin, E. 1995. Corporate Personality and Criminal Liability. Crim. Law Forum 6:1–44. [16]
Connerton, P. 2008. Seven Types of Forgetting. Mem. Stud. 1:59–71. [2]
Conrads, J., and B. Irlenbusch. 2013. Strategic Ignorance in Ultimatum Bargaining. J. Econ. Behav. Org. 92:104–115. [1, 3]
Cooksey, R. W. 1996. Judgment Analysis: Theory, Methods, and Applications. San Diego: Academic Press. [4]
Cooley, E., B. K. Payne, W. Cipolli, et al. 2017. The Paradox of Group Mind: “People in a Group” Have More Mind Than “A Group of People.” J. Exp. Psychol. 146:691–699. [14]
Cooter, R., and T. Ulen. 2008. Law and Economics (5th ed.). Boston: Pearson. [1]
Craik, N. 2008. The International Law of Environmental Impact Assessment: Process, Substance and Integration. Cambridge Series in International and Comparative Law, J. Crawford and J. S. Bell, series eds. Cambridge: Cambridge Univ. Press. [16]
Craker, N., and E. March. 2016. The Dark Side of Facebook®: The Dark Tetrad, Negative Social Potency, and Trolling Behaviours. Pers. Individ. Diff. 102:79–84. [7]
Cramton, R. C., G. M. Cohen, and S. P. Koniak. 2004. Legal and Ethical Duties of Lawyers after Sarbanes-Oxley. Villanova Law Rev. 49:725–831. [16]
Crawford, M. B. 2015. The World Beyond Your Head: On Becoming an Individual in an Age of Distraction. New York: Farrar, Straus and Giroux. [1, 5, 14]
Creighton, S., E. W. Almqvist, D. MacGregor, et al. 2003. Predictive, Prenatal and Diagnostic Genetic Testing for Huntington’s Disease: The Experience in Canada from 1987 to 2000. Clin. Genet. 63:462–475. [1]
Crémer, J. 1995. Arm’s Length Relationships. Q. J. Econ. 110:275–295. [1, 3]
Crupi, V., J. D. Nelson, B. Meder, G. Cevolani, and K. Tentori. 2018. Generalized Information Theory Meets Human Cognition: Introducing a Unified Framework to Model Uncertainty and Information Search. Cogn. Sci. 42:1410–1456. [14]
Cuddihy, J. M. 1974. The Ordeal of Civility: Freud, Marx, Levi-Strauss, and the Jewish Struggle with Modernity. New York: Dell Publishing Company. [5]
———. 1978. No Offense: Civil Religion and Protestant Taste. New York: Seabury Press. [5]
Culloty, E., and J. Suiter. 2018. Journalism Norms and the Absence of Media Populism in the Irish General Election 2016. In: Mediated Campaigns and Populism in Europe, ed. S. Salgado, pp. 51–74, Political Campaigning and Communication, D. Lilleker, series ed. Cham: Springer. [7]
Curato, N. C., J. S. Dryzek, S. A. Ercan, C. M. Hendriks, and S. Niemeyer. 2017. Twelve Key Findings in Deliberative Democracy Research. Daedalus 146:28–38. [7]
Curtis, C. 2018. Although She’s Unpopular, Theresa May Is Still the Most Liked Living PM. YouGov, Nov. 8, 2018. [7]
d’Adda, G., Y. Gao, R. Golman, and M. Tavoni. 2018. It’s So Hot in Here: Information Avoidance, Moral Wiggle Room, and High Air Conditioning Usage. FEEM Working Paper No. 07.2018. SSRN Elec. J., March 26, 2018. [10]
Dalton, C. 2016. Bullshit for You; Transcendence for Me: A Commentary on “On the Reception and Detection of Pseudo-Profound Bullshit.” Judgment Dec. Making 11:121–122. [7]


Dana, J. 2006. Strategic Ignorance and Ethical Behavior in Organizations. In: Ethics in Groups, ed. A. E. Tenbrunsel, pp. 39–57, Research on Managing Groups and Teams, vol. 8. Bingley, UK: Emerald Group. [1, 5]
———. 2008. What Makes Improper Linear Models Tick? In: Rationality and Social Responsibility: Essays in Honor of Robyn Mason Dawes, ed. J. I. Krueger, pp. 71–89, Modern Pioneers in Psychological Science: An APS Psychology Press Series. New York: Psychology Press. [14]
Dana, J., G. Loewenstein, and R. A. Weber. 2011. Ethical Immunity: How People Violate Their Own Moral Standards without Feeling They Are Doing So. In: Behavioral Business Ethics: Shaping an Emerging Field, ed. A. E. Tenbrunsel and D. De Cremer, pp. 201–219. London: Taylor & Francis. [3]
Dana, J., R. A. Weber, and J. X. Kuang. 2007. Exploiting Moral Wiggle Room: Experiments Demonstrating an Illusory Preference for Fairness. Econ. Theory 33:67–80. [1, 3, 9, 10, 14, 16]
Dan-Cohen, M. 1984. Decision Rules and Conduct Rules: On Acoustic Separation in Criminal Law. Harvard Law Rev. 97:625–677. [15]
Daston, L., and K. Park. 2001. Wonders and the Order of Nature 1150–1750. New York: Zone Books. [1]
Davies, W., and L. McGoey. 2012. Rationalities of Ignorance: On Financial Crisis and the Ambivalence of Neo-Liberal Epistemology. Econ. Soc. 41:64–83. [1]
Dawes, C. T., J. H. Fowler, T. Johnson, R. McElreath, and O. Smirnov. 2007. Egalitarian Motives in Humans. Nature 446:794–796. [9]
Dawes, R. M. 1979. The Robust Beauty of Improper Linear Models in Decision Making. Am. Psychol. 34:571–582. [14]
———. 1980. Social Dilemmas. Annu. Rev. Psychol. 31:169–193. [14]
———. 1988a. Plato versus Russell: Hoess and the Relevance of Cognitive Psychology. Relig. Human. 22:20–26. [14]
———. 1988b. Rational Choice in an Uncertain World. New York: Harcourt Brace Jovanovich. [14]
Dawson, E., T. Gilovich, and D. T. Regan. 2002. Motivated Reasoning and Performance on the Wason Selection Task. Pers. Soc. Psychol. Bull. 28:1379–1387. [5]
DellaPosta, D., Y. Shi, and M. Macy. 2015. Why Do Liberals Drink Lattes? Am. J. Sociol. 120:1473–1511. [17]
DellaVigna, S., J. A. List, and U. Malmendier. 2012. Testing for Altruism and Social Pressure in Charitable Giving. Q. J. Econ. 127:1–56. [10]
Delton, A. W., M. M. Krasnow, L. Cosmides, and J. Tooby. 2011. Evolution of Direct Reciprocity under Uncertainty Can Explain Human Generosity in One-Shot Encounters. PNAS 108:13335–13340. [9]
Denrell, J. 2005. Why Most People Disapprove of Me: Experience Sampling in Impression Formation. Psychol. Rev. 112:951–978. [14]
Der Spiegel. 1990. Menschlich bewegt: Bundesminister de Maizière gab sein Amt wegen langjähriger Stasi-Kontakte auf, die er selbst noch bestreitet. Der Spiegel 1990:20–23. [2]
Deshpandé, R., and A. K. Kohli. 1989. Knowledge Disavowal: Structural Determinants of Information-Processing Breakdown in Organizations. Sci. Commun. 11:155–169. [9]
Dessingué, A., and J. Winter. 2016. Remembering, Forgetting and Silence. In: Beyond Memory: Silence and the Aesthetics of Remembrance, ed. A. Dessingué and J. Winter, pp. 1–10, Routledge Approaches to History, vol. 13. New York: Routledge. [2]
Dhami, M. K., R. Hertwig, and U. Hoffrage. 2004. The Role of Representative Design in an Ecological Approach to Cognition. Psychol. Bull. 130:959–988. [4]


Diamond, P. A. 1975. A Many-Person Ramsey Tax Rule. J. Public Econ. 4:335–342. [11]
———. 1998. Optimal Income Taxation: An Example with a U-Shaped Pattern of Optimal Marginal Tax Rates. Am. Econ. Rev. 88:83–95. [11]
Dick, P. K. 1980. The Golden Man (first published in If Magazine, 1954), M. Hurst, series ed. New York: Berkley Books. [14]
Dickinson, D., and M.-C. Villeval. 2008. Does Monitoring Decrease Work Effort? The Complementarity between Agency and Crowding-out Theories. Games Econ. Behav. 63:56–76. [10]
Die Tageszeitung. 1990. Die CDU führt die Stasi-Liste an: Die informellen Mitarbeiter des Ministeriums für Staatssicherheit unter den Ministern und Abgeordneten in der Volkskammer. Die Tageszeitung. [2]
Dillenberger, D. 2010. Preferences for One-Shot Resolution of Uncertainty and Allais-Type Behavior. Econometrica 78:1973–2004. [8]
Dimbath, O., and P. Wehling, eds. 2011. Soziologie des Vergessens: Theoretische Zugänge und empirische Forschungsfelder. Konstanz: UVK-Verlagsgesellschaft. [2]
Dobelli, R. 2013. News Is Bad for You: And Giving up Reading It Will Make You Happier. The Guardian. [14]
Doleac, J. L., and B. Hansen. 2016. Does “Ban the Box” Help or Hurt Low-Skilled Workers? Statistical Discrimination and Employment Outcomes When Criminal Histories Are Hidden. NBER Working Paper. https://www.nber.org/papers/w22469. (accessed Sep. 25, 2019). [16]
Dorison, C. A., J. A. Minson, and T. Rogers. 2019. Selective Exposure Partly Relies on Faulty Affective Forecasts. Cognition 188:98–107. [5]
Dresdner Morgenpost. 1990. Interview mit Lothar de Maizière [Interview with Lothar de Maizière]. Dresdner Morgenpost, Sept. 15, 1990. [2]
Driver, J. 2001. Uneasy Virtue. Cambridge: Cambridge Univ. Press. [1]
Dubey, P., and C. Wu. 2001. Competitive Prizes: When Less Scrutiny Induces More Effort. J. Math. Econ. 36:311–336. [9]
Duckworth, A. L., T. S. Gendler, and J. J. Gross. 2016. Situational Strategies for Self-Control. Persp. Psychol. Sci. 11:35–55. [14]
Dunlap, R. E., and A. M. McCright. 2010. Climate Change Denial: Sources, Actors, and Strategies. In: Routledge Handbook of Climate Change and Society, ed. C. Lever-Tracy, pp. 240–259. Abingdon: Routledge. [7]
Duranti, M. 2013. Holocaust Memory and the Silences of the Human Rights Revolution. In: Schweigen: Archäologie der literarischen Kommunikation XI [Silence: Archaeology of Literary Communication vol. 11], ed. A. Assmann and J. Assmann, pp. 89–100. Munich: Wilhelm Fink Verlag. [2]
Eatwell, R. 1996. On Defining the “Fascist Minimum”: The Centrality of Ideology. J. Polit. Ideol. 1:303–319. [7]
Eckert, S. 2017. Fighting for Recognition: Online Abuse of Women Bloggers in Germany, Switzerland, the United Kingdom, and the United States. New Media Society 20:1282–1302. [7]
Eckstein, L., J. R. Garrett, and B. E. Berkman. 2014. A Framework for Analyzing the Ethics of Disclosing Genetic Research Findings. J. L. Med. Ethics 42:190–207. [12]
Edgers, G. 2018. The Star Flutist Was Paid $70,000 Less Than the Oboe Player: So She Sued. Washington Post, Dec. 11, 2018. [4]
Ehrich, K. R., and J. I. Irwin. 2005. Willful Ignorance in the Request for Product Attribute Information. J. Market. Res. 42:266–277. [16]
Einstein, K. L., and D. M. Glick. 2015. Do I Think BLS Data Are BS? The Consequences of Conspiracy Theories. Polit. Behav. 37:679–701. [7]


Elgin, C. 2004. True Enough. Phil. Issues 14:113–121. [13]
Elias, N. 1978. The Civilizing Process: The History of Manners. New York: Basil Blackwell. [5]
Eliaz, K., and R. Spiegler. 2006. Can Anticipatory Feelings Explain Anomalous Choices of Information Sources? Games Econ. Behav. 56:87–104. [1]
Elkin, E. B., S. H. M. Kim, E. S. Casper, D. W. Kissane, and S. Schrag. 2007. Desire for Information and Involvement in Treatment Decisions: Elderly Cancer Patients’ Preferences and Their Physicians’ Perceptions. J. Clinical Oncology 25:5275–5280. [16]
Ellwood, C. A. 1920. An Introduction to Social Psychology. New York: Appleton. [14]
Elster, J. 1996. Rationality and the Emotions. Econ. J. 106:1386–1397. [1]
———. 2000. Ulysses Unbound: Studies in Rationality, Precommitment, and Constraints. Cambridge: Cambridge Univ. Press. [14]
Ely, J., A. Frankel, and E. Kamenica. 2015. Suspense and Surprise. J. Polit. Economy 123:215–260. [1, 8, 14]
El Zein, M., B. Bahrami, and R. Hertwig. 2019. Shared Responsibility in Collective Decisions. Nature Hum. Behav. 3:554–559. [14]
Ende, J., L. Kazis, A. Ash, and M. A. Moskowitz. 1989. Measuring Patients’ Desire for Autonomy: Decision Making and Information-Seeking Preferences Among Medical Patients. J. Gen. Internal Med. 4:23–30. [16]
Engel, C., and W. Singer, eds. 2008. Better Than Conscious? Decision Making, the Human Mind, and Implications for Institutions, Strüngmann Forum Reports, vol. 1, J. Lupp, series ed. Cambridge, MA: MIT Press. [Preface]
Enli, G. 2017. Twitter as Arena for the Authentic Outsider: Exploring the Social Media Campaigns of Trump and Clinton in the 2016 US Presidential Election. Eur. J. Commun. 32:50–61. [7]
Epstein, L. G. 2008. Living with Risk. Rev. Econ. Stud. 75:1121–1141. [8]
Erll, A. 2011. Traumatic Pasts, Literary Afterlives, and Transcultural Memory: New Directions of Literary and Media Memory Studies. J. Aesthet. Cult. 3: [2]
Evans, A. M., and J. I. Krueger. 2011. Elements of Trust: Risk and Perspective Taking. J. Exp. Soc. Psychol. 47:171–177. [14]
Evans, J. S. B. T. 1989. Bias in Human Reasoning: Causes and Consequences. Hillsdale: Erlbaum. [17]
Fabsitz, R. R., A. McGuire, R. R. Sharp, et al. 2010. Ethical and Practical Guidelines for Reporting Genetic Research Results to Study Participants: Updated Guidelines from a National Heart, Lung, and Blood Inst. Working Group. Circulation 3:574–580. [12]
Fahrmeir, A., and A. Imhausen, eds. 2013. Die Vielfalt normativer Ordnungen: Konflikte und Dynamik in historischer und ethnologischer Perspektive, vol. 8. Frankfurt: Campus Verlag. [14]
Falk, A., and N. Szech. 2013. Morals and Markets. Science 340:707–711. [14]
———. 2019. Competing Image Concerns: Pleasures of Skill and Moral Values, Working Paper, Karlsruhe Institute of Technology. https://polit.econ.kit.edu/downloads/papers/WP_Pleasures_of_skill_Falk_Szech.pdf. (accessed Jan. 30, 2020). [14]
Falk, A., and F. Zimmermann. 2017. Consistency as a Signal of Skills. Manage. Sci. 63:2049–2395. [8]
Farhi, E., and I. Werning. 2010. Progressive Estate Taxation. Q. J. Econ. 125:635–673. [11]
Farrell, D. M., J. Suiter, and C. Harris. 2018. “Systematizing” Constitutional Deliberation: The 2016–18 Citizens’ Assembly in Ireland. Irish Polit. Stud. 34:113–123. [7]
Farrell, H., and B. Schneier. 2018. Common-Knowledge Attacks on Democracy. Cambridge, MA: Berkman Klein Center for Internet and Society at Harvard Univ.


FATF. 2012/2019. International Standards on Combating Money Laundering and the Financing of Terrorism & Proliferation. Paris: Financial Action Task Force (FATF). [16]
Fawcett, T. W., B. Fallenstein, A. D. Higginson, et al. 2014. The Evolution of Decision Rules in Complex Environments. Trend. Cogn. Sci. 18:153–161. [9]
Fechner, H. B., L. J. Schooler, and T. Pachur. 2018. Cognitive Costs of Decision-Making Strategies: A Resource Demand Decomposition with a Cognitive Architecture. Cognition 170:102–122. [5]
Fehr, E., and U. Fischbacher. 2003. The Nature of Human Altruism. Nature 425:785–791. [9]
Fehr, E., and K. Schmidt. 1999. A Theory of Fairness, Competition, and Cooperation. Q. J. Econ. 114:817–868. [9]
Fels, M. 2015. On the Value of Information: Why People Reject Medical Tests. J. Behav. Exp. Econ. 56:1–12. [1]
Ferretti, F. 2008. The Law and Consumer Credit Information in the European Community: The Regulation of Consumer Credit Systems. Abingdon: Routledge. [16]
Festinger, L. 1954. A Theory of Social Comparison Processes. Hum. Relat. 7:117–140. [8]
———. 1957. A Theory of Cognitive Dissonance. California: Row Peterson and Co. [10]
Fiedler, K., and P. Juslin, eds. 2006. Information Sampling and Adaptive Cognition. Cambridge: Cambridge Univ. Press. [8, 14]
Fiedler, K., and Y. Kareev. 2006. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers. J. Exp. Psychol. Learn. Mem. Cogn. 32:883–903. [8]
Finger, E. 2012. “Wahrheit schafft Klarheit.” Die Zeit, Nr. 14, March 29. [2]
Fischbacher, U., and S. Gächter. 2010. Social Preferences, Beliefs, and the Dynamics of Free Riding in Public Goods Experiments. Am. Econ. Rev. 100:541–556. [14]
Fischer, P., and T. Greitemeyer. 2010. A New Look at Selective-Exposure Effects: An Integrative Model. Curr. Dir. Psychol. Sci. 19:384–389. [14]
Fischer, P., J. I. Krueger, T. Greitemeyer, et al. 2011. The Bystander Effect: A Meta-Analytic Review on Bystander Intervention in Dangerous and Non-Dangerous Emergencies. Psychol. Bull. 137:517–537. [14]
Fisher, M. 1992. Stasi File Undercuts East German: Prominent Politician Shown to Have Killed 6 Jews in World War II. Washington Post, A12. [2]
Fiske, A. P., and P. E. Tetlock. 1997. Taboo Trade-Offs: Reactions to Transactions That Transgress the Spheres of Justice. Polit. Psychol. 18:255–297. [15]
Fiske, S. T. 2018. Stereotype Content: Warmth and Competence Endure. Curr. Dir. Psychol. Sci. 27:67–73. [14]
Foucault, M. 1972. Archaeology of Knowledge and the Discourse on Language (transl. A. M. Sheridan Smith). New York: Pantheon Books. [2]
Fouchier, R. A. M., A. García-Sastre, and Y. Kawaoka. 2012. Pause on Avian Flu Transmission Studies. Nature 481:443–443. [1]
Frank, R. H. 1988. Passions Within Reason: The Strategic Role of Emotions. New York: W. W. Norton. [3, 10]
———. 2011. The Strategic Role of the Emotions. Emotion Rev. 3:252–254. [3]
Frankfurt, H. G. 2005. On Bullshit. Princeton: Princeton Univ. Press. [7]
Freddi, E. 2017. Do People Avoid Morally Relevant Information? Evidence from the Refugee Crisis. Center Discussion Paper Series No. 2017-034. SSRN Elec. J., Sept. 22, 2017. [10]


Frederick, S., and G. Loewenstein. 1999. Hedonic Adaptation. In: Well-Being: Foundations of Hedonic Psychology, ed. D. Kahneman et al., pp. 302–329. New York: Russell Sage Foundation. [17]
Frei, R. 2018. “In My Home Nobody Spoke about Religion, Politics or Football”: Communicative Silences among Generations in Argentina and Chile. Mem. Stud. 5: doi:10.1177/1750698017754249. [2]
Freud, S. 1950. Remembering, Repeating and Working-Through. In: The Standard Edition of the Complete Psychological Works of Sigmund Freud, ed. J. Strachey, pp. 145–157, vol. XII. London: Hogarth. [1]
Frevert, U., ed. 2019. Moral Economies. Göttingen: Vandenhoeck and Ruprecht. [14]
Frey, D. 1982. Different Levels of Cognitive Dissonance, Information Seeking, and Information Avoidance. J. Pers. Soc. Psychol. 43:1175–1183. [17]
Fried, J., and M. Stolleis, eds. 2009. Wissenskulturen: Über die Erzeugung und Weitergabe von Wissen. Frankfurt: Campus Verlag. [14]
Friedman, M. 1953. The Methodology of Positive Economics. In: Essays in Positive Economics. Chicago: Univ. Chicago Press. [13]
Fuchs, A. 2006. From “Vergangenheitsbewältigung” to Generational Memory Contests in Günter Grass, Monika Maron and Uwe Timm. German Life Lett. 59:169–186. [2]
Fudenberg, D., and J. Tirole. 1991. Game Theory. Cambridge, MA: MIT Press. [9]
Fulbrook, M. 1995. Anatomy of a Dictatorship: Inside the GDR, 1949–1989. New York: Oxford Univ. Press. [2]
Funke, M., M. Schularick, and C. Trebesch. 2016. Going to Extremes: Politics after Financial Crises, 1870–2014. Eur. Econ. Rev. 88:227–260. [7]
Galston, W. A. 2018. The Populist Challenge to Liberal Democracy. J. Democracy 29:5–19. [7]
Garcia, S. M., A. Tor, and T. M. Schiff. 2013. The Psychology of Competition: A Social Comparison Perspective. Persp. Psychol. Sci. 8:634–650. [1]
Gardner, A. J., and J. Griffiths. 2014. Propranolol, Post-Traumatic Stress Disorder, and Intensive Care: Incorporating New Advances in Psychiatry into the ICU. Crit. Care 18:698. [6, 10]
Garfinkel, H. 1956. Conditions of Successful Degradation Ceremonies. Am. J. Sociol. 61:420–424. [2]
Gassert, P., and A. E. Steinweis, eds. 2007. Coping with the Nazi Past: West German Debates on Nazism and Generational Conflict, 1955–1975, Studies in German History, vol. 2, C. Mauch, series ed. New York: Berghahn. [2]
Gauck, J. 1994. Gegen den Schlußstrich: Gespräch mit dem Stasi-Akten-Verwalter Joachim Gauck. Evangel. Komment. 27:341–344. [2]
Gauthier, D. 1986. Morals by Agreement. New York: Oxford Univ. Press. [13]
Geisler, W. S. 1989. Sequential Ideal-Observer Analysis of Visual Discriminations. Psychol. Rev. 96:267–314. [14]
———. 2011. Contributions of Ideal Observer Theory to Vision Research. Vision Res. 51:771–781. [14]
Gestrich, A. 1994. Absolutismus und Öffentlichkeit: Politische Kommunikation in Deutschland zu Beginn des 18. Jahrhunderts [Absolutism and the Public Sphere: Political Communication in Germany at the Beginning of the 18th Century]. Kritische Studien zur Geschichtswissenschaft. Göttingen: Vandenhoeck and Ruprecht. [2]
Gibbons, M., C. Limoges, H. Nowotny, et al. 2006. The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage Publications. [2]


Gieseke, J. 2001. Der Mielke-Konzern: Die Geschichte der Stasi 1945–1990. Stuttgart: Deutsche Verlags-Anstalt. [2]
Gigerenzer, G. 2014. Risk Savvy: How to Make Good Decisions. New York: Viking Penguin. [10]
Gigerenzer, G., and H. Brighton. 2009. Homo Heuristicus: Why Biased Minds Make Better Inferences. Top. Cogn. Sci. 1:107–143. [13, 17]
Gigerenzer, G., and W. Gaissmaier. 2011. Heuristic Decision Making. Annu. Rev. Psychol. 62:451–482. [17]
Gigerenzer, G., and R. Garcia-Retamero. 2017. Cassandra’s Regret: The Psychology of Not Wanting to Know. Psychol. Rev. 124:179–196. [5, 8, 10, 14]
Gigerenzer, G., and D. G. Goldstein. 1996. Reasoning the Fast and Frugal Way: Models of Bounded Rationality. Psychol. Rev. 103:650–669. [9, 13]
Gigerenzer, G., and J. A. M. Gray, eds. 2011. Better Doctors, Better Patients, Better Decisions: Envisioning Healthcare 2020, Strüngmann Forum Reports, vol. 6, J. Lupp, series ed. Cambridge, MA: MIT Press. [Preface]
Gigerenzer, G., R. Hertwig, and T. Pachur, eds. 2011. Heuristics: The Foundations of Adaptive Behavior. New York: Oxford Univ. Press. [1, 17]
Gigerenzer, G., and E. M. Kurz. 2001. Vicarious Functioning Reconsidered: A Fast and Frugal Lens Model. In: The Essential Brunswik: Beginnings, Explications, Applications, ed. K. R. Hammond and T. R. Stewart, pp. 342–347. Oxford: Oxford Univ. Press. [4]
Gigerenzer, G., and D. J. Murray. 1987. Cognition as Intuitive Statistics. Hillsdale: Lawrence Erlbaum. [14]
Gigerenzer, G., and R. Selten, eds. 2001. Bounded Rationality: The Adaptive Toolbox, Dahlem Workshop Reports, vol. 84, J. Lupp, series ed. Cambridge, MA: MIT Press. [14]
Gigerenzer, G., and P. M. Todd. 2012. What Is Ecological Rationality? In: Ecological Rationality: Intelligence in the World, ed. G. Gigerenzer et al. New York: Oxford Univ. Press. [13]
Gigerenzer, G., P. M. Todd, and ABC Research Group. 2000. Simple Heuristics That Make Us Smart. Evolution and Cognition, S. Stich, series ed. Oxford: Oxford Univ. Press. [8]
Gill, D., and U. Schröter. 1991. Das Ministerium für Staatssicherheit: Anatomie des Mielke-Imperiums. Berlin: Rowohlt. [2]
Gill, M., and G. Taylor. 2004. Preventing Money Laundering or Obstructing Business? Financial Companies’ Perspectives on “Know Your Customer” Procedures. Br. J. Criminol. 44:582–594. [16]
Giroux, H. A. 2018. Trump and the Legacy of a Menacing Past. Cult. Stud. 33:1–29. [7]
Gitlin, T. 1993. I Did Not Imagine That I Lived in Truth. New York Times. [2]
Glennon, T. 2000. Somebody’s Child: Evaluating the Erosion of the Marital Presumption of Paternity. W. VA Law Rev. 102:547–605. [16]
Gliwa, C., I. R. Yurkiewicz, L. S. Lehmann, et al. 2016. Institutional Review Board Perspectives on Obligations to Disclose Genetic Incidental Findings to Research Participants. Genet. Med. 18:705–711. [12]
Goldin, C., and C. Rouse. 2000. Orchestrating Impartiality: The Impact of “Blind” Auditions on Female Musicians. Am. Econ. Rev. 90:715–741. [1, 4, 7, 14]
Goldstein, D. G., and G. Gigerenzer. 2002. Models of Ecological Rationality: The Recognition Heuristic. Psychol. Rev. 109:75–90. [6, 10]
Gollier, C., and A. Muermann. 2010. Optimal Choice and Beliefs with Ex Ante Savoring and Ex Post Disappointment. Manage. Sci. 56:1272–1284. [10]


Golman, R., D. Hagmann, and G. Loewenstein. 2017. Information Avoidance. J. Econ. Lit. 55:96–135. [2, 3, 5, 8, 9, 11, 14, 17]
Golman, R., and G. Loewenstein. 2016. Information Gaps: A Theory of Preferences Regarding the Presence and Absence of Information. Decision 5:143–164. [8]
———. 2018. Information Gaps: A Theory of Preferences Regarding the Presence and Absence of Information. Decision 5:143–164. [9, 10]
Golman, R., G. Loewenstein, K. O. Moene, and L. Zarri. 2016. The Preference for Belief Consonance. J. Econ. Perspect. 30:165–188. [8]
Golman, R., G. Loewenstein, A. Molnar, and S. Saccardo. 2019. The Demand for, and Avoidance of, Information. SSRN Elec. J. [5, 8, 10]
Good, I. J. 1950. Probability and the Weighing of Evidence. New York: Griffin. [14]
———. 1967. On the Principle of Total Evidence. Br. J. Phil. Sci. 17:319–321. [1]
Goodin, R. E. 1995. Laundering Preferences. In: Utilitarianism as a Public Philosophy, ed. R. E. Goodin, pp. 132–148, Cambridge Studies in Philosophy and Public Policy, D. MacLean, series ed. New York: Cambridge Univ. Press. [11]
———. 2004. Reflective Democracy. Oxford Political Theory. New York: Oxford Univ. Press. [14]
Gosseries, A. 2008. On Future Generations’ Future Rights. J. Political Philosophy 16:446–474. [14]
Gosseries, A., and T. Parr. 2005. Publicity. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2018/entries/publicity/. (accessed Jan. 9, 2020). [4]
GOV.UK. 2018. Employers: Preventing Discrimination. https://www.gov.uk/employer-preventing-discrimination/recruitment. (accessed Jan. 2, 2020). [16]
Grant, S., A. Kajii, and B. Polak. 1998. Intrinsic Preference for Information. J. Econ. Theory 83:233–259. [1]
Gray, T. 1747. Ode on a Distant Prospect of Eton College. Thomas Gray Archive. http://www.thomasgray.org/cgi-bin/display.cgi?text=odec. (accessed Jan. 13, 2020). [1]
Green, M. J., and J. R. Botkin. 2003. Genetic Exceptionalism in Medicine: Clarifying the Differences between Genetic and Nongenetic Tests. Ann. Intern. Med. 138:571–575. [12]
Green, R. C., J. S. Berg, W. W. Grody, et al. 2013. ACMG Recommendations for Reporting of Incidental Findings in Clinical Exome and Genome Sequencing. Genet. Med. 15:565–574. [12, 17]
Greene, J. D. 2016. Solving the Trolley Problem. In: A Companion to Experimental Philosophy, ed. J. Sytsma and W. Buckwalter, pp. 175–189, Blackwell Companions to Philosophy. Malden, MA: Wiley Blackwell. [14]
Griffiths, T. L., F. Lieder, and N. D. Goodman. 2015. Rational Use of Cognitive Resources: Levels of Analysis between the Computational and the Algorithmic. Top. Cogn. Sci. 7:217–229. [14]
Grinberg, N., K. Joseph, L. Friedland, B. Swire-Thompson, and D. Lazer. 2019. Fake News on Twitter During the 2016 US Presidential Election. Science 363:374–378. [7]
Gross, M., and L. McGoey, eds. 2015. Routledge International Handbook of Ignorance Studies, Routledge International Handbooks. Abingdon: Routledge. [1, 2, 5]
Großbölting, T., and S. Kittel, eds. 2019. Welche “Wirklichkeit” und wessen “Wahrheit”? Das Geheimdienstarchiv als Quelle und Medium der Wissensproduktion [What “Reality” and Whose “Truth?” The Secret Service Archive as Source and Medium of Knowledge Production]. Göttingen: Vandenhoeck and Ruprecht. [2]
Grossman, D. C., S. J. Curry, D. K. Owens, et al. 2018. Screening for Prostate Cancer: US Preventive Services Task Force Recommendation Statement. JAMA 319:1901–1913. [14]


Grossman, Z., and J. J. Van der Weele. 2017. Self-Image and Willful Ignorance in Social Decisions. J. Eur. Econ. Assoc. 15:173–217. [8, 10]
Gruber, J., and B. Köszegi. 2001. Is Addiction “Rational?” Theory and Evidence. Q. J. Econ. 116:1261–1303. [11]
Guess, A., J. Nagler, and J. Tucker. 2019. Less Than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook. Sci. Adv. 5:eaau4586. [7]
Gul, F., and W. Pesendorfer. 2008. The Case for Mindless Economics. In: The Foundations of Positive and Normative Economics: A Handbook, ed. A. Caplin and A. Schotter, pp. 3–41, Handbooks in Economic Methodologies. New York: Oxford Univ. Press. [11]
Habermas, J. 1992. Bemerkungen zu einer verworrenen Diskussion: Was bedeutet “Aufarbeitung der Vergangenheit” heute? Die Zeit, Nr. 15, April 13. [2]
Hage, V., and K. Thimm. 2010. Oralverkehr mit Vokalen [Oral Sex with Vowels]. Spiegel Online. http://magazin.spiegel.de/EpubDelivery/spiegel/pdf/73290122. (accessed Jan. 13, 2020). [1]
Hagemann, T. A., and J. Grinstein. 1997. Mythology of Aggregate Corporate Knowledge: A Deconstruction. George Washington Law Rev. 65:210–247. [16]
Hahl, O., M. Kim, and E. W. Zuckerman Sivan. 2018. The Authentic Appeal of the Lying Demagogue: Proclaiming the Deeper Truth About Political Illegitimacy. Am. Sociol. Rev. 83:1–33. [7]
Hahl, O., E. W. Zuckerman, and M. Kim. 2017. Why Elites Love Authentic Lowbrow Culture: Overcoming High-Status Denigration with Outsider Art. Am. Sociol. Rev. 82:828–856. [7]
Hahn, U. 2014. Experiential Limitation in Judgment and Decision. Top. Cogn. Sci. 6:229–244. [14]
Hahn, U., and A. J. Harris. 2014. What Does It Mean to Be Biased: Motivated Reasoning and Rationality. Psychol. Learn. Motiv. 61:41–102. [5]
Hahn, U., M. von Sydow, and C. Merdes. 2019. How Communication Can Make Voters Choose Less Well. Top. Cogn. Sci. 11:194–206. [14]
Haidt, J., and J. Baron. 1996. Social Roles and the Moral Judgement of Acts and Omissions. Eur. J. Soc. Psychol. 26:201–218. [14]
Hallin, D. C. 2018. Mediatisation, Neoliberalism and Populisms: The Case of Trump. Contemp. Soc. Sci. 14:1–12. [7]
Halpern, J., and R. M. Arnold. 2008. Affective Forecasting: An Unrecognized Challenge in Making Serious Health Decisions. J. Gen. Internal Med. 23:1708–1712. [12]
Hamdani, A. 2007. Mens Rea and the Cost of Ignorance. VA Law Rev. 93:415–457. [16]
Hammond, D. 2011. Health Warning Messages on Tobacco Products: A Review. BMJ Evid. Based Med. 20:327–337. [16]
Hammond, K. R. 2000. Coherence and Correspondence Theories in Judgment and Decision Making. In: Judgment and Decision Making: An Interdisciplinary Reader, ed. T. T. Connolly et al., pp. 53–65, Cambridge Series on Judgement and Decision Making. New York: Cambridge Univ. Press. [14]
Hammond, K. R., and T. R. Stewart. 2001. The Essential Brunswik: Beginnings, Explications, Applications. Oxford: Oxford Univ. Press. [4]
Hanna, R. 2006. Rationality and Logic. Cambridge, MA: MIT Press. [13]
Harris, J., and K. Keywood. 2001. Ignorance, Information and Autonomy. Theor. Med. Bioeth. 22:415–436. [1, 12, 16]
Hart, W., D. Albarracín, A. H. Eagly, et al. 2009. Feeling Validated versus Being Correct: A Meta-Analysis of Selective Exposure to Information. Psychol. Bull. 135:555–588. [1]


Haselton, M. G., G. A. Bryant, A. Wilke, et al. 2009. Adaptive Rationality: An Evolutionary Perspective on Cognitive Bias. Soc. Cogn. 27:733–763. [14]
Hasher, L., and R. T. Zacks. 1988. Working Memory, Comprehension, and Aging: A Review and a New View. In: The Psychology of Learning and Motivation, ed. G. H. Bower, pp. 193–225, vol. 22. Elsevier. [5]
Hastie, R., and T. Kameda. 2005. The Robust Beauty of Majority Rules in Group Decisions. Psychol. Rev. 112:494–508. [14]
Haugaard, M. 1997. The Constitution of Power: A Theoretical Analysis of Power, Knowledge and Structure. Manchester: Manchester Univ. Press. [2]
Häuser, W., E. Hansen, and P. Enck. 2012. Nocebo Phenomena in Medicine: Their Relevance in Everyday Clinical Practice. Dtsch. Arztebl. Intl. 109:459–465. [15]
Hawking, S., M. Tegmark, S. Russell, and F. Wilczek. 2014. Transcending Complacency on Superintelligent Machines. The Huffington Post Blog. (accessed Mar. 19, 2020). [14]
Heck, P. R., and J. I. Krueger. 2017. Social Perception in the Volunteer’s Dilemma: Role of Choice, Outcome, and Expectation. Soc. Cogn. 35:497–519. [14]
Heck, P. R., and M. N. Meyer. 2019. Information Avoidance in Genetic Health: Perceptions, Norms, and Preferences. Soc. Cogn. 37:266–293. [14]
Heckhausen, J. 2007. The Motivation-Volition Divide and Its Resolution in Action-Phase Models of Developmental Regulation. Res. Hum. Dev. 4:163–180. [5]
Heider, F. 1958. The Psychology of Interpersonal Relations. New York: John Wiley and Sons. [8]
Hellman, D. 2009. Willfully Blind for Good Reason. Crim. Law Phil. 3:301–316. [1]
Hellman, M., and C. Wagnsson. 2017. How Can European States Respond to Russian Information Warfare? An Analytical Framework. Eur. Security 26:153–170. [7]
Hermalin, B. E., and M. L. Katz. 2009. Information and the Hold-up Problem. RAND J. Econ. 40:405–423. [3]
Herring, E., and P. Robinson. 2014a. Deception and Britain’s Road to War in Iraq. Intl. J. Contemp. Iraqi Stud. 8:213–232. [7]
———. 2014b. Report X Marks the Spot: The British Government’s Deceptive Dossier on Iraq and WMD. Polit. Sci. Q. 129:551–584. [7]
Herring, J., and C. Foster. 2012. “Please Don’t Tell Me”: The Right Not to Know. Camb. Q. Healthc. Ethic. 21:20–29. [12]
Hertwig, R., and C. Engel. 2016. Homo Ignorans: Deliberately Choosing Not to Know. Persp. Psychol. Sci. 11:359–372. [Preface, 2–11, 13–17]
Hertwig, R., and S. M. Herzog. 2009. Fast and Frugal Heuristics: Tools of Social Rationality. Soc. Cogn. 27:661–698. [14]
Hertwig, R., U. Hoffrage, and ABC Research Group. 2013. Simple Heuristics in a Social World. New York: Oxford Univ. Press. [1, 9]
Hertwig, R., and T. J. Pleskac. 2010. Decisions from Experience: Why Small Samples? Cognition 115:225–237. [8]
Hertwig, R., T. J. Pleskac, T. Pachur, and the Center for Adaptive Rationality. 2019. Taming Uncertainty. Cambridge, MA: MIT Press. [17]
Hertwig, R., and K. G. Volz. 2013. Abnormality, Rationality, and Sanity. Trend. Cogn. Sci. 17:547–549. [14]
Higgins, E. T. 1997. Beyond Pleasure and Pain. Am. Psychol. 52:1280–1300. [14]
Higginson, A. D., T. W. Fawcett, P. C. Trimmer, J. M. McNamara, and A. I. Houston. 2012. Generalized Optimal Risk Allocation: Foraging and Antipredator Behavior in a Fluctuating Environment. Am. Nat. 180:589–603. [14]
High, C., A. H. Kelly, and J. Mair. 2012. The Anthropology of Ignorance: An Ethnographic Approach. New York: Palgrave Macmillan. [1, 5]


Hightow, L. B., W. C. Miller, P. A. Leone, et al. 2003. Failure to Return for HIV Posttest Counseling in an STD Clinic Population. AIDS Educ. Prev. 15:282–290. [1]
Hilbe, C., M. Hoffman, and M. A. Nowak. 2015. Cooperate without Looking in a Non-Repeated Game. Games 6:458–472. [9]
Hilbert, M., and P. López. 2011. The World’s Technological Capacity to Store, Communicate, and Compute Information. Science 332:60–65. [1]
Hinck, R. S., H. Hawthorne, and J. Hawthorne. 2018. Authoritarians Don’t Deliberate: Cultivating Deliberation and Resisting Authoritarian Tools in an Age of Global Nationalism. J. Pub. Delib. 14: [7]
Hirshleifer, J., and J. G. Riley. 1992. The Analytics of Uncertainty and Information. Cambridge Surveys of Economic Literature, M. Perlman, series ed. Cambridge: Cambridge Univ. Press. [10]
Ho, E. H., D. Hagmann, and G. Loewenstein. 2018. Measuring Information Preferences. SSRN Elec. J., September 14, 2018. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3249768. (accessed Jan. 9, 2020). [5]
Hobbes, T. 1651/1968. Leviathan. London: Penguin. [1]
Hobsbawm, E. 1983. Inventing Traditions. In: The Invention of Tradition, ed. E. Hobsbawm and T. Ranger, pp. 1–14. Cambridge: Cambridge Univ. Press. [2]
Hoffman, M., C. Hilbe, and M. A. Nowak. 2018. The Signal-Burying Game Can Explain Why We Obscure Positive Traits and Good Deeds. Nature Hum. Behav. 2:397–404. [9]
Hoffman, M., E. Yoeli, and C. D. Navarrete. 2016. Game Theory and Morality. In: The Evolution of Morality, ed. T. K. Shackelford and R. D. Hansen, pp. 289–316, Evolutionary Psychology, T. K. Shackelford and V. A. Weekes-Schackelford, series eds. Basel: Springer. [9, 10]
Hoffman, M., E. Yoeli, and M. A. Nowak. 2015. Cooperate without Looking: Why We Care What People Think and Not Just What They Do. PNAS 112:1727–1732. [9]
Hoffrage, U., R. Hertwig, and G. Gigerenzer. 2000. Hindsight Bias: A By-Product of Knowledge Updating. J. Exp. Psychol. Learn. Mem. Cogn. 26:566–581. [10]
Hogarth, R. M., and N. Karelaia. 2007. Heuristic and Linear Models of Judgment: Matching Rules and Environments. Psychol. Rev. 114:733–758. [17]
Holtzman, N. A. 2013. ACMG Recommendations on Incidental Findings Are Flawed Scientifically and Ethically. Genet. Med. 15:750–751. [12]
Holzer, H. J., S. Raphael, and M. Stoll. 2006. Perceived Criminality, Criminal Background Checks and the Racial Hiring Practices of Employers. J. Law Econ. 49:451–480. [4]
Hooker, B. 2000. Ideal Code, Real World: A Rule-Consequentialist Theory of Morality. New York: Oxford Univ. Press. [14]
———. 2016. Rule Consequentialism. In: The Stanford Encyclopedia of Philosophy, ed. E. N. Zalta. https://plato.stanford.edu/archives/win2016/entries/consequentialism-rule/. (accessed Jan. 31, 2020). [17]
Howard, P. N., S. Woolley, and R. Calo. 2018. Algorithms, Bots, and Political Communication in the US 2016 Election: The Challenge of Automated Political Communication for Election Law and Administration. J. Inform. Technol. Politics 15:81–93. [7]
Howard, R. A. 1966. Information Value Theory. IEEE Trans. Syst. Sci. Cybernet. 2:22–26. [14]
Howell, J. L., and J. A. Shepperd. 2012. Reducing Information Avoidance through Affirmation. Psychol. Sci. 23:141–145. [1, 17]


Howell, J. L., and J. A. Shepperd. 2013. Reducing Health-Information Avoidance through Contemplation. Psychol. Sci. 24:1696–1703. [1, 5]
Howes, A., R. L. Lewis, and A. Vera. 2009. Rational Adaptation under Task and Processing Constraints: Implications for Testing Theories of Cognition and Action. Psychol. Rev. 116:717–751. [14]
Hsee, C. K., Y. Yang, N. H. Li, and L. X. Shen. 2009. Wealth, Warmth, and Well-Being: Whether Happiness Is Relative or Absolute Depends on Whether It Is About Money, Acquisition, or Consumption. J. Market. Res. 46:396–409. [8]
Huck, S., N. Szech, and L. M. Wenner. 2015. More Effort with Less Pay: On Information Avoidance, Belief Design and Performance. Working Paper Series in Economics, Karlsruher Institut für Technologie, No. 72. http://econstor.eu/bitstream/10419/120879/1/836112962.pdf. (accessed Jan. 13, 2020). [1]
Husak, D. H., and C. A. Callender. 1994. Wilful Ignorance, Knowledge, and the Equal Culpability Thesis: A Study of the Deeper Significance of the Principle of Legality. Wisconsin Law Rev. 1994:29–69. [16]
Hutcherson, C. A., and J. J. Gross. 2011. The Moral Emotions: A Social-Functionalist Account of Anger, Disgust, and Contempt. J. Pers. Soc. Psychol. 100:719–737. [1]
Inkster, N. 2016. Information Warfare and the US Presidential Election. Survival 58:23–32. [7]
Iyengar, S. S., R. E. Wells, and B. Schwartz. 2006. Doing Better but Feeling Worse: Looking for the “Best” Job Undermines Satisfaction. Psychol. Sci. 17:143–150. [5]
Iyengar, S. S., and S. J. Westwood. 2015. Fear and Loathing across Party Lines: New Evidence on Group Polarization. Am. J. Pol. Sci. 59:690–707. [17]
Jackson, M. O., and L. Yariv. 2014. Present Bias and Collective Dynamic Choice in the Lab. Am. Econ. Rev. 104:4184–4204. [14]
Jacobs, J., and T. Crepet. 2008. The Expanding Scope, Use, and Availability of Criminal Records. NY Univ. J. Legis. Public Policy 11:177–213. [16]
Jacobson, G. C. 2010. Perception, Memory, and Partisan Polarization on the Iraq War. Polit. Sci. Q. 125:31–56. [7]
Jacquet, J. 2015. Is Shame Necessary? New Uses for an Old Tool. New York: Pantheon Books. [2]
Jagau, S., and M. van Veelen. 2017. A General Evolutionary Framework for the Role of Intuition and Deliberation in Cooperation. Nature Hum. Behav. 1:0152. [9]
Jahn, R., and P. Wensierski. 2005. Die Freunde als Stasi-Spitzel. Die Eröffnung der Gauckbehörde [Friends as Stasi Informers. The Launch of the Gauck Authority]. Bonn: Bundeszentrale für politische Bildung und RBB. [2]
Jamieson, C. E. 2013. Gun Violence Research: History of the Federal Funding Freeze: Newtown Tragedy May Lead to Lifting of Freeze in Place Since 1996. Psychol. Sci. Agenda, Feb. 2013. [14]
Jamieson, K. H. 2018. Cyberwar: How Russian Hackers and Trolls Helped Elect a President: What We Don’t, Can’t, and Do Know. Oxford: Oxford Univ. Press. [7]
Jamison, J., and J. Wegner. 2010. Multiple Selves in Intertemporal Choice. J. Econ. Psych. 31:832–839. [14]
Jarausch, K. H. 2008. Memory Wars: German Debates About the Legacy of Dictatorship. In: Berlin Since the Wall’s End: Shaping Society and Memory in the German Metropolis Since 1989, ed. J. A. Williams, pp. 90–109. Newcastle: Cambridge Scholars Publ. [2]
Jarvik, G. P., L. M. Amendola, J. S. Berg, et al. 2014. Return of Genomic Results to Research Participants: The Floor, the Ceiling, and the Choices in Between. Am. J. Hum. Genet. 94:818–826. [12]


Jellinek, G. 1898. Das Recht der Minoritäten: Vortrag gehalten in der Juristischen Gesellschaft zu Wien [The Rights of Minorities]. Vienna: Hölder. [14]
Jenkins, A. C., D. Dodell-Feder, R. Saxe, and J. Knobe. 2014. The Neural Bases of Directed and Spontaneous Mental State Attributions to Group Agents. PLoS One 9:e105341. [14]
Joas, H. 1997. Die Entstehung der Werte. Frankfurt: Suhrkamp. [14]
Johnson, D. D. P., and J. H. Fowler. 2011. The Evolution of Overconfidence. Nature 477:317–320. [8, 9]
Joly, Y., I. Ngueng Feze, and J. Simard. 2013. Genetic Discrimination and Life Insurance: A Systematic Review of the Evidence. BMC Med. 11:25. [12]
Jones, C. J. 1990. Autonomy and Informed Consent in Medical Decisionmaking: Toward a New Self-Fulfilling Prophecy. Wash. Lee Law Rev. 47:379–430. [16]
Jones, S. 2009. Conflicting Evidence: Hermann Kant and the Opening of the Stasi Files. German Life Lett. 62:190–205. [2]
———. 2014. The Media of Testimony: Remembering the East German Stasi in the Berlin Republic. Basingstoke: Palgrave Macmillan. [1]
Jönsson, M. L., U. Hahn, and E. J. Olsson. 2015. The Kind of Group You Want to Belong To: Effects of Group Structure on Group Accuracy. Cognition 142:191–204. [14]
Jordan, J. J., M. Hoffman, M. A. Nowak, and D. G. Rand. 2016. Uncalculating Cooperation Is Used to Signal Trustworthiness. PNAS 113:8658–8663. [9]
Jost, J. T., P. Barberá, R. Bonneau, et al. 2018. How Social Media Facilitates Political Protest: Information, Motivation, and Social Networks. Polit. Psychol. 39:85–118. [7]
Juslin, P., and H. Olsson. 2005. Capacity Limitations and the Detection of Correlations: Comment on Kareev (2000). Psychol. Rev. 112:256–267. [14]
Kadane, J. B., M. Schervish, and T. Seidenfeld. 2008. Is Ignorance Bliss? J. Philosophy 105:5–36. [1, 3]
Kafka, P. 2016. An Astonishing Number of People Believe Pizzagate, the Facebook-Fueled Clinton Sex Ring Conspiracy Story, Could Be True. Recode, Dec. 9, 2016. [7]
Kahneman, D. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. [1, 17]
Kahneman, D., and S. Frederick. 2002. Representativeness Revisited: Attribute Substitution in Intuitive Judgment. In: Heuristics and Biases, ed. T. Gilovich et al., pp. 49–81. New York: Cambridge Univ. Press. [4]
Kahneman, D., and D. Lovallo. 1993. Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking. Manage. Sci. 39:17–31. [1]
Kahneman, D., and A. Tversky. 1979. Prospect Theory: An Analysis of Decision under Risk. Econometrica 47:263–291. [5]
———. 1984. Choices, Values, and Frames. Am. Psychol. 39:341–350. [5]
Kahneman, D., P. Wakker, and R. Sarin. 1997. Back to Bentham? Explorations of Experienced Utility. Q. J. Econ. 112:375–405. [11]
Kalev, A., F. Dobbin, and E. Kelly. 2006. Best Practices or Best Guesses? Assessing the Efficacy of Corporate Affirmative Action and Diversity Policies. Am. Sociol. Rev. 71:589–617. [16]
Kang, M. J., M. Hsu, I. M. Krajbich, et al. 2009. The Wick in the Candle of Learning: Epistemic Curiosity Activates Reward Circuitry and Enhances Memory. Psychol. Sci. 20:963–973. [1]
Kant, I. 1784. Beantwortung der Frage: Was ist Aufklärung? [Answering the Question: What Is Enlightenment?]. Berl. Monatsschrift 4:481–494. [1, 14, 17]
———. 1785/1993. Grounding for the Metaphysics of Morals (transl. by J. W. Ellington, 3rd edition). Indianapolis: Hackett. [17]


Kaptchuk, T. J. 1998. Intentional Ignorance: A History of Blind Assessment and Placebo Controls in Medicine. Bull. Hist. Med. 72:389–433. [1]
Kareev, Y. 2012. Advantages of Cognitive Limitations. In: Evolution and the Mechanisms of Decision Making, ed. P. Hammerstein and J. R. Stevens, pp. 169–181, Strüngmann Forum Report, vol. 11, J. Lupp, series ed. Cambridge, MA: MIT Press. [10]
Kareev, Y., and J. Avrahami. 2007. Choosing between Adaptive Agents: Some Unexpected Implications of Level of Scrutiny. Psychol. Sci. 18:636–641. [9, 10, 14]
Kareev, Y., I. Lieberman, and M. Lev. 1997. Through a Narrow Window: Sample Size and the Perception of Correlation. J. Exp. Psychol. 126:278–287. [14]
Karelaia, N., and R. M. Hogarth. 2008. Determinants of Linear Judgment: A Meta-Analysis of Lens Model Studies. Psychol. Bull. 134:404–426. [4]
Karlsson, N., G. Loewenstein, and D. Seppi. 2009. The Ostrich Effect: Selective Attention to Information. J. Risk Uncertainty 38:95–115. [1, 5]
Kasner, J. F. 2017. Navigating the 21st Century: Combating Populism to Create a Positive-Sum International System. SAIS Rev. Int. Aff. 37:19–31. [7]
Katsikopoulos, K. V., L. J. Schooler, and R. Hertwig. 2010. The Robust Beauty of Ordinary Information. Psychol. Rev. 117:1259–1266. [14]
Kaufman, D., J. Murphy, J. Scott, and K. Hudson. 2008. Subjects Matter: A Survey of Public Opinions About a Large Genetic Cohort Study. Genet. Med. 10:831–839. [12]
Kaufmann, C. 2004. Threat Inflation and the Failure of the Marketplace of Ideas. Intl. Security 29:5–48. [7]
Keil, F. C. 2006. Explanation and Understanding. Annu. Rev. Psychol. 57:227–254. [5]
Kernis, M. H., and B. M. Goldman. 2006. A Multicomponent Conceptualization of Authenticity: Theory and Research. Adv. Exp. Soc. Psychol. 38:283–357. [7]
Kervyn, K., V. Yzerbyt, and C. M. Judd. 2010. Compensation between Warmth and Competence: Antecedents and Consequences of a Negative Relation between the Two Fundamental Dimensions of Social Perception. Eur. Rev. Soc. Psychol. 21:155–187. [14]
Kessler, G. 2017. President Trump Announces a Major U.S. Steel Expansion — That Isn’t Happening. Washington Post, June 28, 2018. [7]
———. 2018a. Not Just Misleading. Not Merely False. A Lie. Washington Post, August 23, 2018. [7]
———. 2018b. A Year of Unprecedented Deception: Trump Averaged 15 False Claims a Day in 2018. Washington Post, Dec. 30, 2018. [7]
King, G., J. Pan, and M. E. Roberts. 2017. How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument. Am. Polit. Sci. Rev. 111:484–501. [14]
Kinzler, K. D., K. Shutts, J. DeJesus, and E. S. Spelke. 2009. Accent Trumps Race in Guiding Children’s Social Preferences. Soc. Cogn. 27:623–634. [5]
Kleinberg, J., J. Ludwig, S. Mullainathan, and A. Rambachan. 2018. Algorithmic Fairness. AEA Pap. Proc. 108:22–27. [16]
Klessmann, C., H. J. Misselwitz, and G. Wichert, eds. 1999. Deutsche Vergangenheiten—eine gemeinsame Herausforderung: Der schwierige Umgang mit der doppelten Nachkriegsgeschichte [German Histories—a Shared Challenge: The Difficult Handling of a Double Postwar History]. Berlin: Christoph Links Verlag. [2]
Kleven, H. J., C. T. Kreiner, and E. Saez. 2009. The Optimal Income Taxation of Couples. Econometrica 77:537–560. [11]

Klitzman, R., P. S. Appelbaum, and W. Chung. 2013. Return of Secondary Genomic Findings versus Patient Autonomy: Implications for Medical Care. JAMA 310:369–370. [12]
Kluger, A. N., and A. DeNisi. 1996. The Effects of Feedback Interventions on Performance: Historical Review, a Meta-Analysis, and a Preliminary Feedback Intervention Theory. Psychol. Bull. 119:254–284. [1, 5]
———. 1998. Feedback Interventions: Toward the Understanding of a Double-Edged Sword. Curr. Dir. Psychol. Sci. 7:67–72. [1]
Knoch, H., and B. Möckel. 2017. Moral History: Überlegungen zu einer Geschichte des Moralischen im “langen” 20. Jahrhundert. Zeithistorisch. Forsch. 14:93–111. [14]
Konow, J. 2000. Fair Shares: Accountability and Cognitive Dissonance in Allocation Decisions. Am. Econ. Rev. 90:1072–1091. [8]
Kosinski, M., S. C. Matz, S. D. Gosling, V. Popov, and D. Stillwell. 2015. Facebook as a Research Tool for the Social Sciences: Opportunities, Challenges, Ethical Considerations, and Practical Guidelines. Am. Psychol. 70:543–556. [14]
Köszegi, B. 2003. Health Anxiety and Patient Behavior. J. Health Econ. 22:1073–1084. [8]
———. 2006. Ego Utility, Overconfidence, and Task Choice. J. Eur. Econ. Assoc. 4:673–707. [8]
———. 2010. Utility from Anticipation and Personal Equilibrium. Econ. Theory 44:415–444. [8]
Köszegi, B., and M. Rabin. 2009. Reference-Dependent Consumption Plans. Am. Econ. Rev. 99:909–936. [8]
Kozlov-Davis, J. A. 2001. A Hybrid Approach to the Use of Deliberate Ignorance in Conspiracy Cases. Michigan Law Rev. 100:473–501. [16]
Krause, A., U. Rinne, and K. F. Zimmermann. 2012. Anonymous Job Applications in Europe. IZA J. Eur. Lab. Stud. 1:1–20. [4]
Kreps, D. M. 1979. Representation Theorem for Preference for Flexibility. Econometrica 47:565–577. [8]
Kreps, D. M., P. R. Milgrom, J. Roberts, and R. B. Wilson. 1982. Rational Cooperation in the Finitely Repeated Prisoners’ Dilemma. J. Econ. Theory 27:245–252. [10]
Kreps, D. M., and E. L. Porteus. 1978. Temporal Resolution of Uncertainty and Dynamic Choice Theory. Econometrica 46:185–200. [1, 8]
Kriner, D., and G. Wilson. 2016. The Elasticity of Reality and British Support for the War in Afghanistan. Br. J. Polit. Int. Rel. 18:559–580. [7]
Krueger, J. I., ed. 2012. Social Judgment and Decision Making, Frontiers of Social Psychology. New York: Psychology Press. [14]
———. 2013. Social Projection as a Source of Cooperation. Curr. Dir. Psychol. Sci. 22:289–294. [14]
———. 2019. The Vexing Volunteer’s Dilemma. Curr. Dir. Psychol. Sci. 28:53–58. [14]
Krueger, J. I., M. Acevedo, and J. M. Robbins. 2006. Self as Sample. In: Information Sampling and Adaptive Cognition, ed. K. Fiedler and P. Juslin, pp. 353–377. New York: Cambridge Univ. Press. [14]
Krueger, J. I., and P. R. Heck. 2017. The Heuristic Value of p in Inductive Statistical Inference. Front. Psychol. 8:908. [14]
Krueger, J. I., and A. L. Massey. 2009. A Rational Reconstruction of Misbehavior. Soc. Cogn. 27:785–812. [14]

Kührer-Wielach, F., and M. Nowotnick, eds. 2018. Aus den Giftschränken des Kommunismus: Methodische Fragen zum Umgang mit Überwachungsakten in Zentral- und Südosteuropa [From the Poison Cabinets of Communism: Methodological Questions on Dealing with Surveillance Files in Central and Southeastern Europe]. Regensburg: Verlag Friedrich Pustet. [2]
Kull, S., C. Ramsay, and E. Lewis. 2003. Misperceptions, the Media, and the Iraq War. Polit. Sci. Q. 118:569–598. [7]
Kull, S., C. Ramsay, A. Stephens, et al. 2006. Americans on Iraq: Three Years On. Prog. Int. Policy Attitudes 1–19. [7]
Kunda, Z. 1990. The Case for Motivated Reasoning. Psychol. Bull. 108:480–498. [14, 16]
Kundnani, H. 2009. Utopia or Auschwitz? Germany’s 1968 Generation and the Holocaust. Crises in World Politics. London: C. Hurst and Co. [2]
Kurowska, X., and A. Reshetnikov. 2018. Neutrollization: Industrialized Trolling as a Pro-Kremlin Strategy of Desecuritization. Security Dialogue 49:345–363. [7]
Ladha, K. K. 1992. The Condorcet Jury Theorem, Free Speech, and Correlated Votes. Am. J. Pol. Sci. 36:617–634. [14]
Lahann, B. 1992. Genosse Judas: Die zwei Leben des Ibrahim Böhme [Comrade Judas: The Two Lives of Ibrahim Böhme]. Berlin: Rowohlt. [2]
Lakoff, G. 2017. A Taxonomy of Trump Tweets. Online: WNYC, New York Public Radio.
Lander, E. S., F. Baylis, F. Zhang, et al. 2019. Adopt a Moratorium on Heritable Genome Editing. Nature 567:165–168. [14]
Lapowsky, I. 2017. Everything Attorney General Jeff Sessions Has Forgotten under Oath. Wired, Nov. 17, 2017. [6]
Largent, E. A., and R. T. Snodgrass. 2016. Blind Peer Review by Academic Journals. In: Blinding as a Solution to Bias: Strengthening Biomedical Science, Forensic Science, and Law, ed. C. T. Robertson and A. S. Kesselheim, pp. 75–96. San Diego: Elsevier. [4]
Larrick, R. P., and D. C. Feiler. 2015. Expertise in Decision Making. In: The Wiley Handbook of Judgment and Decision Making, ed. G. Keren and G. Wu, pp. 696–721. Malden, MA: Wiley Blackwell. [14]
Larson, R. G., III. 2013. Forgetting the First Amendment: How Obscurity-Based Privacy and a Right to Be Forgotten Are Incompatible with Free Speech. Commun. Law Policy 18:91–120. [16]
Lau, S. 2008. Information and Bargaining in the Hold-up Problem. RAND J. Econ. 39:266–282. [3]
Laurie, G. T. 1999. In Defense of Ignorance: Genetic Information and the Right Not to Know. Eur. J. Health Law 6:119–132. [12]
———. 2000. Protecting and Promoting Privacy in an Uncertain World: Further Defences of Ignorance and the Right Not to Know. Eur. J. Health Law 7:185–191. [12]
———. 2014. Recognizing the Right Not to Know: Conceptual, Professional, and Legal Implications. J. L. Med. Ethics 42:53–63. [12]
Lázaro-Muñoz, G., J. M. Conley, A. M. Davis, et al. 2015. Looking for Trouble: Preventive Genomic Sequencing in the General Population and the Role of Patient Choice. Am. J. Bioethics 15:3–14. [12]
Lazarsfeld, P. F., and R. Merton. 1954. Friendship as Social Process: A Substantive and Methodological Analysis. In: Freedom and Control in Modern Society, ed. M. Berger et al., Van Nostrand Series in Sociology, W. E. Moore, series ed. New York: Van Nostrand. [10]
Lee, C. J., C. R. Sugimoto, G. Zhang, and B. Cronin. 2013. Bias in Peer Review. J. Am. Soc. Inf. Sci. Technol. 64:2–17. [4]

Lee, J. C., and K. Quealy. 2018. The 487 (598) People, Places and Things Donald Trump Has Insulted on Twitter: A Complete List. https://archivalia.hypotheses.org/73789 (accessed Mar. 18, 2020). [7]
Leipziger-Volkszeitung. 2002. SPD-Altkanzler im Interview mit unserer Zeitung [SPD’s Former Chancellor in Interview with Our Newspaper]. Leipziger Volkszeitung, June 29. [2]
Lessig, L. 2009. Code: And Other Laws of Cyberspace. New York: Basic Books. [17]
Leta Jones, M. 2016. Ctrl + Z: The Right to Be Forgotten. New York: NYU Press. [16]
Lewandowsky, S. 2014. Conspiratory Fascination versus Public Interest: The Case of “Climategate.” Environ. Res. Lett. 9:111004. [15]
———. 2019. The “Post-Truth” World, Misinformation, and Information Literacy: A Perspective from Cognitive Science. In: Informed Societies: Why Information Literacy Matters for Citizenship, Participation and Democracy, ed. S. Goldstein, Information Literacy, Democracy and Citizenship. London: Facet Publishing. [7]
Lewandowsky, S., and D. Bishop. 2016. Research Integrity: Don’t Let Transparency Damage Science. Nature 529:459–461. [15]
Lewandowsky, S., J. Cook, and E. Lloyd. 2016. The “Alice in Wonderland” Mechanics of the Rejection of (Climate) Science: Simulating Coherence by Conspiracism. Synthese 195:175–196. [7]
Lewandowsky, S., U. K. H. Ecker, and J. Cook. 2017. Beyond Misinformation: Understanding and Coping with the Post-Truth Era. J. Appl. Res. Mem. Cogn. 6:353–369. [7]
Lewandowsky, S., U. K. H. Ecker, C. M. Seifert, N. Schwarz, and J. Cook. 2012. Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychol. Sci. Pub. Int. 13:106–131. [7]
Lewandowsky, S., G. E. Gignac, and S. Vaughan. 2013. The Pivotal Role of Perceived Scientific Consensus in Acceptance of Science. Nature Clim. Chang. 3:399–404. [7]
Lewandowsky, S., E. A. Lloyd, and S. Brophy. 2018. When THUNCing Trumps Thinking: What Distant Alternative Worlds Can Tell Us About the Real World. Argumenta 3:217–231. [7]
Lewandowsky, S., and J. Lynam. 2018. Combating “Fake News”: The 21st Century Civic Duty. The Irish Times. [7]
Lewandowsky, S., W. G. K. Stritzke, K. Oberauer, and M. Morales. 2005. Memory for Fact, Fiction, and Misinformation: The Iraq War 2003. Psychol. Sci. 16:190–195. [7]
Lewinsohn-Zamir, D. 2012. The Questionable Efficiency of the Efficient Breach Doctrine. J. Inst. Theoret. Econ. 168:5–26. [1]
Lewis, A. 2002. Engendering Remembrance: Memory, Gender and Informers for the Stasi. New German Crit. 86:103–134. [2]
Lie, T. G., H. M. Binningsbø, and S. Gates. 2012. Post-Conflict Justice and Sustainable Peace. New York: World Bank.
Lieder, F., and T. L. Griffiths. 2019. Resource-Rational Analysis: Understanding Human Cognition as the Optimal Use of Limited Computational Resources. Behav. Brain Sci. 4:1–85. [14]
Lintner, E. 1991. Redebeitrag. 31. Sitzung vom 13. Juni 1991. In: Stenographische Berichte 12. Deutscher Bundestag, ed. Deutscher-Bundestag, pp. 2357–2491. Bonn: Bonner Universitäts-Buchdruckerei. [2]
Lipsey, R. G., and K. Lancaster. 1956. The General Theory of Second Best. Rev. Econ. Stud. 24:11–32. [11]
Lipton, E., D. E. Sanger, and S. Shane. 2016. The Perfect Weapon: How Russian Cyberpower Invaded the U.S. New York Times, Dec. 13, 2016. [7]

List, C., and P. Pettit. 2013. Group Agency: The Possibility, Design and Status of Corporate Agents. Oxford: Oxford Univ. Press. [13]
Livingstone, K. M., and D. M. Isaacowitz. 2019. Age Differences and Similarities in Spontaneous Use of Emotion Regulation Tactics across Five Laboratory Tasks. J. Exp. Psychol. 148:1972–1992. [5]
Loewenstein, G. 1987. Anticipation and the Valuation of Delayed Consumption. Econ. J. 97:666–684. [1, 8]
———. 1994. The Psychology of Curiosity: A Review and Reinterpretation. Psychol. Bull. 116:75–98. [1, 8]
———. 1999. Because It Is There: The Challenge of Mountaineering: For Utility Theory. Kyklos 52:315–343. [14]
Loewenstein, G., and E. Haisley. 2008. The Economist as Therapist: Methodological Ramifications of “Light” Paternalism. In: The Foundations of Positive and Normative Economics: A Handbook, ed. A. Caplin and A. Schotter, pp. 210–247, Handbooks in Economic Methodologies. New York: Oxford Univ. Press. [11]
Loewenstein, G., and A. Molnar. 2018. The Renaissance of Belief-Based Utility in Economics. Nature Hum. Behav. 2:166–167. [14]
Loewenstein, G., and D. A. Moore. 2004. When Ignorance Is Bliss: Information Exchange and Inefficiency in Bargaining. J. Legal Stud. 33:37–58. [1]
Loftus, E. F. 1997. Creating False Memories. Sci. Am. 277:70–75. [6, 10]
Lorenz, J., H. Rauhut, F. Schweitzer, and D. Helbing. 2011. How Social Influence Can Undermine the Wisdom of Crowd Effect. PNAS 108:9020–9025. [14]
Luban, D. 1999. Contrived Ignorance. Georgetown Law J. 87:957–989. [16]
Ma, I., A. G. Sanfey, and W. J. Ma. 2018. The Cost of Appearing Suspicious? Information Gathering Costs in Trust Decisions. bioRxiv doi 10.1101/495697. [9]
MacCoun, R. J. 1993. Drugs and the Law: A Psychological Analysis of Drug Prohibition. Psychol. Bull. 113:497–512. [4]
———. 1998. Biases in the Interpretation and Use of Research Results. Annu. Rev. Psychol. 49:259–287. [4]
———. 2006. Psychological Constraints on Transparency in Legal and Government Decision Making. Swiss Polit. Sci. Rev. 12:112–123. [4, 15]
MacCoun, R. J., and W. M. Hix. 2010. Unit Cohesion and Military Performance. In: Sexual Orientation and U.S. Military Personnel Policy: An Update of RAND’s 1993 Study. Santa Monica: RAND. [4]
MacCoun, R. J., and S. Perlmutter. 2015. Blind Analysis: Hide Results to Seek the Truth. Nature 526:187–189. [4]
———. 2017. Blind Analysis as a Correction for Confirmatory Bias in Physics and in Psychology. In: Psychological Science under Scrutiny: Recent Challenges and Proposed Solutions, ed. S. O. Lilienfeld and I. D. Waldman, pp. 297–322. Chichester: John Wiley and Sons. [4]
Maclean, A. 2009. Autonomy, Informed Consent and Medical Law: A Relational Challenge. Cambridge Law, Medicine and Ethics. Cambridge: Cambridge Univ. Press. [16]
Malle, B. F., S. Guglielmo, and A. E. Monroe. 2014. A Theory of Blame. Psychol. Inq. 25:147–186. [14]
Malpas, P. 2005. The Right to Remain in Ignorance About Genetic Information: Can Such a Right Be Defended in the Name of Autonomy? N.Z. Med. J. 118:71–78. [12]
Mandava, A., C. Pace, B. Campbell, E. Emanuel, and C. Grady. 2012. The Quality of Informed Consent: Mapping the Landscape. A Review of Empirical Data from Developing and Developed Countries. J. Med. Ethics 38:356–365. [12]

Mannheim, K. 1952. The Problem of Generations. In: Essays on the Sociology of Knowledge, ed. P. Kecskemeti, pp. 276–322. London: Routledge. [2]
Marcowitz, R., and W. Paravicini, eds. 2009. Vergeben und Vergessen? Vergangenheitsdiskurse nach Besatzung, Bürgerkrieg und Revolution [Pardonner et oublier? Les discours sur le passé après l’occupation, la guerre civile et la révolution]. Munich: Oldenbourg Wissenschaftsverlag. [2]
Mariotti, T., N. Schweizer, N. Szech, and J. von Wangenheim. 2018. Information Nudges and Self Control. SSRN Elec. J., May 1, 2018. [10, 11, 14]
Marshall, G. 2014. Don’t Even Think About It: Why Our Brains Are Wired to Ignore Climate Change. New York: Bloomsbury. [1]
Marshall, T. 1992. East German Crimes to Be Probed: Unification: A Special Commission Is Formed after a National Debate on Dealing with the Former Communist State’s History of Oppression. L.A. Times, March 13, 1992. [2]
Mårtensson-Pendrill, A.-M. 2006. The Manhattan Project: A Part of Physics History. Phys. Educ. 41:493. [14]
Martinelli, C. 2006. Would Rational Voters Acquire Costly Information? J. Econ. Theory 129:225–251. [1, 5, 6]
Maslow, A. H. 1963. The Need to Know and the Fear of Knowing. J. Gen. Psychol. 68:111–125. [1]
Mayer-Schönberger, V. 2009. Delete: The Virtue of Forgetting in the Digital Age. Princeton: Princeton Univ. Press. [16]
McAdams, D. 2012. Strategic Ignorance in a Second-Price Auction. Econ. Letters 114:83–85. [1]
McClennen, E. F. 1990. Rationality and Dynamic Choice: Foundational Explorations. Cambridge: Cambridge Univ. Press. [13]
McCright, A. M., and R. E. Dunlap. 2017. Combatting Misinformation Requires Recognizing Its Types and the Factors That Facilitate Its Spread and Resonance. J. Appl. Res. Mem. Cogn. 6:389–396. [7]
McDermott, R. 2019. Psychological Underpinnings of Post-Truth in Political Beliefs. PS Polit. Sci. Polit. 52:218–222. [7]
McElreath, R., R. Boyd, and P. J. Richerson. 2003. Shared Norms Can Lead to the Evolution of Ethnic Markers. Curr. Anthropol. 44:122–129. [5]
McGoey, L. 2012a. The Logic of Strategic Ignorance. Br. J. Sociol. 63:533–576. [1]
———. 2012b. Strategic Unknowns: Towards a Sociology of Ignorance. Econ. Soc. 41:1–16. [1]
McGuire, A. L., and L. M. Beskow. 2010. Informed Consent in Genomics and Genetic Research. Annu. Rev. Genomics Hum. Genet. 11:361–381. [12]
McGuire, A. L., S. Joffe, B. A. Koenig, et al. 2013. Ethics and Genomic Incidental Findings. Science 340:1047–1048. [12]
McNair, A. G. K., F. MacKichan, J. L. Donovan, et al. 2016. What Surgeons Tell Patients and What Patients Want to Know before Major Cancer Surgery: A Qualitative Study. BMC Cancer 16:258–265. [16]
McNair, B. 2017. From Control to Chaos, and Back Again: Journalism and the Politics of Populist Authoritarianism. Journalism Stud. 19:499–511. [7]
McNamara, J. M., Z. Barta, and A. I. Houston. 2004. Variation in Behavior Promotes Cooperation in the Prisoner’s Dilemma Game. Nature 428:745–748. [10]
McNamara, J. M., and S. R. X. Dall. 2010. Information Is a Fitness Enhancing Resource. Oikos 119:231–236. [10]

McNutt, R. A., A. T. Evans, R. H. Fletcher, and S. W. Fletcher. 1990. The Effects of Blinding on the Quality of Peer Review. A Randomized Trial. JAMA 263:1371–1376. [4]
McPherson, J. M., and L. Smith-Lovin. 1987. Homophily in Voluntary Organizations: Status Distance and the Composition of Face-to-Face Groups. Am. Sociol. Rev. 52:370–379. [10]
McPherson, J. M., L. Smith-Lovin, and J. Cook. 2001. Birds of a Feather: Homophily in Social Networks. Annu. Rev. Sociol. 27:415–444. [10]
McVittie, C., and A. McKinlay. 2018. “Alternative Facts Are Not Facts”: Gaffe-Announcements, the Trump Administration and the Media. Disc. Soc. 30:172–187. [7]
Meier, C. 2010. Das Gebot zu vergessen und die Unabweisbarkeit des Erinnerns: vom öffentlichen Umgang mit schlimmer Vergangenheit. Munich: Siedler. [2, 14]
Meinert, C. L. 1998. Masked Monitoring in Clinical Trials: Blind Stupidity? N. Engl. J. Med. 338:1381–1382. [4]
Melnyk, D., and J. A. Shepperd. 2012. Avoiding Risk Information About Breast Cancer. Ann. Behav. Med. 44:216–224. [1]
Mercier, H., and D. Sperber. 2017. The Enigma of Reason. Cambridge, MA: Harvard Univ. Press. [13]
Merenstein, D. 2004. Winners and Losers. JAMA 291:15–16. [10]
Merton, R. K. 1987. Three Fragments from a Sociologist’s Notebooks: Establishing the Phenomenon, Specified Ignorance, and Strategic Research Materials. Annu. Rev. Sociol. 13:1–29. [1]
Messick, D. M., and K. Sentis. 1983. Fairness, Preference, and Fairness Biases. In: Equity Theory: Psychological and Sociological Perspectives, ed. D. M. Messick and K. Cook, pp. 61–94. New York: Praeger Publ. [15]
Middleton, A., K. I. Morley, E. Bragin, et al. 2016. Attitudes of Nearly 7000 Health Professionals, Genomic Researchers and Publics toward the Return of Incidental Results from Sequencing Research. Eur. J. Hum. Genet. 24:21–29. [12]
Mihailidis, P., and S. Viotty. 2017. Spreadable Spectacle in Digital Culture: Civic Expression, Fake News, and the Role of Media Literacies in “Post-Fact” Society. Am. Behav. Sci. 61:441–454. [7]
Miller, B. J., and S. Berger. 2019. Don’t Tell Me When I Am Going to Die. New York Times, June 22, 2019. [17]
Miller, D. T., and D. Kahneman. 1986. Norm Theory: Comparing Reality to Its Alternatives. Psychol. Rev. 93:136–153. [14]
Mills, J. 1965. Avoidance of Dissonant Information. J. Pers. Soc. Psychol. 2:589–593. [17]
Minow, M. 1998. Between Vengeance and Forgiveness: Facing History after Genocide and Mass Violence. Boston: Beacon Press. [2]
Mirrlees, J. 1971. An Exploration in the Theory of Optimum Income Taxation. Rev. Econ. Stud. 38:175–208. [11]
Mitchell, T. M. 1997. Machine Learning. McGraw-Hill Series in Computer Science, C. L. Liu and A. B. Tucker, series eds. Boston: MIT Press and McGraw-Hill. [10]
Mitscherlich, A., and M. Mitscherlich. 2007. Die Unfähigkeit zu Trauern: Grundlagen kollektiven Verhaltens. Munich: Piper. [2]
Modigliani, F. 1986. Life Cycle, Individual Thrift, and the Wealth of Nations. Science 234:704–712. [1]
Möllers, C. 2015. Die Möglichkeit der Normen: Über eine Praxis Jenseits von Moralität und Kausalität. Berlin: Suhrkamp. [14]
Montagu, B., ed. 1841. The Works of Francis Bacon, Lord Chancellor of England. Philadelphia: Carey and Hart. [1]

Moody-Adams, M. M. 1994. Culture, Responsibility, and Affected Ignorance. Ethics 104:291–309. [16]
Moore, W. E., and M. M. Tumin. 1949. Some Social Functions of Ignorance. Am. Sociol. Rev. 14:787–795. [1]
Moss, M. 2013. Salt, Sugar, Fat: How the Food Giants Hooked Us. New York: Random House. [1]
Mullainathan, S., and E. Washington. 2009. Sticking with Your Vote: Cognitive Dissonance and Political Attitudes. Am. Econ. J. App. Econ. 1:86–111. [8]
Müller-Enbergs, H., ed. 1998. Inoffizielle Mitarbeiter des Ministeriums für Staatssicherheit: Richtlinien und Durchführungsbestimmungen [Unofficial Employees of the Ministry of State Security]. Berlin: Christoph Links Verlag. [2]
Myerson, R. B., and M. A. Satterthwaite. 1983. Efficient Mechanisms for Bilateral Trading. J. Econ. Theory 29:265–281. [11]
Nickerson, R. S. 1998. Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Rev. Gen. Psychol. 2:175–220. [14, 16, 17]
Niethammer, L. 1982. Die Mitläuferfabrik: Die Entnazifizierung am Beispiel Bayerns. Berlin: Dietz. [2]
Nipperdey, T. 1980. Geschichte als Aufklärung [History as Enlightenment]. Die Zeit, February 22, 1980. [17]
Nisbett, R. E., and L. Ross. 1980. Human Inference: Strategies and Shortcomings of Social Judgment. Century Psychology Series. Englewood Cliffs: Prentice-Hall. [14]
Nørby, S. 2015. Why Forget? On the Adaptive Value of Memory Loss. Persp. Psychol. Sci. 10:551–578. [1, 6]
North, D. C. 1990. Institutions, Institutional Change, and Economic Performance. New York: Cambridge Univ. Press. [17]
Nowak, G., and P. Thagard. 1992. Copernicus, Ptolemy, and Explanatory Coherence. Minn. Stud. Phil. Sci. 15:274–309. [8]
Nyborg, K. 2011. I Don’t Want to Hear About It: Rational Ignorance among Duty-Oriented Consumers. J. Econ. Behav. Org. 79:263–274. [1]
Nyhan, B., and J. Reifler. 2010. When Corrections Fail: The Persistence of Political Misperceptions. Polit. Behav. 32:303–330. [7]
Nyholt, D. R., C. E. Yu, and P. M. Visscher. 2009. On Jim Watson’s APOE Status: Genetic Information Is Hard to Hide. Eur. J. Hum. Genet. 17:147–149. [1]
Nzelibe, J. 2011. Partisan Conflicts over Presidential Authority. William Mary Law Rev. 53:389–430. [16]
Oaksford, M., and N. Chater. 1994. A Rational Analysis of the Selection Task as Optimal Data Selection. Psychol. Rev. 101:608–631. [8]
———. 2001. The Probabilistic Approach to Human Reasoning. Trend. Cogn. Sci. 5:349–357. [5]
———. 2007. Bayesian Rationality: The Probabilistic Approach to Human Reasoning. New York: Oxford Univ. Press. [14]
O’Donoghue, T., and M. Rabin. 2006. Optimal Sin Taxes. J. Public Econ. 90:1825–1849. [11]
Okike, K., K. T. Hug, M. S. Kocher, and S. S. Leopold. 2016. Single-Blind versus Double-Blind Peer Review in the Setting of Author Prestige. JAMA 316:1315–1316. [4]
Oreskes, N., and E. M. Conway. 2010. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. London: Bloomsbury Academic. [7]

Ortmann, A., G. Gigerenzer, B. Borges, and D. G. Goldstein. 2008. The Recognition Heuristic: A Fast and Frugal Way to Investment Choice? In: Handbook of Experimental Economics Results, ed. C. R. Plott and V. L. Smith, pp. 993–1003, Handbooks in Economics, vol. 1. Amsterdam: North-Holland. [10]
Ost, D. E. 1984. The “Right” Not to Know. J. Med. Phil. 9:301–312. [12, 16]
Oster, E., I. Shoulson, and E. R. Dorsey. 2013. Optimal Expectations and Limited Medical Testing: Evidence from Huntington Disease. Am. Econ. Rev. 103:804–830. [5, 10, 14]
Ott, B. L. 2017. The Age of Twitter: Donald J. Trump and the Politics of Debasement. Crit. Stud. Media Commun. 34:59–68. [7]
Owens, K. 2017. Too Much of a Good Thing? American Childbirth, Intentional Ignorance, and the Boundaries of Responsible Knowledge. Sci. Technol. Hum. Values 42:848–871. [5]
Page, S. E. 2008. The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton: Princeton Univ. Press. [14]
Pagel, M. 2018. A News-Utility Theory for Inattention and Delegation in Portfolio Choice. Econometrica 86:491–522. [8]
Pager, D., and H. Shepherd. 2008. The Sociology of Discrimination: Racial Discrimination in Employment, Housing, Credit, and Consumer Markets. Annu. Rev. Sociol. 34:181–209. [4]
Paldam, M., and P. Nannestad. 2000. What Do Voters Know About the Economy? A Study of Danish Data, 1990–1993. Elect. Stud. 19:363–391. [14]
Parfit, D., ed. 2013/2017. On What Matters, The Berkeley Tanner Lectures, vols. 1–3, S. Scheffler, series ed. Oxford: Oxford Univ. Press. [14]
Pariser, E. 2011. The Filter Bubble: What the Internet Is Hiding from You. London: Penguin. [15]
Parker, E. S., L. Cahill, and J. L. McGaugh. 2006. A Case of Unusual Autobiographical Remembering. Neurocase 12:35–49. [6]
Passoth, J.-H., B. Peuker, and M. Schillmeier, eds. 2012. Agency without Actors? New Approaches to Collective Action, Routledge Advances in Sociology. London: Routledge. [14]
Paul, C., and M. Matthews. 2016. The Russian “Firehose of Falsehood” Propaganda Model: Why It Might Work and Actions to Counter It. RAND. https://www.rand.org/content/dam/rand/pubs/perspectives/PE100/PE198/RAND_PE198.pdf (accessed Aug. 27, 2019). [7]
Paul, L. A. 2014. Transformative Experience. Oxford: Oxford Univ. Press. [14]
Payne, J. W., J. R. Bettman, and E. J. Johnson. 1993. The Adaptive Decision Maker. New York: Cambridge Univ. Press. [1]
Pearl, J. 2000. Causality: Models, Reasoning, and Inference. Cambridge: Cambridge Univ. Press. [4]
Peck, C. J. 1984. A New Tort Liability for Lack of Informed Consent in Legal Matters. Louisiana Law Rev. 44:1289–1307. [16]
Pedersen, A. P., and G. Wheeler. 2013. Demystifying Dilation. Erkenntnis 79:1305–1342. [1]
Pennycook, G., and D. G. Rand. 2019. Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than by Motivated Reasoning. Cognition 188:39–50. [7]
Pérez-Escudero, A., J. Friedman, and J. Gore. 2016. Preferential Interactions Promote Blind Cooperation and Informed Defection. PNAS 113:13995–14000. [9]

Petchesky, B. 2013. Peyton Manning’s Naked Bootleg Is Football’s Most Unstoppable Weapon. July 10. https://deadspin.com/peyton-mannings-naked-bootleg-is-footballs-most-unsto-1441974629. (accessed Jan. 24, 2020). [3]
Peters, M. A. 2018. The Return of Fascism: Youth, Violence and Nationalism. Educ. Phil. Theory 51:674–678. [7]
Peters, S. A., S. M. Laham, N. Pachter, and I. M. Winship. 2014. The Future in Clinical Genetics: Affective Forecasting Biases in Patient and Clinician Decision Making. Clin. Genet. 85:312–317. [12]
Pettit, P. 2003. Weakness of the Will and Practical Irrationality, S. Stroud and C. Tappolet, series eds. New York: Oxford Univ. Press. [14]
Pfattheicher, S., and S. Schindler. 2016. Misperceiving Bullshit as Profound Is Associated with Favorable Views of Cruz, Rubio, Trump and Conservatism. PLoS ONE 11:e0153419. [7]
Piketty, T., and E. Saez. 2013. A Theory of Optimal Inheritance Taxation. Econometrica 81:1851–1886. [11]
Plate, L. 2015. Amnesiology: Towards the Study of Cultural Oblivion. Mem. Stud. 9:143–155. [2]
Plaut, V. C., K. M. Thomas, K. Hurd, and C. A. Romano. 2018. Do Color Blindness and Multiculturalism Remedy or Foster Discrimination and Racism? Curr. Dir. Psychol. Sci. 27:200–206. [4]
Popitz, H. 1980. Die normative Konstruktion von Gesellschaft. Tübingen: Mohr Siebeck. [14]
———. 2016. Über die Präventivwirkung des Nichtwissens: Dunkelziffer, Norm und Strafe [On the Preventive Effect of Ignorance: Unrecorded Figures, Norm, and Punishment]. In: Kriminologische Grundlagentexte, ed. D. Klimke and A. Legnaro. Wiesbaden: Springer. [2]
Porat, A., and O. Yadlin. 2006. Promoting Consensus in Society through Deferred-Implementation Agreements. Univ. Toronto Law J. 56:151–179. [16]
Porter, T. M. 2005. Introduction: Historicizing the Two Cultures. Hist. Sci. 43:109–114. [14]
Posner, R. A. 2014. Economic Analysis of Law, Ninth Edition. Aspen Casebook Series. New York: Wolters Kluwer Law and Business. [16]
Postmes, T., and R. Spears. 1998. Deindividuation and Antinormative Behavior: A Meta-Analysis. Psychol. Bull. 123:238–259. [4]
Poulsen, A., and M. Roos. 2010. Do People Make Strategic Commitments? Experimental Evidence on Strategic Information Avoidance. Exp. Econ. 13:206–225. [3]
Prager, J., J. I. Krueger, and K. Fiedler. 2018. Towards a Deeper Understanding of Impression Formation: New Insights Gained from a Cognitive-Ecological Perspective. J. Pers. Soc. Psychol. 115:379–397. [14]
Prasad, M., A. J. Perrin, K. Bezila, et al. 2009. “There Must Be a Reason”: Osama, Saddam, and Inferred Justification. Sociol. Inq. 79:142–162. [7]
Presidential Commission for the Study of Bioethical Issues. 2013. Anticipate and Communicate: Ethical Management of Incidental and Secondary Findings in the Clinical, Research, and Direct-to-Consumer Contexts. Washington, D.C.: GPO.
Prier, J. 2017. Commanding the Trend: Social Media as Information Warfare. Strateg. Stud. Q. 11:50–85. [7]
Proctor, R. N. 2008. Agnotology: A Missing Term to Describe the Cultural Production of Ignorance (and Its Study). In: Agnotology: The Making and Unmaking of Ignorance, ed. R. N. Proctor and L. Schiebinger. Stanford: Stanford Univ. Press. [7]
———. Golden Holocaust: Origins of the Cigarette Catastrophe and the Case for Abolition. Berkeley: Univ. of California Press. [7]

Proctor, R. N., and L. L. Schiebinger, eds. 2008. Agnotology: The Making and Unmaking of Ignorance. Stanford: Stanford Univ. Press. [1, 2, 14]
Przeworski, A. 2016. Democracy: A Never-Ending Quest. Annu. Rev. Polit. Sci. 19:1–12. [7]
Quandt, T. 2012. What’s Left of Trust in a Network Society? An Evolutionary Model and Critical Discussion of Trust and Societal Communication. Eur. J. Commun. 27:7–21. [7]
Raab, E. L. 2004. The Parameters of Informed Consent. Trans. Am. Ophthalmol. Soc. 102:225–232. [16]
Raab, M. H., N. Auer, S. A. Ortlieb, and C. C. Carbon. 2013. The Sarrazin Effect: The Presence of Absurd Statements in Conspiracy Theories Makes Canonical Information Less Plausible. Front. Psychol. 18:453. [7]
Rabin, M. 1994. Cognitive Dissonance and Social Change. J. Econ. Behav. Org. 23:177–194. [8]
Rachlinski, J. J., C. Guthrie, and A. J. Wistrich. 2011. Probable Cause, Probability, and Hindsight. J. Emp. Legal Stud. 8:72–98. [15]
Ramsey, F. P. 1927. A Contribution to the Theory of Taxation. Econ. J. 37:47–61. [11]
Rand, D. G., J. D. Greene, and M. A. Nowak. 2012. Spontaneous Giving and Calculated Greed. Nature 489:427–430. [9]
Rapoport, A., and A. M. Chammah. 1966. The Game of Chicken. Am. Behav. Sci. 10:10–28. [14]
Rappaport, R. A. 1979. Ecology, Meaning, and Religion, vol. 9. Berkeley: North Atlantic Books. [5]
Raskin, J. D., and A. E. Debany. 2018. The Inescapability of Ethics and the Impossibility of “Anything Goes”: A Constructivist Model of Ethical Meaning Making. J. Constructiv. Psychol. 31:343–360. [7]
Rathenow, L. 1990. Die Zeit heilt gar nichts: Vom Umgang mit den Stasi-Akten. Bl. Dtsch. Intl. Polit. 12:1461–1468. [2]
Rawls, J. 1971. A Theory of Justice. Cambridge, MA: Harvard Univ. Press. [13]
———. 1979. A Theory of Justice (Rev. ed.). Cambridge, MA: Harvard Univ. Press. [16]
———. 1999. A Theory of Justice (Rev. ed.). Cambridge, MA: Harvard Univ. Press. [1]
Read, S. J., E. J. Vanman, and L. C. Miller. 1997. Connectionism, Parallel Constraint Satisfaction Processes, and Gestalt Principles: (Re)Introducing Cognitive Dynamics to Social Psychology. Pers. Soc. Psychol. Rev. 1:26–53. [8]
Regan, P. L. 1999. Great Expectations: A Contract Law Analysis for Preclusive Corporate Lock-Ups. Cardozo Law Rev. 21:1–119. [16]
Reich, T., and Z. L. Tormala. 2013. When Contradictions Foster Persuasion: An Attributional Perspective. J. Exp. Soc. Psychol. 49:426–439. [7]
Renwick, A., S. Allan, W. Jennings, et al. 2018. What Kind of Brexit Do Voters Want? Lessons from the Citizens’ Assembly on Brexit. Polit. Q. 89:649–658. [7]
Rey-Mermet, A., and M. Gade. 2018. Inhibition in Aging: What Is Preserved? What Declines? A Meta-Analysis. Psychonom. Bull. Rev. 25:1695–1716. [5]
Rhodes, R. 1988. Genetic Links, Family Ties, and Social Bonds: Rights and Responsibilities in the Face of Genetic Knowledge. J. Med. Phil. 23:10–30. [12]
Rice, C. 2015. Moving Beyond Causes: Optimality Models and Scientific Explanation. Noûs 49:589–615. [13]
———. 2018. Idealized Models, Holistic Distortions, and Universality. Synthese 195:2795–2819. [13]
Rice, C. M. 2013. Defending the Objective List Theory of Well-Being. Ratio 26:196–211. [14]

Richey, M. 2018. Contemporary Russian Revisionism: Understanding the Kremlin’s Hybrid Warfare and the Strategic and Tactical Deployment of Disinformation. Asia Eur. J. 16:101–113. [7]
Ricoeur, P. 2006. Memory, History, Forgetting (transl. K. Blamey and D. Pellauer). Chicago: Univ. of Chicago Press. [2]
Rieff, D. 2017. In Praise of Forgetting: Historical Memory and Its Ironies. New Haven: Yale Univ. Press. [2]
Ritov, I., and J. Baron. 1995. Outcome Knowledge, Regret, and Omission Bias. Org. Behav. Hum. Dec. Process. 64:119–127. [16]
Ritov, I., and E. Zamir. 2014. Affirmative Action and Other Group Tradeoff Policies: Identifiability of Those Adversely Affected. Org. Behav. Hum. Dec. Process. 125:50–60. [15]
Robbins, I. P. 1990. The Ostrich Instruction: Deliberate Ignorance as a Criminal Mens Rea. J. Crim. L. Criminol. 81:191–234. [1, 16, 17]
Roberts, M. E. 2018. Censored: Distraction and Diversion inside China’s Great Firewall. Princeton: Princeton Univ. Press. [14]
Robertson, C. T., and A. S. Kesselheim, eds. 2016. Blinding as a Solution to Bias: Strengthening Biomedical Science, Forensic Science, and Law. London: Elsevier. [4, 16]
Robinson, P. 2017. Learning from the Chilcot Report: Propaganda, Deception and the “War on Terror.” Intl. J. Contemp. Iraqi Stud. 11:47–73. [7]
Roesler, A.-K., and B. Szentes. 2017. Buyer-Optimal Learning and Monopoly Pricing. Am. Econ. Rev. 107:2072–2080. [3]
Rogerson, W. P. 1992. Contractual Solutions to the Hold-up Problem. Rev. Econ. Stud. 59:777–793. [3]
Roiphe, R. 2011. The Ethics of Willful Ignorance. Georgetown J. Leg. Ethics 24:187–224. [1, 16]
Romano, A. 2018. Twitter Released 9 Million Tweets from One Russian Troll Farm: Here’s What We Learned. Vox. https://www.vox.com/2018/10/19/17990946/twitter-russian-trolls-bots-election-tampering. (accessed Aug. 27, 2019). [7]
Rosch, E. H. 1973. Natural Categories. Cogn. Psychol. 4:328–350. [5]
Rosen, J. 2012. The Right to Be Forgotten. Stanford Law Rev. Online 64:88–92. [16]
Rosenbaum, L. 2015. The Paternalism Preference: Choosing Unshared Decision Making. N. Engl. J. Med. 373:589–592. [1]
Ross, A. S., and D. J. Rivers. 2018. Discursive Deflection: Accusation of “Fake News” and the Spread of Mis- and Disinformation in the Tweets of President Trump. Soc. Media Society 4. [7]
Ross, W. D., ed. 1924. Aristotle’s Metaphysics. Oxford: Clarendon Press. [1]
Roth, A. E. 2007. Repugnance as a Constraint on Markets. J. Econ. Perspect. 21:37–58. [8]
Roth, M. S. 2011. Memory, Trauma, and History: Essays on Living with the Past. New York: Columbia Univ. Press. [2]
Rothberg, M. 2009. Multidirectional Memory: Remembering the Holocaust in the Age of Decolonization. Cultural Memory in the Present, M. Bal and H. de Vries, series eds. Stanford: Stanford Univ. Press. [2]
Rothenhäusler, D., N. Schweizer, and N. Szech. 2018. Guilt in Voting and Public Good Games. Eur. Econ. Rev. 101:664–681. [14]
Rothschild, M., and J. Stiglitz. 1976. Equilibrium in Competitive Insurance Markets: An Essay on the Economics of Imperfect Information. Q. J. Econ. 90:629–649. [3]
Rothstein, M. A. 2008. GINA, the ADA, and Genetic Discrimination in Employment. J. L. Med. Ethics 36:837–840. [12]

Rozenblit, L., and F. Keil. 2002. The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth. Cogn. Sci. 26:521–562. [14]
Ruby, R. 1999. Apprehending the Weapon Within: The Case for Criminalizing the Intentional Transmission of HIV. Am. Crim. Law Rev. 36:313–335. [16]
Ryle, G. 1946. Knowing How and Knowing That: The Presidential Address. Proc. Aristot. Soc. 46:1–16. [13]
Saez, E. 2001. Using Elasticities to Derive Optimal Income Tax Rates. Rev. Econ. Stud. 68:205–229. [11]
Sahdra, B., and P. Thagard. 2003. Self-Deception and Emotional Coherence. Minds Mach. 13:213–231. [8]
Santayana, G. 2011. The Life of Reason or the Phases of Human Progress: Introduction and Reason in Common Sense. The Works of George Santayana, M. S. Wokeck and M. A. Coleman, series eds. Cambridge: MIT Press. [2]
Sarch, A. F. 2014. Willful Ignorance, Culpability, and the Criminal Law. St. John’s Law Rev. 88:1023–1102. [1, 16]
Sargent, G. 2015. The Plum Line: Who Is the “Authenticity” Candidate of 2016? Yup: It’s Donald Trump. Washington Post, Dec. 11, 2015. [7]
Savage, L. J. 1972. The Foundations of Statistics. Mineola: Dover Publications. [14]
Schädlich, H. J., ed. 1993. Aktenkundig. Reinbek bei Hamburg: Rowohlt. [2]
Schäfers, B. 2010. Soziales Handeln und seine Grundlagen: Normen, Werte, Sinn. In: Einführung in Hauptbegriffe der Soziologie, ed. H. Korte and B. Schäfers, pp. 23–44, Einführungskurs Soziologie, H. Korte and B. Schäfers, series eds. Weinheim: VS Verlag für Sozialwissenschaften. [14]
Schaffner, B. F., and S. Luks. 2018. Misinformation or Expressive Responding? What an Inauguration Crowd Can Tell Us About the Source of Political Misinformation in Surveys. Pub. Opin. Q. 82:135–147. [7]
Schelling, T. C. 1956. An Essay on Bargaining. Am. Econ. Rev. 46:281–306. [1, 3, 5, 14]
———. 1960. The Strategy of Conflict. Cambridge, MA: Harvard Univ. Press. [9]
Schlüter, K. 2012. Günter Grass im Visier: Die Stasi-Akte. Berlin: Christoph Links Verlag. [2]
Schmidt, A. L., F. Zollo, M. Del Vicario, et al. 2017. Anatomy of News Consumption on Facebook. PNAS 114:3035–3039. [7]
Schneider, C. E. 1998. The Practice of Autonomy: Patients, Doctors, and Medical Decisions. New York: Oxford Univ. Press. [16]
Schneider, L. 1962. The Role of the Category of Ignorance in Sociological Theory: An Exploratory Statement. Am. Sociol. Rev. 27:492–508. [1]
Schnibben, C. 2014. Mein Vater, ein Werwolf [My Father, a Werewolf]. Der Spiegel, April 14. [2]
Schooler, L. J., and J. R. Anderson. 2017. The Adaptive Nature of Memory. In: Learning and Memory: A Comprehensive Reference, ed. J. H. Byrne, pp. 265–278. Oxford: Elsevier. [6]
Schooler, L. J., and R. Hertwig. 2005. How Forgetting Aids Heuristic Inference. Psychol. Rev. 112:610–628. [1, 6, 10]
Schumann, S. 1997. Vernichten oder Offenlegen? Zur Entstehung des Stasi-Unterlagen-Gesetzes: Eine Dokumentation der öffentlichen Debatte 1990/1991. Berlin: Der Bundesbeauftragte für die Unterlagen des Staatssicherheitsdienstes der Ehem. Dt. Demokratischen Republik.
Schurz, G., and R. Hertwig. 2019. Cognitive Success: A Consequentialist Account of Rationality in Cognition. Top. Cogn. Sci. 11:7–36. [13]

Schütz, A. 1993. Der sinnhafte Aufbau der Sozialen Welt: Eine Einleitung in die verstehende Soziologie. Frankfurt: Suhrkamp. [14]
Schwartz, B. 2004. The Paradox of Choice. New York: HarperCollins. [5]
Schwartz, B., and R. Sommers. 2013. Affective Forecasting and Well-Being. In: Oxford Handbook of Cognitive Psychology, ed. D. Reisberg, pp. 704–716, Oxford Library of Psychology, P. E. Nathan, series ed. New York: Oxford Univ. Press. [5]
Schweizer, N., and N. Szech. 2018. Optimal Revelation of Life-Changing Information. Manage. Sci. 64:5250–5262. [8, 10, 14]
Seagren, C. W., and D. R. Henderson. 2018. Why We Fight: A Study of US Government War-Making Propaganda. Ind. Rev. 23:69–90. [7]
Seeley, T. D. 2001. Decision Making in Superorganisms: How Collective Wisdom Arises from the Poorly Informed Masses. In: Bounded Rationality: The Adaptive Toolbox, ed. G. Gigerenzer and R. Selten, pp. 249–262, Dahlem Workshop Reports, J. Lupp, series ed. Cambridge, MA: MIT Press. [13]
Selby, W. G. 2018. Donald Trump Says People Went out on Their Boats to Watch Hurricane Harvey. Politifact, June 7, 2018. [7]
Sen, A. 1970. The Impossibility of a Paretian Liberal. J. Polit. Economy 78:152–157. [11]
———. 1987. On Ethics and Economics. The Royer Lectures, J. M. Letiche, series ed. Malden, MA: Blackwell. [14]
Serra-Garcia, M., and N. Szech. 2019. The (in)Elasticity of Moral Ignorance. KIT Working Pap. Series Econ. 120, Dec. 2018. [8, 10, 14]
Serwe, S., and C. Frings. 2006. Who Will Win Wimbledon? The Recognition Heuristic in Predicting Sports Events. J. Behav. Decis. Mak. 19:321–322. [10]
Shani, Y., N. van de Ven, and M. Zeelenberg. 2012. Delaying Information Search. Judgment Dec. Making 7:750–760. [1]
Shen, L., A. Fishbach, and C. K. Hsee. 2015. The Motivating-Uncertainty Effect: Uncertainty Increases Resource Investment in the Process of Reward Pursuit. J. Consum. Res. 41:1301–1315. [1]
Siano, A., A. Vollero, F. Conte, and S. Amabile. 2017. “More Than Words”: Expanding the Taxonomy of Greenwashing after the Volkswagen Scandal. J. Bus. Res. 71:27–37. [7]
Sicherman, N., G. Loewenstein, D. J. Seppi, and S. P. Utkus. 2016. Financial Attention. Rev. Finan. Stud. 29:863–897. [5]
Sides, J., and D. J. Hopkins, eds. 2015. Political Polarization in American Politics. London: Bloomsbury Academic. [17]
Silvia, P. J. 2008. Interest: The Curious Emotion. Curr. Dir. Psychol. Sci. 17:57–60. [1]
Simmel, G. 1906. The Sociology of Secrecy and of the Secret Societies. Am. J. Sociol. 11:441–498. [14]
———. 1908/1992. Das Geheimnis und die geheime Gesellschaft. In: Soziologie, pp. 383–431. Frankfurt: Suhrkamp. [2]
Simmons, J. P., L. D. Nelson, and U. Simonsohn. 2011. False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychol. Sci. 22:1359–1366. [14]
Simon, D. 2012. In Doubt: The Psychology of the Criminal Justice Process. Cambridge, MA: Harvard Univ. Press. [16]
Simon, H. A. 1971. Designing Organizations for an Information-Rich World. In: Computers, Communication and the Public Interest, ed. M. Greenberger, pp. 31–72. Baltimore: Johns Hopkins Press. [14]
———. 1997. Models of Bounded Rationality: Empirically Grounded Economic Reason, vol. 3. Cambridge, MA: MIT Press. [14]

Simonovich, I. 2004. Attitudes and Types of Reaction toward Past War Crimes and Human Rights Abuses. Yale J. Intl. Law 29:343–361. [16]
Sims, C. A. 2003. Implications of Rational Inattention. J. Monet. Econ. 50:665–690. [5, 8]
Sloman, S., and P. Fernbach. 2017. The Knowledge Illusion: Why We Never Think Alone. New York: Riverhead Books. [5, 13]
Sloof, R., H. Oosterbeek, and J. Sonnemans. 2007. Does Making Specific Investments Unobservable Boost Investment Incentives? J. Econ. Manage. Strategy 16:911–942. [3]
Spence, M. 1973. Job Market Signaling. Q. J. Econ. 87:355–374. [10]
Spiegelhalter, D. 2012. Using Speed of Ageing and “Microlives” to Communicate the Effects of Lifetime Habits and Environment. BMJ 345:e8223. [14]
Spiliopoulos, L., and R. Hertwig. 2019. A Map of Ecologically Rational Heuristics for Uncertain Strategic Worlds. Advance Online Publication. https://doi.org/10.1037/rev0000171 (accessed Jan. 31, 2020). [17]
Sprenger, C. 2015. Judging Experimental Evidence on Dynamic Inconsistency. Am. Econ. Rev. 105:280–285. [14]
Stacey, T. 2018. Beyond Populist Politics: Why Conventional Politics Needs to Conjure Myths of Its Own and Why It Struggles to Do So. Glob. Disc. 8:573–588. [7]
Stanovich, K. 2010. Decision Making and Rationality in the Modern World. Fundamentals of Cognition. New York: Oxford Univ. Press. [13]
Stanovich, K., and R. F. West. 2000. Individual Differences in Reasoning: Implications for the Rationality Debate? Behav. Brain Sci. 23:645–665. [13]
Staub, E. 2014. The Challenging Road to Reconciliation in Rwanda: Societal Processes, Interventions and Their Evaluation. J. Soc. Polit. Psychol. 2:505–517. [14]
Staub, E., L. A. Pearlman, A. Gubin, and A. Hagengimana. 2005. Healing, Reconciliation, Forgiving and the Prevention of Violence after Genocide or Mass Killing: An Intervention and Its Experimental Evaluation in Rwanda. J. Soc. Clin. Psychol. 24:297–334. [14]
Steblay, N., H. M. Hosch, S. E. Culhane, and A. McWethy. 2006. The Impact on Juror Verdicts of Judicial Instruction to Disregard Inadmissible Evidence: A Meta-Analysis. Law Hum. Behav. 30:469–492. [16]
Steblay, N., and E. F. Loftus. 2013. Eyewitness Identification and the Legal System. In: The Behavioral Foundations of Public Policy, ed. E. Shafir, pp. 145–162. Princeton: Princeton Univ. Press. [16]
Sterling, J., J. T. Jost, and G. Pennycook. 2016. Are Neoliberals More Susceptible to Bullshit? Judgment Dec. Making 11:352–360. [7]
Stigler, G. J. 1961. The Economics of Information. J. Polit. Economy 69:213–225. [1, 5, 6, 9]
Strevens, M. 2019. The Structure of Asymptotic Idealization. Synthese 196:1713–1731. [13]
Studdert, D. M., and T. A. Brennan. 2001. No-Fault Compensation for Medical Injuries. JAMA 286:217–223. [16]
Suiter, J., E. Culloty, D. Greene, and E. Siapera. 2018. Hybrid Media and Populist Currents in Ireland’s 2016 General Election. Eur. J. Commun. 33:396–412. [7]
Suter, R. S., T. Pachur, and R. Hertwig. 2016. How Affect Shapes Risky Choice: Distorted Probability Weighting versus Probability Neglect. J. Behav. Decis. Mak. 29:437–449. [1]
Suter, R. S., T. Pachur, R. Hertwig, T. Endestad, and G. Biele. 2015. The Neural Basis of Risky Choice with Affective Outcomes. PLoS ONE 10:e0122475. [1]
Sweeny, K., D. Melnyk, W. Miller, and J. A. Shepperd. 2010. Information Avoidance: Who, What, When, and Why. Rev. Gen. Psychol. 14:340–353. [1, 17]

Swire, B., A. J. Berinsky, S. Lewandowsky, and U. K. H. Ecker. 2017. Processing Political Misinformation: Comprehending the Trump Phenomenon. Royal Soc. Open Sci. 4:160802. [7]
Swire-Thompson, B., U. K. H. Ecker, S. Lewandowsky, and A. Berinsky. 2019. They Might Be a Liar but They’re My Liar: Source Evaluation and the Prevalence of Misinformation. Polit. Psychol. 41:21–34. [7]
Takala, T. 1999. The Right to Genetic Ignorance Confirmed. Bioethics 13:288–293. [12]
———. 2001. Genetic Ignorance and Reasonable Paternalism. Theor. Med. Bioeth. 22:485–491. [12]
Tegmark, M. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf. [14]
Tetlock, P. E. 2002. Social Functionalist Frameworks for Judgment and Choice: Intuitive Politicians, Theologians, and Prosecutors. Psychol. Rev. 109:451–471. [14]
Thagard, P. 1989. Explanatory Coherence. Behav. Brain Sci. 12:435–467. [8]
———. 2006. Hot Thought: Mechanisms and Applications of Emotional Cognition. Cambridge, MA: MIT Press. [8]
Thaler, R. H., and C. R. Sunstein. 2008. Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven: Yale Univ. Press. [11]
The Economist. 2015. Naked Capitalism. https://www.economist.com/international/2015/09/26/naked-capitalism. (accessed Jan. 13, 2020). [1]
Theye, K., and S. Melling. 2018. Total Losers and Bad Hombres: The Political Incorrectness and Perceived Authenticity of Donald J. Trump. S. Commun. J. 83:322–337. [7]
Thomas, O. D. 2017. Good Faith and (Dis)Honest Mistakes? Learning from Britain’s Iraq War Inquiry. Politics 37:371–385. [7]
Thunström, L., J. Nordström, J. F. Shogren, M. Ehmke, and K. van’t Veld. 2016. Strategic Self-Ignorance. J. Risk Uncertainty 52:117–136. [3]
Thunström, L., K. F. van’t Veld, J. Shogren, and J. Nordström. 2014. On Strategic Ignorance of Environmental Harm and Social Norms. Rev. Econ. Politique 124:195–214. [1]
Tinbergen, N. 1963. On Aims and Methods in Ethology. Z. Tierpsychol. 20:410–433. [10]
Tismaneanu, V., and B. C. Iacob, eds. 2015. Remembrance, History, and Justice: Coming to Terms with Traumatic Pasts in Democratic Societies. Budapest: Central European Univ. Press. [2]
Todd, P. M., G. Gigerenzer, and ABC Research Group. 2012. Ecological Rationality: Intelligence in the World. New York: Oxford Univ. Press. [1]
Tomkins, A., M. Zhang, and W. D. Heavlin. 2017. Reviewer Bias in Single versus Double-Blind Peer Review. PNAS 114:12708–12713. [4]
Townley, C. 2011. A Defense of Ignorance: Its Value for Knowers and Roles in Feminist and Social Epistemologies. Lanham, MD: Lexington Books. [5]
Traulsen, A., and C. Hauert. 2009. Stochastic Evolutionary Game Dynamics. In: Reviews of Nonlinear Dynamics and Complexity, ed. H. G. Schuster, pp. 25–61, vol. 2. Weinheim: Wiley. [9]
Trimmer, P. C., and A. I. Houston. 2014. An Evolutionary Perspective on Information Processing. Top. Cogn. Sci. 6:312–330. [10]
Trinidad, S. B., S. M. Fullerton, and W. Burke. 2015. Looking for Trouble and Finding It. Am. J. Bioethics 15:15–17. [12]
Trivers, R. 2000. The Elements of a Scientific Theory of Self-Deception. Ann. N. Y. Acad. Sci. 907:114–131. [6]
———. 2011a. Deceit and Self-Deception: Fooling Yourself the Better to Fool Others. London: Penguin. [6, 9, 10]

Trivers, R. 2011b. The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life. New York: Basic Books. [3]
Trouillot, M.-R. 1995. Silencing the Past: Power and the Production of History. Boston: Beacon Press. [2, 13]
Tsay, C.-J. 2013. Sight over Sound in the Judgment of Music Performance. PNAS 110:14580–14585. [4, 17]
Turner, C. 2009. The Burden of Knowledge. Georgia Law Rev. 43:297. [16]
Turvey, M. T. 1977. Preliminaries to a Theory of Action with Reference to Vision. In: Perceiving, Acting, and Knowing. Toward an Ecological Psychology, ed. R. Shaw and J. Bransford, pp. 211–266, Psychology Library Editions. Oxford: Erlbaum. [15]
Tversky, A., and D. Kahneman. 1971. Belief in the Law of Small Numbers. Psychol. Bull. 76:105–110. [14]
Ullmann-Margalit, E. 2000. On Not Wanting to Know. In: Reasoning Practically, ed. E. Ullmann-Margalit, pp. 72–84. New York: Oxford Univ. Press. [1]
Ulrich, C. M., and C. Grady, eds. 2018. Moral Distress in the Health Professions. Cham: Springer. [12]
Ungar, S. 2008. Ignorance as an Under-Identified Social Problem. Br. J. Sociol. 59:301–326. [1]
van der Linden, S. L., C. E. Clarke, and E. W. Maibach. 2015. Highlighting Consensus Among Medical Scientists Increases Public Support for Vaccines: Evidence from a Randomized Experiment. BMC Public Health 15:1207. [7]
Van Der Vyver, J. D. 2004. The International Criminal Court and the Concept of Mens Rea in International Criminal Law. Univ. Miami Intl. Comp. Law Rev. 12:57–149. [16]
Van der Weele, J. J. 2012. When Ignorance Is Innocence: On Information Avoidance in Moral Dilemmas. https://www.tse-fr.eu/sites/default/files/medias/stories/SEMIN_11_12/IAST/paper%20joelvdw.pdf (accessed Feb. 20, 2020). [3]
Van Dijk, E., and M. Zeelenberg. 2007. When Curiosity Killed Regret: Avoiding or Seeking the Unknown in Decision-Making under Uncertainty. J. Exp. Soc. Psychol. 43:656–662. [1]
Van Ness, D. W., and K. H. Strong. 2015. Restoring Justice: An Introduction to Restorative Justice, Fifth Edition. New York: Routledge. [16]
Varshizky, A. 2012. Alfred Rosenberg: The Nazi Weltanschauung as Modern Gnosis. Polit. Relig. Ideol. 13:311–331. [7]
Vascik, G. S., and M. R. Sadler, eds. 2016. The Stab-in-the-Back Myth and the Fall of the Weimar Republic. London: Bloomsbury Academic. [2]
Vermeule, A. 2001. Veil of Ignorance Rules in Constitutional Law. Yale Law J. 111:399–433. [16]
Vitriol, J. A., and J. K. Marsh. 2018. The Illusion of Explanatory Depth and Endorsement of Conspiracy Beliefs. Eur. J. Soc. Psychol. 48:955–969. [14]
Voegelin, E. 2000. The Political Religions. In: The Collected Works of Eric Voegelin: Modernity without Restraint: The Political Religions; the New Science of Politics; and Science, Politics, and Gnosticism, ed. M. Henningsen, pp. 21–73, vol. 5. Columbia: Univ. of Missouri Press. [7]
Volk, M. L., and P. A. Ubel. 2011. Better Off Not Knowing: Improving Clinical Care by Limiting Physician Access to Unsolicited Diagnostic Information. Arch. Int. Med. 171:487–488. [1]
von Hippel, W., and R. Trivers. 2011. The Evolution and Psychology of Self-Deception. Behav. Brain Sci. 34:1–16. [6]
von Hodenberg, C. 2018. Das andere Achtundsechzig: Gesellschaftsgeschichte einer Revolte [The Other Sixty-Eight: Social History of a Revolt]. Munich: C. H. Beck. [2]

von Weizsäcker, R. 1985. Rede zum 8. Mai 1985: Gedenkveranstaltung im Plenarsaal des Deutschen Bundestages zum 40. Jahrestag des Endes des Zweiten Weltkrieges in Europa [Speech on 8 May 1985: Commemoration Ceremony in the Plenary Hall of the German Bundestag on the 40th Anniversary of the End of the Second World War in Europe]. Berlin: Bundespräsidialamt.
Vosoughi, S., D. Roy, and S. Aral. 2018. The Spread of True and False News Online. Science 359:1146–1151. [17]
Wahlin, T. B. R. 2007. To Know or Not to Know: A Review of Behaviour and Suicidal Ideation in Preclinical Huntington’s Disease. Patient Educ. Counsel. 65:279–287. [1]
Waisbord, S. 2018a. The Elective Affinity between Post-Truth Communication and Populist Politics. Commun. Res. Prac. 4:17–34. [7]
———. 2018b. Why Populism Is Troubling for Democratic Communication. Commun. Cult. Crit. 11:21–34. [7]
Waldmann, M. R., and J. H. Dieterich. 2007. Throwing a Bomb on a Person versus Throwing a Person on a Bomb: Intervention Myopia in Moral Intuitions. Psychol. Sci. 18:247–253. [14]
Waldmann, M. R., J. Nagel, and A. Wiegmann. 2012. Moral Judgment. In: The Oxford Handbook of Thinking and Reasoning, ed. K. J. Holyoak and R. G. Morrison, pp. 364–389, Oxford Library of Psychology. New York: Oxford Univ. Press. [14]
Walker, R., and P. Rocha da Silva. 2015. Emerging Trends in Peer Review: A Survey. Front. Neurosci. 9:169. [4]
Walker, R. S., S. Wichmann, T. Mailund, and C. J. Atkisson. 2012. Cultural Phylogenetics of the Tupi Language Family in Lowland South America. PLoS ONE 7:e35025. [5]
Wang, Y., and M. Kosinski. 2018. Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation from Facial Images. J. Pers. Soc. Psychol. 114:246–257. [14]
Wasko, M. M., and S. Faraj. 2000. “It Is What One Does”: Why People Participate and Help Others in Electronic Communities of Practice. J. Strat. Inform. Syst. 9:155–173. [15]
Wason, P. C., and P. N. Johnson-Laird. 1972. Psychology of Reasoning: Structure and Content. Cambridge, MA: Harvard Univ. Press. [5]
Watson, L. 2018. Systematic Epistemic Rights Violations in the Media: A Brexit Case Study. Soc. Epistemol. 32:88–102. [7]
Watts, D. J. 2002. A Simple Model of Global Cascades on Random Networks. PNAS 99:5766–5771. [10]
Weckel, U. 2016. Shamed by Nazi Crimes: The First Step Towards Germans’ Reeducation or a Catalyst for Their Wish to Forget? In: Reverberations of Nazi Violence in Germany and Beyond, ed. S. Bird et al., pp. 33–46. London: Bloomsbury Academic. [2]
Wegwarth, O., and G. Gigerenzer. 2013. Trust-Your-Doctor: A Simple Heuristic in Need of a Proper Social Environment. In: Simple Heuristics in a Social World, ed. R. Hertwig et al. New York: Oxford Univ. Press. [5]
Wehling, P. 2015a. Fighting a Losing Battle? The Right Not to Know and the Dynamics of Biomedical Knowledge Production. In: Routledge International Handbook of Ignorance Studies, ed. M. Gross and L. McGoey, pp. 206–214. Basingstoke: Taylor & Francis. [1]

Wehling, P. 2015b. Vom Nutzen des Nichtwissens, vom Nachteil des Wissens: Zur Einleitung. In: Vom Nutzen des Nichtwissens: Sozial- und kulturwissenschaftliche Perspektiven [On the Value of Non-Knowledge: Perspectives from Social and Cultural Studies], ed. P. Wehling, pp. 9–50, Sozial Theorie. Bielefeld: transcript Verlag. [2]
———. 2019. Die letzte Rettung? Das Recht auf Nichtwissen in Zeiten von Anlageträger-Screening und Genom-Sequenzierung. In: Das sogenannte Recht auf Nichtwissen: Normatives Fundament und anwendungspraktische Geltungskraft, ed. G. Duttge and C. Lenk, pp. 233–251. Paderborn: Mentis Verlag. [14]
Weiler, P. C. 1993. The Case for No-Fault Medical Liability. Maryland Law Rev. 52:908–950. [16]
Weinzierl, M. 2014. The Promise of Positive Optimal Taxation: Normative Diversity and a Role for Equal Sacrifice. J. Public Econ. 118:128–142. [11]
———. 2017. A Welfarist Role for Nonwelfarist Rules: An Example with Envy. Polit. Sci. [11]
Weiss, N. A. 2005. A Course in Probability. Boston: Pearson. [14]
Welch, H. G. 2015. Less Medicine, More Health: 7 Assumptions That Drive Too Much Medical Care. Boston: Beacon Press. [1]
Wells, G. L., M. Small, S. Penrod, et al. 1998. Eyewitness Identification Procedures: Recommendations for Lineups and Photospreads. Law Hum. Behav. 22:603–647. [16]
Welzer, H., S. Moller, and K. Tschuggnall. 2002. “Opa war kein Nazi”: Nationalsozialismus und Holocaust im Familiengedächtnis [“Grandpa Wasn’t a Nazi”: National Socialism and Holocaust in Family Memory]. Die Zeit des Nationalsozialismus: Eine Buchreihe. Frankfurt: Fischer Taschenbuch. [2]
Wheeler, D. A., M. Srinivasan, M. Egholm, et al. 2008. The Complete Genome of an Individual by Massively Parallel DNA Sequencing. Nature 452:872–876. [1]
White, J. 2016. Dismiss, Distort, Distract, and Dismay: Continuity and Change in Russian Disinformation (Policy Brief 13). Polit. Sci. [7]
Williams, M. S., and R. Nagy. 2012. Introduction. In: Transitional Justice: Nomos Li, ed. M. S. Williams et al., pp. 1–30, Nomos Li. New York: NYU Press. [2]
Wilson, T. D. 2002. Strangers to Ourselves: Discovering the Adaptive Unconscious. Cambridge, MA: Harvard Univ. Press. [8]
Wilson, T. D., and D. T. Gilbert. 2003. Affective Forecasting. Adv. Exp. Soc. Psychol. 35:345–411. [5]
———. 2005. Affective Forecasting: Knowing What to Want. Curr. Dir. Psychol. Sci. 14:131–134. [12, 14]
Winter, J. 2010. Thinking About Silence. In: Shadows of War: A Social History of Silence in the Twentieth Century, ed. E. Ben-Ze’ev et al., pp. 3–31. Cambridge: Cambridge Univ. Press. [2]
———. 2016. Shell Shock, Gallipoli and the Generation of Silence. In: Beyond Memory: Silence and the Aesthetics of Remembrance, ed. A. Dessingué and J. M. Winter, pp. 195–208, Routledge Approaches to History. New York: Routledge. [2]
Wistrich, A. J., C. Guthrie, and J. J. Rachlinski. 2005. Can Judges Ignore Inadmissible Information? The Difficulty of Deliberately Disregarding. Univ. PA Law Rev. 153:1251–1345. [16]
Wittgenstein, L. 1953. Philosophical Investigations. Oxford: Basil Blackwell. [5]
Wolf, S. M., G. J. Annas, and S. Elias. 2013. Patient Autonomy and Incidental Findings in Clinical Genomics. Science 340:1049–1050. [12]


Wolf, S. M., F. P. Lawrenz, C. A. Nelson, et al. 2008. Managing Incidental Findings in Human Subjects Research: Analysis and Recommendations. J. L. Med. Ethics 36:219–248. [12]
Wollenberger, V. 1992. Virus der Heuchler: Innenansicht aus Stasi-Akten [Virus of the Hypocrites: An Inside View from Stasi Files]. Berlin: Elefanten Press. [2]
Wood, A. K., and A. M. Ravel. 2018. Fool Me Once: Regulating Fake News and Other Online Advertising. S. California Law Rev. 91:1223–1278. [7]
Wood, M. J., K. M. Douglas, and R. M. Sutton. 2012. Dead and Alive: Beliefs in Contradictory Conspiracy Theories. Soc. Psychol. Pers. Sci. 3:767–773. [7]
Wood, T., and E. Porter. 2018. The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence. Polit. Behav. 41:135–163. [7]
Woodford, M. 2009. Information-Constrained State-Dependent Pricing. J. Monet. Econ. 56:S100–S124. [14]
Woolley, K., and J. L. Risen. 2018. Closing Your Eyes to Follow Your Heart: Avoiding Information to Protect a Strong Intuitive Preference. J. Pers. Soc. Psychol. 114:230–245. [5]
Wren, T. E., ed. 1990. The Moral Domain: Essays in the Ongoing Discussion between Philosophy and the Social Sciences, Studies in Contemporary German Social Thought. Cambridge, MA: MIT Press. [14]
Yaniv, I., D. Benador, and M. Sagi. 2004. On Not Wanting to Know and Not Wanting to Inform Others: Choices Regarding Predictive Genetic Testing. Risk, Decision and Policy 9:317–336. [1, 5]
Yariv, L. 2002. I’ll See It When I Believe It? A Simple Model of Cognitive Consistency. Cowles Fdn. Discuss. Paper 1352. [8]
Zahavi, A. 1975. Mate Selection: A Selection for a Handicap. J. Theoret. Biol. 53:205–214. [10]
Zamir, E. 1990. Market Overt in the Sale of Goods: Israeli Law in a Comparative Perspective. Israel Law Rev. 24:82–127. [16]
———. 2015. Law, Psychology, and Morality: The Role of Loss Aversion. New York: Oxford Univ. Press. [16]
Zamir, E., and D. Teichman. 2018. Behavioral Law and Economics. New York: Oxford Univ. Press. [15, 16]
Zane, D. M., J. R. Irwin, and R. W. Reczek. 2016. Do Less Ethical Consumers Denigrate More Ethical Consumers? The Effect of Willful Ignorance on Judgments of Others. J. Consum. Psychol. 26:337–349. [14]
Zelinsky, N. A. G. 2017. Foreign Cyber Attacks and the American Press: Why the Media Must Stop Reprinting Hacked Material. Yale Law J. Forum 127:286. [7]
Zschirnt, E., and D. Ruedin. 2016. Ethnic Discrimination in Hiring Decisions: A Meta-Analysis of Correspondence Tests 1990–2015. J. Ethn. Migrat. Stud. 42:1115–1134. [4]

Subject Index

accountability 38, 51, 52, 97, 105, 294, 306
accusation 24, 112
actionability 78–80, 212
adverse selection 42, 43
affect regulation. See emotions
affirmative action 57–60, 269, 292, 294
age, role of 76, 79, 85, 86, 115, 219
agnotology 103, 256
altruism 132, 143, 144, 174, 188, 190–192, 197
Alzheimer disease 3, 12, 207, 210
amnesty 26, 28, 30, 236, 245, 246, 307
anonymity 54–61, 281, 299, 310–312
  right to be forgotten 299, 313
anticipatory utility 15, 123–126, 130, 163
antidiscrimination law 219, 294, 297, 331
anxiety 14, 129, 130, 162, 163, 203, 204, 207, 280
artificial intelligence 260, 275, 277, 289
as-if models 15, 122, 159
attention 7, 10, 11, 76, 92, 128, 131, 160–162, 255, 256
attorney–client relationship 10, 309, 313, 314
autonomy 17, 199, 247, 256, 263, 268, 312
  patient 200–208, 213, 214, 280, 314, 315
bargaining 4, 8, 40, 42, 67, 192, 194, 316
  take-it-or-leave-it 44–46
belief-based utility 126, 129, 159, 160, 163, 182, 266, 267
belief consistency 125, 130–137
bias 7, 51–62, 160, 178, 196, 243, 252, 270
  collider 166–172
  confirmation 54, 71, 131, 135, 136, 306, 310, 326
  prestige-based 56, 87
  selection 233, 321
  self-serving 45, 48, 134
  unconscious 219, 233, 305, 310, 311

bidding 169, 170
bioethics 199–201, 207, 212
blinding 51–62, 67, 169–171, 193–197, 199, 310. See also double-blind procedure, veil of ignorance
  in data analysis 52, 54, 55, 60, 62, 199
  machine learning 291–293
  orchestral auditions 10, 57, 81, 103, 219, 243, 270, 322
  peer review 51, 52, 54–56, 60, 82, 169, 246, 276, 322
  tax policy 196, 197
  willful 301–304, 307, 316, 317
bounded optimality 255
bounded rationality 164, 168, 176, 254, 255, 280
burden of proof 215, 284, 309
chronological age 76, 79, 85, 86, 115, 219
civility 113, 224
climate change 75, 79, 103, 113, 133, 237, 308
  denial 87, 194, 197
cognitive dissonance 6, 8, 34, 131, 136, 171, 195
coherence rationality 133, 248, 257
collective 130, 165, 170, 177–180, 257, 258, 276
  decision making 204, 206, 208, 223, 228
  forgetting 21, 236, 307
  group-mind fallacy 246
  ignorance 222–227, 299, 306, 307
  knowledge 218, 225–227, 235, 302
  memory 22, 27, 37, 38, 236, 237, 313
  role of the individual 245–247
  sins of the past 245, 268
collider bias 166–172
commitment 42, 45, 129, 141–146, 172, 231
  modeling 146–152
competition 58, 110, 150, 176, 246, 282
  market 43, 188, 225, 279–283


conrmation bias 54, 71, 131, 135, 136, 306, 310, 326 consequentialism 247, 249, 250, 262, 329 contracts 42, 275, 277–283, 280, 303, 330 breach of 12, 279 cooperation 33, 38, 45, 79, 141–146, 176, 246, 247 coordination 41, 76, 87, 113, 175, 247 culpability 82, 302, 303 culture, role of 66, 74, 82, 86, 87, 243, 244, 254, 306 rituals 23, 24, 87 curiosity 4, 16, 80, 128, 149, 160, 162 data analysis blinding 52, 54, 55, 60, 62, 199 overtting 125, 165–167, 255, 322 deception 39, 40, 56, 97, 101–104, 117 self- 46, 98, 133, 150, 176 decision making 74, 83, 122, 159 collective 204, 206, 208, 223, 228 computerized 250, 269, 282, 290, 293, 311 judicial 13, 219, 226, 257, 276 maximizing 83, 85, 91, 125, 130, 134, 139, 188, 232 quality of 124, 125, 134–136 strategic 41, 247, 256 decision theory 229, 254 deliberate, dened 140, 238, 263, 299 deliberate ignorance, dened 5, 65–72, 121, 122, 134, 140, 156, 217, 227, 262, 318–323 deliberate opacity 277, 287–289, 294 deontology 247–249, 260, 262, 329 dictator game 8, 47, 157, 246 digital technology 80, 115, 181, 282, 313, 326, 331 discrimination 203, 204, 211, 235, 269, 291, 294, 295, 306 affirmative action 57–60, 219, 269, 292, 294, 297, 331 gender 56, 58–61, 219, 270, 291, 306, 311, 322, 331 racial 62, 236, 306 sexual orientation 56, 59, 269, 311 statistical 294 workplace 6, 56–61, 270, 305

disinformation 103, 104, 108, 109, 112–115, 282, 326
domain specificity 66, 72–75, 137
double-blind procedure 51, 219, 224, 310, 322
  peer review 55, 56
  randomized trials 10, 62, 224, 233
duration-dependent content-based models 128–130
duration-independent content-based models 126–128
East German Revolution 28–31
ecological rationality 14, 232, 239, 255, 319
education 26, 27, 57, 167, 196, 237–239, 280, 297, 331
emotions 15, 45–47, 124, 182
  anxiety 14, 129, 130, 162, 163, 280
  hope 6, 14, 124, 129, 242
  regulation of 6, 16, 67, 84–86, 94–98, 125, 164, 210, 211, 244, 248, 253, 254, 268
envelope game 141–152, 172
envy 7, 94, 188–190, 197
ethics 10, 14, 82, 199–201, 207, 212, 248, 264, 266–271
  moral wiggle room 132, 140, 148, 174, 251, 268
evidence law 299, 300, 308–310
expungement of criminal records 312, 320
fairness 10, 16, 45, 47, 51, 54, 60, 67, 136, 227, 247, 269, 270, 297, 322
  judicial 309
forgetting 16, 20–23, 25, 37, 68, 89–99, 136, 145, 165, 166, 224, 236, 312, 313, 320
  collective 21, 236, 307
  historical amnesia 20, 22
French Revolution 21, 24, 265
future research 16, 66, 121, 137, 139, 182, 197, 241, 250, 262, 264, 265, 277, 295, 296

Subject Index genetic testing 3, 10–13, 43, 72, 75, 83, 197, 199–216, 245, 248, 251, 254, 266, 267, 321 genomic sequencing 199–201, 212, 215, 327 heuristics 15, 135, 255, 319 fast-and-frugal 233 recognition 89, 90, 93, 94, 164–167, 232 simple 140, 144, 166 take-the-best 219, 230, 232 historical amnesia 20, 22 HIV 5, 12, 13, 66, 71, 161–163, 245 Holocaust 21, 27 hope 14, 15, 124, 129, 242 Huntington disease 5, 72, 75, 83, 159–162, 203, 207, 211, 218, 220, 248, 254, 266 identity 22, 28, 34, 123, 137, 250, 257, 304 blinding 54, 55, 57, 170, 270, 276, 310, 322 protection 131–133, 136, 313 ignorance collective 222–227, 299, 306, 307 dened 140, 218, 238 equilibrium 132, 143, 145–148, 150, 151 evolution of 86, 144, 145, 147 intentional 66, 68, 71, 223, 227 Nichtwissen 21, 22 nonstrategic 140, 149, 150 ostrich effect 4, 9, 149 regulating 279–282, 287, 295, 296, 312, 313, 330 strategic 8, 13, 39–49, 67, 139–152, 268 willful 9, 10, 20, 101–117 impartiality 10, 13, 16, 67, 219, 269, 270, 310, 322. See also veil of ignorance individual choice 106, 121–138, 157, 175, 181, 188, 205 impact on the collective 245–247


information
  acquisition 124, 127, 128, 137, 160, 217, 242, 258, 301, 304, 305
  avoidance 40, 66, 73, 74, 130, 158, 160, 169, 171, 174, 181, 193, 194, 321
  collective 223, 224
  costs 91, 140, 158, 169, 181, 256, 261
  cultural transmission 254
  limiting 187, 278, 279, 281
  overload 11, 17, 89, 90
  preferring not to know 158–165
  quarantining 286
  utility of 14, 65, 77–80, 83, 121, 127
  welfare implications 188–198
information gap model 161, 182
informed consent 200, 212, 214, 280, 314, 315
instrumental rationality 228, 231, 232, 233
intentionality 69, 223
intertemporal choice 258, 259, 262
intuition 41, 106, 250, 251, 264
job market 52, 54, 60, 61, 66, 82, 311
  blinding 56, 58, 59
judgment 4, 77, 134, 189, 230, 235, 238
  fairness 60, 297
  lens-model approach 52–54
  moral 79, 247, 251
  validity of 51–55, 60
jury decision making 13, 219, 226, 257, 276
justice 10, 25, 31, 38, 48, 236, 264
  restorative 21, 37, 236, 300, 306, 307
knowledge 4, 14, 19, 39, 41, 77, 82, 105, 106, 220, 227, 260
  collective 218, 225–227, 302
  distribution of 217, 218, 223, 238, 239
  institutional 283, 284
  leaking 40, 45–47
  link to power 22, 36, 241
  in political transformation 23–31
  practical 221–223
  propositional 221–223, 225


liability 12, 13, 79, 283–287, 300, 302, 305, 308, 324
  avoidance 9, 10, 13, 97
  fault-based 301, 315
  strict 301, 303, 304
liberal values 101, 107, 187, 189, 192, 193, 197
machine learning 260, 275, 277, 289–295, 297
Manhattan Project 261
market competition 43, 188, 225, 279–283
medical testing 5, 6, 82, 122, 124, 195, 199–216, 254, 321. See also genetic testing, genomic sequencing, right not to know (RNTK)
  HIV 5, 12, 13, 66, 71, 161–163, 245
  Huntington disease 72, 75, 83, 159–162, 203, 207, 211, 220, 248, 254, 266
  incidental findings 200–202, 209, 212, 215, 327
  PSA screening 170, 171, 259
memory 16, 23, 30, 76, 89–98, 223, 320
  collective 22, 27, 37, 38, 236, 237, 313
  false 26, 164
  in posttraumatic stress disorder 97, 164, 182
memory politics 19, 21, 22, 25, 26, 36–38, 236, 307
models 121–138, 146–148, 155–184, 252–258, 324–325
  as-if 15, 122, 159
  classification 123–126
  coherence 133
  commitment 146–152
  consistency of beliefs 125, 130–137
  content of beliefs 126–130
  duration-dependent content-based 128–130
  duration-independent content-based 126–128
  information gap 161, 182
  parallel constraint satisfaction 132, 133
  population-level 177–179
  signaling 149–152, 174
morality 8, 31, 175, 233, 243, 247–259

moral judgment 79, 247, 250, 251
moral wiggle room 132, 140, 148, 174, 251, 268
motivated beliefs 195, 196
motivated reasoning 253, 305, 306, 310
naked bootleg play 39, 40, 45, 46
Nazi Germany 25–27, 31, 33, 106, 306
negotiations 8, 39–50, 67, 140, 308
  defined 40
  holdup problem 42, 44
Nichtwissen 21, 22
nonstrategic ignorance 140, 149, 150
  ostrich effect 4, 9, 149
normative analysis 77–88, 241–271
opportunism equilibrium 142, 146, 150
optimism 127–130, 136, 162, 182
organizations, role of 170, 171, 225, 226, 257, 277, 283–287
ostrich effect 4, 9, 149
overfitting data 125, 165–167, 255, 322
parallel constraint satisfaction models 132, 133
Pareto optimal outcome 190, 193
partial ignorance equilibrium 146–148, 151
patient autonomy 200–208, 213, 214, 280, 314, 315
patient preferences 13, 68, 126, 199, 202, 203, 206, 208, 211–215, 278, 315. See also right not to know
  informed consent 200, 212, 214, 280, 314, 315
peer review 51, 52, 54–56, 60, 82, 169, 246, 276, 322
performance enhancement 7, 67, 89, 93, 98, 125, 219, 224, 227, 261
performance evaluations 82, 194, 195, 278, 322
  of teachers 245, 270, 271
planning 76, 157
plausible deniability 47, 276
political transformation 21–24, 28, 264, 265

Subject Index posttraumatic stress disorder 97, 164, 182 power 23–25, 28, 31, 35, 37, 236, 327, 328 bargaining 42, 44, 47 link to knowledge 22, 36, 241 prestige-based bias 56, 87 prisoner’s dilemma 176, 247, 261 privacy 58, 70, 73, 205, 206, 214, 267, 269, 311, 313, 325 data 286, 289, 292 sperm donor 280, 281 private choice 189–193, 195, 197, 204 procedural rationality 228–231 propositional knowledge 221–223, 225 prosociality 174, 175 PSA screening 170, 171, 259 psychological mechanisms 75–77, 80, 122, 149, 169, 181, 306 public policy 192, 194, 205, 235, 238, 249, 277, 279, 287–289 rationality bounded 164, 168, 176, 254, 255, 280 coherence 133, 248, 257 ecological 14, 232, 239, 255, 319 functional 248 instrumental 228, 230, 232, 233 norms 243, 247, 262 principles 252–257, 266–271 procedural 228–231 substantive 228 recognition heuristic 89, 90, 93, 94, 164–167, 232 reconciliation 19, 20, 25, 37, 38, 236, 307 regret avoidance 6–8, 34, 67, 84, 125, 130, 140, 164, 248, 315, 316 regulating ignorance 279–282, 287, 295, 296, 312, 313, 330 remembrance 19–22, 25, 31, 36, 224, 236 research 244, 260, 261 future challenges 16, 66, 121, 137, 139, 182, 197, 241, 250, 262, 264, 265, 277, 295, 296 peer review 51, 52, 54–56, 60, 169, 246, 322 restorative justice 21, 37, 236, 300, 306, 307 revealed preferences 189, 192


right not to know (RNTK) 12, 13, 199–214, 248, 250, 265, 280, 314, 327, 328
  arguments against 205–216
  arguments for 204
right to be forgotten 299, 313
rituals 23, 24, 87
satisficing 83, 85
second best, theory of 187, 196
selection bias 233, 321
self-control 132, 155, 158, 159, 192, 193, 258
self-deception 46, 98, 133, 150, 176
self-image 20, 34, 37, 47, 132, 136, 159, 174, 175, 194, 195, 247. See also identity
self-serving bias 45, 48, 134
shaming 24, 26, 28, 31, 34, 35
signaling model 149–152, 174
simple heuristics 140, 144, 166
social cohesion 19, 28, 31, 33, 37, 38
social control 242
social media 10, 70, 110, 113, 226, 245, 313, 326
  algorithms 250, 282
social norms 70, 131, 133, 174, 182, 243, 264, 265, 296, 330
  politeness 70, 71
  respect for privacy 70, 73, 205, 267, 269, 292, 311, 313, 325
societal dynamics 177–181
societal transformations 19–38, 70, 254, 264, 265, 268
  amnesty 26, 28, 30, 236, 245, 307
  reconciliation 19, 20, 25, 37, 38, 236, 307
  remembrance 19–22, 25, 31, 36, 224, 237
  restorative justice 21, 37, 237, 300, 306, 307
sperm donation 280, 281
spillover effect 177, 179, 180, 296
standard economic model 156, 164
standard model of rational choice 168, 252–256, 258, 262
Stasi files 5, 22, 28–36, 81, 179, 245, 253, 254, 265, 276


static equilibrium analysis 150, 151
stigma 6, 24, 79, 203, 204, 265
strategic decision making 41, 247, 256
strategic ignorance 8, 13, 39–49, 67, 139–152, 268
subjective utility 159, 163, 232
substantive rationality 228
surprise maximization 7, 67, 125, 128, 129
  naked bootleg play 39, 40, 45, 46
suspense maximization 7, 67, 125, 129, 158, 242
taboo trade-offs 242, 288
take-it-or-leave-it bargaining 44–46
take-the-best heuristic 219, 230, 232
taxation 190–194, 197, 296
  optimal 187, 190–192
  sin tax 192, 193
technology, role of 10, 90, 110, 222, 250, 275, 277, 289–295. See also social media
  digital 80, 115, 181, 282, 313, 326, 331
theory of the second best 187, 196
time, role of 78, 79, 84–87, 126, 127, 137, 244, 258, 259, 264
transitional societies 21–38, 81, 264, 265, 268, 321
  amnesty 26, 28, 30, 236, 245, 246, 307
  reconciliation 19, 20, 25, 37, 38, 236, 307
  remembrance 19–22, 25, 31, 36, 224, 237
  restorative justice 21, 37, 236, 300, 306, 307

transparency 51, 52, 148, 264, 276, 287–289, 294, 296
trust 19, 29, 36, 42, 70–74, 82, 103, 173, 182, 267, 293, 296, 297
  games 145, 247
  in media 111, 115
truth-finding 22, 32, 34, 35, 38, 62, 101–108, 228, 233, 236, 307, 326
uncertainty 78, 79, 127, 147, 221, 237, 261, 308
  choice/preference for 3, 137, 246
  institutional frameworks 287, 308
  resolving 124, 125, 128, 129, 148
unconscious bias 219, 233, 305, 310, 311
utilitarianism 14, 188, 190, 233, 249, 325, 328, 329
utility of information 14, 65, 77–80, 83, 121, 127
validity of judgments 51–55, 60
veil of ignorance 10, 16, 25, 27, 29, 48, 52, 224, 266, 307, 308
  institutional 299, 316
Vergangenheitsbewältigung 19, 25, 30, 31, 36
Weimar Republic 25, 26
welfare implications 12–14, 45, 82, 88, 149, 174, 187–198, 249, 250, 328, 329
willful blindness 301–304, 307, 316, 317
willful ignorance 9, 10, 20, 101–117

Strüngmann Forum Report Series*

Youth Mental Health: A Paradigm for Prevention and Early Intervention
Edited by Peter J. Uhlhaas and Stephen J. Wood
ISBN: 9780262043977

The Neocortex
Edited by Wolf Singer, Terrence J. Sejnowski and Pasko Rakic
ISBN: 9780262043243

Interactive Task Learning: Humans, Robots, and Agents Acquiring New Tasks through Natural Interactions
Edited by Kevin A. Gluck and John E. Laird, ISBN: 9780262038829

Agrobiodiversity: Integrating Knowledge for a Sustainable Future
Edited by Karl S. Zimmerer and Stef de Haan
ISBN: 9780262038683

Rethinking Environmentalism: Linking Justice, Sustainability, and Diversity
Edited by Sharachchandra Lele, Eduardo S. Brondizio, John Byrne, Georgina M. Mace and Joan Martinez-Alier
ISBN: 9780262038966

Emergent Brain Dynamics: Prebirth to Adolescence
Edited by April A. Benasich and Urs Ribary
ISBN: 9780262038638

The Cultural Nature of Attachment: Contextualizing Relationships and Development
Edited by Heidi Keller and Kim A. Bard
Hardcover: ISBN: 9780262036900, ebook: ISBN: 9780262342865
Winner of the Ursula Gielen Global Psychology Book Award

Investors and Exploiters in Ecology and Economics: Principles and Applications
Edited by Luc-Alain Giraldeau, Philipp Heeb and Michael Kosfeld
Hardcover: ISBN: 9780262036122, eBook: ISBN: 9780262339797

Computational Psychiatry: New Perspectives on Mental Illness
Edited by A. David Redish and Joshua A. Gordon, ISBN: 9780262035422

Complexity and Evolution: Toward a New Synthesis for Economics
Edited by David S. Wilson and Alan Kirman, ISBN: 9780262035385

The Pragmatic Turn: Toward Action-Oriented Views in Cognitive Science
Edited by Andreas K. Engel, Karl J. Friston and Danica Kragic
ISBN: 978-0-262-03432-6

Translational Neuroscience: Toward New Therapies
Edited by Karoly Nikolich and Steven E. Hyman, ISBN: 9780262029865


Trace Metals and Infectious Diseases
Edited by Jerome O. Nriagu and Eric P. Skaar, ISBN 978-0-262-02919-3

Rethinking Global Land Use in an Urban Era
Edited by Karen C. Seto and Anette Reenberg, ISBN 978-0-262-02690-1

Schizophrenia: Evolution and Synthesis
Edited by Steven M. Silverstein, Bita Moghaddam and Til Wykes, ISBN 978-0-262-01962-0

Cultural Evolution: Society, Technology, Language, and Religion
Edited by Peter J. Richerson and Morten H. Christiansen, ISBN 978-0-262-01975-0

Language, Music, and the Brain: A Mysterious Relationship
Edited by Michael A. Arbib, ISBN 978-0-262-01962-0

Evolution and the Mechanisms of Decision Making
Edited by Peter Hammerstein and Jeffrey R. Stevens, ISBN 978-0-262-01808-1

Cognitive Search: Evolution, Algorithms, and the Brain
Edited by Peter M. Todd, Thomas T. Hills and Trevor W. Robbins, ISBN 978-0-262-01809-8

Animal Thinking: Contemporary Issues in Comparative Cognition
Edited by Randolf Menzel and Julia Fischer, ISBN 978-0-262-01663-6

Disease Eradication in the 21st Century: Implications for Global Health
Edited by Stephen L. Cochi and Walter R. Dowdle, ISBN 978-0-262-01673-5

Dynamic Coordination in the Brain: From Neurons to Mind
Edited by Christoph von der Malsburg, William A. Phillips and Wolf Singer, ISBN 978-0-262-01471-7

Linkages of Sustainability
Edited by Thomas E. Graedel and Ester van der Voet, ISBN 978-0-262-01358-1

Biological Foundations and Origin of Syntax
Edited by Derek Bickerton and Eörs Szathmáry, ISBN 978-0-262-01356-7

Clouds in the Perturbed Climate System: Their Relationship to Energy Balance, Atmospheric Dynamics, and Precipitation
Edited by Jost Heintzenberg and Robert J. Charlson, ISBN 978-0-262-01287-4
Winner of the Atmospheric Science Librarians International Choice Award

Better Than Conscious? Decision Making, the Human Mind, and Implications for Institutions
Edited by Christoph Engel and Wolf Singer, ISBN 978-0-262-19580-5

* Available at https://mitpress.mit.edu/books/series/strungmann-forum-reports