Bias
Bias
A Philosophical Study
THOMAS KELLY
Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries

© Thomas Kelly 2022

The moral rights of the author have been asserted

First Edition published in 2022
Impression: 1

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above

You must not circulate this work in any other form and you must impose this same condition on any acquirer

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America

British Library Cataloguing in Publication Data
Data available

Library of Congress Control Number: 2022940834

ISBN 978–0–19–284295–4
ebook ISBN 978–0–19–265461–8

DOI: 10.1093/oso/9780192842954.001.0001

Printed and bound in the UK by Clays Ltd, Elcograf S.p.A.

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.
To the people who taught me philosophy at Harvard University, and at the University of Notre Dame
Contents

Acknowledgments

Introduction
  1. A Familiar Phenomenon
  2. The Philosophy of Bias: Expanding the Playing Field
  3. Apology

I. CONCEPTUAL FUNDAMENTALS

1. Diversity, Relativity, Etc.
  1. Diversity
  2. Relativity
  3. Directionality
  4. Bias about Bias
  5. Biased Representation
  6. Parts and Wholes

2. Pluralism and Priority
  1. Explanatory Priority
  2. Are People (Ever) the Fundamental Carriers of Bias?
  3. Processes and Outcomes
  4. Unbiased Outcomes from Biased Processes?
  5. Biased Outcomes from Unbiased Processes?
  6. Pluralism

II. BIAS AND NORMS

3. The Norm-Theoretic Account of Bias
  1. The Diversity of Norms
  2. Disagreement
  3. The Perspectival Character of Bias Attributions
  4. When Norms Conflict

4. The Bias Blind Spot and the Biases of Introspection
  1. The Introspection Illusion as a Source of the Bias Blind Spot
  2. Why We’re More Likely to See People as Biased When They Disagree with Us
  3. Is It a Contingent Fact That Introspection is an Unreliable Way of Telling Whether You’re Biased?
  4. How the Perspectival Account Explains the Bias Blind Spot, as Well as the Biases of Introspection
  5. Against “Naïve Realism”, For Inevitability

5. Biased People
  1. Biases as Dispositions
  2. Bias as a Thick Evaluative Concept
  3. Biased Believers, Biased Agents
  4. Biased Agents, Unreliable Agents
  5. Overcompensation

6. Norms of Objectivity
  1. Some Varieties
  2. Constitutive Norms of Objectivity
  3. Following the Argument Wherever It Leads

7. Symmetry and Bias Attributions
  1. Two Challenges
  2. Norms without Bias?
  3. Symmetry
  4. Bias without Norms?
  5. Pejorative vs. Non-Pejorative Attributions of Bias

III. BIAS AND KNOWLEDGE

8. Bias and Knowledge
  1. Biased Knowing
  2. Can Biased Beliefs Be Knowledge?
  3. Are Biases Essential to Knowing?
  4. Knowledge and Symmetry
  5. How and When Bias Excludes Knowledge: A Proposal

9. Knowledge, Skepticism, and Reliability
  1. Biased Knowing and Philosophical Methodology
  2. Are We Biased Against Skepticism?
  3. Reliability and Contingency
  4. A Tale of Three Thinkers

10. Bias Attributions and the Epistemology of Disagreement
  1. On Attributing Bias to Those Who Disagree with Us
  2. The Case for Skepticism
  3. Against Skepticism

11. Main Themes and Conclusions
  1. Five Themes
  2. Conclusions
    Part I: Conceptual Fundamentals
    Part II: Bias and Norms
    Part III: Bias and Knowledge

Bibliography
Index
Acknowledgments

Almost all of this book consists of previously unpublished material. The main exception to this is Chapter 6, §3, which updates material drawn from my paper “Following the Argument Where It Leads,” Philosophical Studies (2011). I utilize that material with the permission of Springer Publishers.

In addition to those thanked in that paper, I would also like to acknowledge a number of other individuals and institutions for their help along the way. Earlier versions of some of these ideas were presented in talks that I gave at Rutgers, Princeton, Notre Dame, Fordham, Union College, the Orange Beach Epistemology Workshop sponsored by the University of Alabama, and at Pacific and Eastern division meetings of the American Philosophical Association. I am grateful to the audiences present on those occasions for their feedback. Material for the book was presented in two graduate seminars, one at Princeton, the other at a joint Rutgers-Princeton seminar co-taught with Ernest Sosa. I am grateful to the participants in those seminars, and especially to Ernie for proposing our joint seminar and suggesting that I present some of my work on bias as my contribution to it.

Two readers for Oxford University Press, one of whom was Endre Begby, the other of whom remains anonymous, provided timely, very helpful, and much appreciated sets of comments on an earlier version of the manuscript. In addition, I would like to acknowledge the following individuals for their help: Robert Audi, Alisabeth Ayars, Nathan Ballantyne, Lara Buchak, Pietro Cibinel, Brett Copenger, Silvia De Toffoli, Sinan Dogramaci, Felipe Doria, Andy Egan, Adam Elga, Jan Engelmann, Johann Frick, Samuel Fullhart, Fiona Furnari, Dan Garber, Jorge Garcia, Jeremy Goodman, Peter Graham, Alex Guerrero, Elizabeth Harman, Brian Hedden, Grace Helton, Thomas Hurka, Mark Johnston, Brett Karlan, Hilary Kornblith, Barry Lam, Tania Lombrozo, Harvey Lederman, Sebastian Liu, Errol Lord, Jon Matheson, Aidan McGlynn, Sarah McGrath, Tori McGeer, Philip Pettit, Ted Poston, Emily Pronin, Katie Carpenter Rech, Gideon Rosen, Peter Singer, Michael Smith, Joshua Smith, Roy Sorensen, Meghan Sullivan, Una Stojnić, Katia Vavova, Mike Veber, and Snow Zhang. For some of the information about John Cook Wilson’s life used in the Introduction, I relied on Mathieu Marion’s entry on Wilson in The Stanford Encyclopedia of Philosophy.

Peter Momtchiloff was an ideal editor, full of judicious advice and preternatural patience for my many missed deadlines, only some of which could plausibly be blamed on a global pandemic. I thank him for his encouragement and support of the project.

My greatest intellectual debts are to the people who taught me philosophy, to whom this book is dedicated. My love for philosophy was first ignited as an undergraduate at the University of Notre Dame in the 1990s when I encountered an unusually inspiring and encouraging group of teachers, a group that included Neil Delaney Sr., Alasdair MacIntyre,
Marian David, Vaughn McKim, David Solomon, Mike Loux, David O’Connor, Leopold Stubenberg, Karl Ameriks, Fred Freddoso, and Philip Quinn. Decades later, my memories of those early interactions continue to provide me with models for what a good teacher of philosophy should be like. At Harvard University, I had the further exceptionally good fortune to have as advisers Robert Nozick, Derek Parfit, and Jim Pryor, each of whom was almost unbelievably helpful and supportive of me in overlapping and complementary ways. When I finished my dissertation, each of the three advised me to publish it as a book, but I demurred, on the grounds that no one would want to read an entire book of philosophy by a previously unpublished graduate student. Although this book is not the one that they encouraged me to write, one of the standards that I kept in mind while writing it was to produce a book that they would have found worthwhile, and something that I would have been proud to present to them. I’m sorry that I’ll never have the chance to present a copy to Bob or Derek, or to discuss its ideas with them.

My greatest debts of all are to my family. Thanks to my wife Sarah McGrath for all that you did for me and for our children during the years in which this book was written. Thanks to Owen, Orla, and Hugh for being the source of most of my smiles, and for making things seem worthwhile. Finally, thank you to my parents, Tom and Toddy Kelly, for your unwavering and unstinting love and support in all things, for as long as I can remember.
Introduction
1. A Familiar Phenomenon

Which types of human beings best exemplify true courage? When asked this question, some people might immediately think of soldiers on a battlefield or of firefighters rushing into a burning building. Others might think first of political dissidents or civil rights activists, individuals who knowingly risk grave harm in order to speak Truth to Power. When the philosopher Plato posed this question over two thousand years ago, he offered us a surprising answer. True courage, Plato suggested, is most likely to be found among the philosophers.

Even allowing for the significant differences between the philosophers of Plato’s time and the professors of philosophy of the 21st century, Plato’s answer to his own question does not exactly leap off the page at the reader as the most plausible thing he might have come up with—to put it mildly. Moreover, coming as it does from Plato, the answer strikes us not only as implausible but also as suspicious. It’s very much the type of answer that we might expect a philosopher—but no one else!—to give. Although he offers us a characteristically complicated argument for his view, we naturally suspect Plato of bias when he tells us that it’s really the philosophers who are the most courageous of all.

Of course, if in fact Plato was biased about certain questions, he’s in good company. Presumably, people have exhibited biases—in their behavior, as well as in their beliefs about what’s true—for as long as there have been people. Concerns about bias—usually, concerns about the biases of other people, and more occasionally, concerns about one’s own—also have a long history. In philosophy, an interest in the topic of bias predates any of the figures or texts that are treated as canonical by the Western philosophical tradition: in the East, Confucius took the absence of bias to be one of the characteristic features that distinguishes the virtuous person from others.1

The word “bias” entered the English language in the 1500s. Its origins are instructive. One of its earliest uses—and the one from which all current uses of the word that are relevant to this book derive—was as a technical term in competitive sports. In the English sport of lawn bowling, or bowls, the object of the game is to roll your ball (or “bowl”) as closely as possible to another, smaller ball that has already been rolled somewhere on the field of play. (As that description suggests, the sport is a close cousin to the now somewhat better-known sport of bocce ball. The English rulers of the day generally disapproved of lawn bowling, on the grounds that its surging popularity might divert interest from archery, which they viewed
as better preparation for warfare.) Although the basic object of the sport is simple, much of its interest and challenge comes from an important twist. Each ball was constructed so as to have a certain “bias,” which distorted the direction in which it would otherwise travel when rolled. Originally, the bias was produced by making sure that one side of each ball was weighted more heavily than the other side. (Usually, lead was used to achieve this purpose.) This lack of balance ensured that the ball would travel, not in a straight line, but in a curved path in the direction of its bias. The general idea of a bias as that which distorts something in a certain direction, away from the natural path or direction that it would otherwise take, perhaps through a mechanism involving unbalanced weighting, survives to this day in many of the most prominent contemporary uses of the term “bias.”2

As the example of Plato suggests, intelligence is no guarantee against bias. The ease with which even the most impressive among us succumb to bias, and the perceived importance of avoiding it, sometimes makes its apparent absence especially celebrated. When, fifteen years after George Washington’s death, Thomas Jefferson sat down to explain what in his eyes made Washington a great leader, he gave pride of place not to Washington’s genius for military tactics, nor to his reputed personal honesty, but rather to his capacity for unbiased decision-making:

His integrity was most pure, his justice the most inflexible I have ever known, no motives of interest or consanguinity, of friendship or hatred, being able to bias his decision.
Of course, Washington was not without his biases. Some of these were invisible to Jefferson, since he shared them, and they were common among those whom he regarded as his peers. Like Jefferson and Washington, we too no doubt have biases that are largely invisible to us, perhaps because they are shared by those around us whom we respect and admire, including some biases that will seem obvious to later generations.

Like Jefferson, we too are much concerned with bias and its absence, both among our contemporaries and among those now dead. Indeed, although bias has undoubtedly always been with us, our explicit concern with bias, and our tendency to see or conceptualize our most pressing social problems in terms of it, has perhaps never been greater than it is now. Notably, this tendency is shared by people who might otherwise seem to have little in common, such as those who occupy opposite ends of the political spectrum. For example, on the one hand, a prominent theme among social critics on the left is that radical changes to existing institutions are called for in response to pernicious and systemic racial biases, as manifested in the criminal justice system, or in the glaring disparities in wealth among different racial groups. On the other hand, conservative critics who deny that such changes are in order will frequently make much of what they regard as entrenched political biases in the media and in higher education. Moreover, on both sides, there is an emphasis on the various forms that biases can take. Even if bias was once thought of primarily as a characteristic of individual people, when the would-be reformer condemns the criminal justice system as racially biased, or the conservative claims that the mainstream media has a left-wing political bias, each is concerned with bias primarily as a characteristic not of individuals but rather of institutions, at least in the first instance. That
bias is the problem—or at least, a big part of our problems—is a common theme, even among people who agree about little else.
2. The Philosophy of Bias: Expanding the Playing Field

This book is a philosophical exploration of bias, in the sense of “bias” that we naturally suspect Plato of when he tells us that, contrary to what we might have thought, it’s actually the philosophers who are the most courageous of all. (Although as will become clear, the book is not exclusively concerned with biases exhibited by people.)

I originally conceived of the project as a relatively wide-ranging exploration of a number of theoretically interesting issues about bias, issues that were largely independent of one another, as opposed to the development and defense of some more general, overarching theory of bias. For the most part, the book reflects that original vision. In particular, much of it consists of a series of independent proposals about different questions, and the individual chapters are largely self-contained and can be read more or less independently of one another, in accordance with the reader’s interests.3

Nevertheless, despite my original intentions, and despite my awareness that the historical track record of philosophers who offer very abstract and general theories of this or that phenomenon is dismal to the point of embarrassment, as the project progressed I found myself increasingly attracted to a general theoretical framework for thinking about bias, one that seemed fruitful and that illuminated many of the questions that had originally drawn me to the topic. I call this theoretical framework the norm-theoretic account of bias, and I put forward the basic ideas in Chapter 3. (Readers who wish to begin by considering that account might turn directly to Chapter 3. In particular, what I say there does not presuppose acceptance of or even acquaintance with the conclusions and arguments of the first two chapters, which are devoted to a number of fundamental conceptual questions about bias that I believe can be pursued independently of a commitment to any more specific theoretical framework.) Various later sections of the book are devoted to elaborating, refining, and defending that account, as well as to exploring some of its limitations. To the extent that I have a general “theory of bias” to offer, the norm-theoretic account is it.

In order to introduce some of the basic ideas of the account, an analogy is useful. Imagine three archers who take turns shooting at a target. One of the three archers is highly skilled and regularly hits the target; the other two are unskilled and typically miss it. However, the two unskilled archers differ from one another in a significant way. Although the first unskilled archer typically misses, her misses have no pattern: she is as likely to miss to the right as to the left, as likely to miss high as low, and so on. In contrast, the other unskilled archer has a hitch in his shooting form, which causes him to regularly miss low and to the left. Given the hitch in his shooting form, he is disposed to miss low and to the left, and this is something that is true of him even when he is not actually shooting, even if the main evidence that observers have that he is disposed in this way depends on his actually having missed low and to the left in the past. On the norm-theoretic account of bias, a biased person
is much like the archer who is disposed to miss in a particular direction, as opposed to an archer who consistently succeeds in hitting the target or an archer whose misses are random.

In slogan form and to a first approximation, the core thought behind the norm-theoretic account is this: a bias involves a systematic departure from a genuine norm or standard of correctness. When people (or groups of people, or institutions) count as biased, this is because they systematically depart from certain norms, or because they are disposed to do so. I certainly claim no great novelty for this way of thinking about bias.4 Indeed, especially given how much attention has been devoted to the topic of bias in various contexts, I would regard with suspicion any account of it that did not strike us as relatively familiar, at least in broad outline. This core thought can usefully be regarded as a kind of generalization of or abstraction from the more specific notions of bias that are in play in various disciplines, as well as one that’s faithful to much that we say and think about bias in everyday life.

To take one important example, consider the notion of bias as it is understood in statistics. In statistics, bias is the tendency of a statistical technique to either overestimate or underestimate the true value of some parameter. In this context, the relevant norm or standard is truth or accuracy, and something counts as biased in virtue of departing from the truth or the actual value in a patterned or predictable way—for example, by consistently either underestimating or overestimating it—as opposed to a way that’s unpatterned, unpredictable, or random. (As in a case in which someone repeatedly guesses wildly, and thus ends up missing the truth in a way that’s “all over the map.”) According to the norm-theoretic account of bias, the notion of bias that’s of interest to statisticians is actually a species or special case of a much more general phenomenon: the special case in which the relevant norm is truth, or accurate estimation.5

In the archery example, success consists in hitting the target. What is analogous to hitting the target when it comes to bias? As just noted, in many cases the relevant norm or standard of correctness is truth or accurate estimation. However, although truth is often an important norm, in many contexts the relevant norm that determines whether something counts as biased might have nothing to do with truth or accuracy. For example, many social scientists and philosophers think that an action is rational if and only if it maximizes expected value from the agent’s point of view. If in fact the norm governing rational action is that of maximizing expected value, then according to the norm-theoretic account of bias, a person might count as biased in virtue of their tendency to systematically depart from this norm. Consider, for example, status quo bias, the tendency to favor actions that preserve the status quo over alternative actions that are equally good or better options. According to the norm-theoretic account of bias, a person who exhibits status quo bias counts as biased in virtue of being disposed to depart from the norm of maximizing expected value in a way that’s patterned, predictable, and systematic, as opposed to in a way that’s random or unsystematic.
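For readers who want these two reference points in symbols, here is a minimal sketch in standard statistics and decision-theory notation (the notation is the textbook one, not anything introduced in this book). Write $\theta$ for the true value of a parameter and $\hat{\theta}$ for an estimator of it:

\[
\mathrm{Bias}(\hat{\theta}) \;=\; \mathbb{E}[\hat{\theta}] - \theta,
\qquad
\mathbb{E}\big[(\hat{\theta} - \theta)^2\big] \;=\; \mathrm{Bias}(\hat{\theta})^2 + \mathrm{Var}(\hat{\theta}).
\]

An estimator is unbiased when the first quantity is zero, and the second identity (the decomposition of mean squared error) separates patterned error, the bias term, from random scatter, the variance term: the skilled archer keeps both terms small, the archer whose misses are “all over the map” has little bias but large variance, and the archer with the hitch in his form has a large bias term. Likewise, the expected-value norm for action can be written as the requirement to choose $a^{*} = \arg\max_{a} \sum_{o} P(o \mid a)\, v(o)$; a behavioral bias such as status quo bias is then a patterned, rather than random, departure from this maximization.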
More generally, many behavioral biases can be understood as tendencies to systematically depart from the norm of maximizing expected value, just as many cognitive biases can be understood as tendencies to systematically depart from the norm of truth or accuracy.

One more example, for now. Consider the moral norm according to which we should treat other people with the respect to which they are entitled. A person who regularly fails to treat
others with the respect to which they are entitled violates an important moral norm and is properly subject to criticism for it. It doesn’t follow, however, that he’s biased. (Perhaps he’s simply an unbiased jerk, whose failures to treat others with respect don’t depend on their being a certain race, or sex, or age. He’s an “equal opportunity offender,” as it were.) Suppose, however, that he’s more likely to violate the norm with respect to women as opposed to men, or blacks as opposed to whites, or the old as opposed to the young. In that case, his departures from the relevant norm exhibit a systematic pattern, and it’s appropriate to attribute to him a racist, sexist, or ageist bias, as the case may be.

According to the norm-theoretic account of bias, biases typically involve systematic departures from norms. As is often noted, the term “norm” is ambiguous, in multiple ways.6 What sense of the term is relevant here? First, the relevant sense is not that of a statistical norm. A contemporary American family that has eight children is outside the norm in that respect, but that doesn’t mean that the family or any of its members is biased. Second, the relevant sense of “norm” is also not the sense which is in play in the academic literatures on “social norms” or “gender norms,” according to which norms are, roughly, a matter of a given society’s unwritten rules, conventions, or expectations about how people should behave. In that sense of the term, it’s a norm in many societies that women as opposed to men will do a majority of the housework and child-rearing. However, even if they belong to a society in which that norm holds sway, it doesn’t follow that a husband-and-wife pair who divides up the housework and child-rearing tasks evenly is biased. On the contrary, their egalitarian division of labor might very well reflect a lack of bias about such matters. Similarly, any pathbreaking artist, trendsetter, or iconoclast will systematically depart from prevailing norms in the sense of common expectations about behavior, but they are not biased on that account.

Contrast the sense of “norm” in which many of us think that it’s a genuine norm that people should be treated with respect. In this sense of the term, if it’s a genuine moral norm that people should be treated with respect, then we have good reason to treat them in this way, and we should. In this sense, the truth of the claim that treating people with respect is a genuine norm doesn’t depend on how often people are treated in this way in practice (even if it’s common for people to be treated with disrespect, that has no tendency to show that it’s not a genuine norm), or on how it fits with the unwritten rules and social expectations that prevail in a given society. (Even in a society in which people are expected to show disrespect to members of certain groups, this is consistent with the idea that those members should be treated with respect.) According to the norm-theoretic account of bias, when someone is properly subject to criticism as biased, this involves their systematically departing from a genuine norm in this sense.7

Although genuine moral norms provide a paradigm of the relevant kind of norms, they are a species of a more inclusive genus. As noted above, many economists hold that maximizing expected value from the agent’s point of view is the norm of rational action. If so, then one can count as biased by departing from that norm in certain systematic ways.
Here, the relevant norm is, at least in the first instance, a norm of practical rationality as opposed to morality. More generally, there is an impressive diversity of norms relative to which someone or something
might count as biased, including moral norms, norms of justice, norms of accuracy, and norms of both theoretical and practical rationality. (For discussion of these, see Chapter 3, §1, “The Diversity of Norms.”)

Although the core idea that biases involve systematic departures from genuine norms might seem familiar, at least in some of its instantiations, I argue that when it is developed in the most plausible way it has far-reaching and radical implications. For example, I argue that both morality and rationality sometimes require us to be biased, in the pejorative sense of “biased” (Chapter 3, §4). The norm-theoretic account also has radical implications for the way we should view other people and our relationships to them. For example, in many cases of disagreement, we are rationally required to view those who disagree with us as biased, even if we know absolutely nothing about how they arrived at their views or about their psychologies, beyond the mere fact that they disagree with us in the way that they do. Appreciating why this is so allows us to offer a compelling explanation of a theoretically interesting and practically important empirical phenomenon: the fact that accusations of bias often inspire not only denials but also countercharges of bias, to the effect that the original accusation of bias is due to the fact that those who make it are themselves biased (Chapter 3, §3).

Another set of applications of the norm-theoretic account concerns the so-called “bias blind spot,” or our tendency to see bias in other people in ways that we fail to see it in ourselves. The bias blind spot has been extensively documented and discussed by social psychologists in a fascinating body of research.8 I argue that, when developed in its most plausible form, the norm-theoretic account offers a compelling explanation of the bias blind spot, an explanation that improves upon the kinds of hypotheses favored by the psychologists (Chapter 4, §§4 and 5). This explanation depends on what I call the perspectival character of bias attributions. As I understand it, the perspectival character of bias attributions has both a psychological aspect and a rational aspect. As a psychological matter, our views about a topic—for example, our views about politics—naturally influence our higher-order judgments about who is biased about that topic (and vice versa), in predictable and familiar ways. But crucially, what holds as a matter of psychology holds with respect to rationality as well. Thus, our first-order views about a topic rationally constrain and influence our higher-order judgments about who is biased about it (including our higher-order judgments about ourselves), in systematic ways.

Moreover, the perspectival account of bias attributions that I develop also provides insight into another notorious and well-documented psychological phenomenon: the fact that introspection is a highly unreliable way of recognizing one’s own biases (Chapter 4, §§3 and 4). I argue that the unreliability of introspection as a means for detecting our own biases is not a contingent fact of our psychology, but rather something that holds of necessity. To put it provocatively, if a bit too crudely: on the account that I offer, even God could not have made us creatures who reliably detect their own biases by introspection.

In typical cases in which someone counts as biased because they systematically depart from a norm, the norm from which they depart is not itself about bias.
For example, norms like treat other people with the respect to which they are entitled, or maximize expected value, don’t say anything about bias, and they can be grossly violated by a perfectly unbiased
person no less than by a person who is biased. However, some norms are specifically concerned with bias, in one way or another. For example, a norm might tell us to correct for a bias, or to take steps to prevent its manifestation, or to be objective in certain ways. I call these norms of objectivity, and I explore them in detail in Chapter 6. There I distinguish between three fundamentally different kinds of norms of objectivity—norms of preemption, norms of remediation, and constitutive norms of objectivity—and I explore their characteristic features and the ways in which they interact with each other and with norms of other kinds (Chapter 6, §§1 and 2). I also offer an account of a venerable intellectual ideal closely associated with objectivity, the ideal of “following the argument wherever it leads” (Chapter 6, §3).

As some of the applications mentioned above suggest, the norm-theoretic account of bias has implications not only for questions about what bias is but also for related but distinct questions about the circumstances in which we attribute bias. Although that topic recurs throughout the book, it is explored most thoroughly in Chapter 7, “Symmetry and Bias Attributions.” Among the issues that I explore there is the pervasive role of symmetry considerations in guiding our thinking about bias. I also address what I take to be an important feature of our use of the term “bias”: namely, that while many uses of the term in both the sciences and in everyday life presuppose or imply that being biased is at least in some respect a negative thing, it is also true that people often use the term in both contexts in ways that don’t presuppose or imply this. I argue that, far from being a relatively uninteresting case of simple ambiguity, this is actually a significant fact about the way in which bias attributions work, and I offer an account of our practices that makes sense of it. The account developed there also proves useful in answering a number of objections to the norm-theoretic account, including objections that proceed from the facts that (1) we sometimes attribute bias even though we don’t think that any genuine norm has been violated, and (2) conversely, there are some systematic departures from genuine norms that we would not ordinarily count as biases.

As noted above, although the norm-theoretic account plays a prominent role in the book, much of what follows is independent of it. A major theme—and the starting point of Chapter 1—is the sheer diversity of things to which we apply the labels “biased” and “unbiased.” Thus, we routinely describe people as biased, as well as many of their mental states (e.g. their beliefs or perceptions); their actions, including their linguistic behavior and its products (e.g. the testimony that they offer); and the larger groups to which they belong. (For example, just as we might describe an individual Supreme Court judge as biased, so too we might attribute bias to the Supreme Court as a whole.) In addition, we also attribute bias to some inanimate objects (e.g. dice and coins), many temporally extended processes (as in “the biased admissions process”), as well as to the outcomes of those processes (as in “the biased admissions decision”) and much else besides.

How are these things related to one another? Are some of them more fundamental than others? If so, which? For example, imagine that a biased judge arrives at a biased verdict by reasoning in a biased way.
Here we have at least three different things that can accurately be described as biased: the verdict, the process by which it is reached, and the judge himself. Assuming the obvious point that it’s not a coincidence that all three things count as biased, what exactly is
their relationship? Does the verdict count as biased because it is produced by a biased reasoning process, or does the process count as biased because it produces biased verdicts? And what is it, exactly, that makes the judge himself count as biased—the fact that he arrived at a biased verdict, or that he reasoned in a biased way, or both, or something else entirely? I explore questions of this general kind in Chapter 2, “Pluralism and Priority.” Among other conclusions, I argue that unbiased processes sometimes produce biased outcomes, and that biased processes sometimes produce unbiased outcomes.

The general picture that I endorse there is a kind of robust pluralism about bias. The view is pluralistic in two different respects. First, a wide variety of different things (including things that belong to fundamentally different categories) are genuinely biased. The fact that in the course of everyday life we attribute bias more or less all over the place is not a result of overly casual speech or sloppy thought on our part. Rather, it reflects something deep about the nature of bias itself. (This is the “pluralism” in “robust pluralism.”) Second, not only are many radically different kinds of things genuinely biased, but no one of these is always fundamental or foundational compared to the rest. Rather, among the many things that can be biased, different types are fundamental in different contexts. This is the sense in which the pluralism that I endorse is robust.

Broadly conceptual questions about the nature of bias of this general sort are pursued throughout the book. Can one be biased in favor of something without being biased against anything, or vice versa? Or do “positive biases” and “negative biases” always come in complementary pairs (Chapter 1, §3)? What is the relationship between a whole’s being biased and bias among its parts? For example, what is the relationship between a group of people having or lacking a certain bias, and the individual people who make up the group having or lacking that bias? I argue that although a group’s being biased often depends upon (and is explained by) a critical mass of its members having that bias or some relevantly similar one, there is no necessity in either direction: a whole might be biased even if all of its parts or members are unbiased, and conversely, a whole can be unbiased even if its parts are biased (Chapter 1, §6). I explain how even a news organization that consistently offers perfectly unbiased coverage of every event that it covers might nevertheless be biased, and I consider the significance of that possibility.

In addition, I argue that whether a person is biased about a given issue is sometimes a relative matter (Chapter 1, §2); that biases of people are best understood as multiply realizable dispositions (Chapter 5, §1); and that two people might share the same bias—for example, the same racial bias—even if they share nothing else in common, in terms of their mental states, their behavior, or their dispositions (Chapter 5, §3).
I also explore a number of striking facts about bias, including the fact that a severely biased person might be highly reliable if their bias dovetails with their environment in the right way (Chapter 9, §§3 and 4); that one common way of ending up biased is by trying not to be, as in cases of overcompensation (Chapter 5, §5); that even an account of a person or historical event that is wholly true and known to be so might nevertheless be biased (Chapter 1, §5); and that claims of bias often presuppose substantive and potentially controversial evaluative or normative claims about how it’s appropriate to think or act, claims that are not themselves about bias (Chapter 1, §5, Chapter 3, §3, Chapter 5, §2).
In addition to work by other philosophers, this book draws on research by psychologists, economists, political scientists, historians, and theorists of law. It aspires to be of interest to a relatively broad audience. Nevertheless, in various respects it is very much a work of philosophy and reflects the fact that it was written by an analytic philosopher, for better or for worse—in its choice of topics, its manner of addressing them, and the style in which it is written. Moreover, even when compared to other works of analytic philosophy, the questions pursued here reflect my own biases—or at least, my professional training and knowledge base.

One of the fascinating things about bias as a topic of philosophical reflection is the way that it cuts across so many of the various subfields of the subject, including the philosophy of mind and psychology, social philosophy (especially feminist philosophy and the philosophy of race), the general theory of knowledge, and moral and political philosophy, as well as others. Although the book addresses issues that are relevant to each of these subfields, a relatively large portion of it is devoted to questions that belong to the theory of knowledge, in addition to the kinds of quite general metaphysical and conceptual questions about bias that are center stage for much of it. In particular, Part III, “Bias and Knowledge” is devoted to exploring connections between bias and various topics that belong to the theory of knowledge, including skepticism, the underdetermination of theory by evidence, cognitive reliability, the epistemic significance of disagreement, as well as the connections between bias and knowledge itself.

Among other conclusions, I defend the possibility of what I call “biased knowing” (Chapter 8, §1), and I argue that its possibility has significant implications both for philosophical methodology and for skepticism (Chapter 9, §§1 and 2). I explore the commonsense idea that a belief is not knowledge if its formation is the manifestation of a bias (Chapter 8, §2). I also take up the seemingly incompatible idea that, far from excluding knowledge, biases are actually essential to knowing, and are in fact deeply implicated in paradigmatic instances of knowledge acquisition and exemplary human reasoning, an idea that might be plausibly taken to be a central lesson of some of the most interesting science and philosophy of the last seventy years (Chapter 8, §3). I argue that, properly interpreted, there is a sense in which both of these ideas are true, and I show how they can be consistently reconciled. (See Chapter 8, §4 and especially §5, “How and when bias excludes knowledge: a proposal.”)

In Chapter 10, “Bias Attributions and the Epistemology of Disagreement,” I devote extended attention to an extremely common but also extremely suspicious habit, our tendency to attribute bias to well-credentialed and seemingly formidable people on “the other side” of controversial issues, in a way that serves to minimize the perceived pressure to reconsider or revise our own views about those issues. I propose that both the presence of bias and its absence are subject to strong “the rich get richer, and the poor get poorer” effects, as initially biased believers tend to fall into cognitively vicious cycles while initially unbiased believers tend to benefit from cognitively virtuous ones.
This focus on broadly epistemological questions about bias, particularly in the later parts of the book, is not based on a conviction that these questions are more important than any number of other philosophical questions about bias. Indeed, I do not even claim that they are as important as some other philosophical questions about bias that are not pursued here. (Still less do I claim that they are as important as many of the questions about bias that are
typically pursued outside of philosophy, e.g. questions about how best to reduce or eliminate some pernicious bias in certain concrete contexts.) But they seemed like the issues that I was best prepared to address given my philosophical background and knowledge base, and the ones about which I was most likely to say something worthy of the consideration of others.

At least compared to the amount of attention that bias has received from psychologists, it’s fair to say, I think, that the topic has been relatively underexplored by philosophers. In recent years, that has begun to change. Much of this recent philosophical work concerns the theoretically interesting and practically important topic of implicit bias.9 Notwithstanding the interest and importance of that topic, this book is not concerned with implicit bias in particular as opposed to the more general phenomenon of which it is a species. (Although as that suggests, I would regard it as a good objection to the claims made here if they failed to apply to the phenomenon of implicit bias.) Inasmuch as a good number of the issues that I pursue have not been directly addressed in the philosophical literature, one aspiration of the current work is to expand the philosophical playing field and to put more questions into play, if only by proposing inadequate answers to them that can then be criticized and improved upon by others.
3. Apology

John Cook Wilson, the founder of a philosophical movement known as “Oxford Realism,” held one of the most prestigious professorships at Oxford University from 1889 until his death in 1915. Although it is not among the views for which he is remembered today, Cook Wilson thought that he had identified an important bias that amounts to a kind of occupational hazard for authors who publish their work. In the century since his death, psychologists and others have conducted countless studies of an ever-increasing list of biases. Despite this, I do not believe that the specific bias which concerned Cook Wilson has ever been systematically investigated by anyone. In fact, subsequent history has paid so little attention to this bias that it does not, as far as I know, even have a name.

Cook Wilson’s idea was that, once authors commit to a view in print, they would, more often than not, be disposed to continue to believe the view and defend it even if later investigation turned up good reasons for thinking that it is mistaken. They would thus be inclined to engage in fruitless and unproductive exchanges with their critics, even when their critics were right. By contrast, an author who has not yet published their view will be at least relatively open to changing their mind in response to compelling objections. (Contrast the well-known phenomenon of confirmation bias. In the case of confirmation bias, the important distinction is between views that the believer already holds and views that they do not already hold. In the case of Cook Wilson’s bias, the important distinction is between views that one holds but has not yet published and views that one has already published.)

In part because of this concern, Cook Wilson published little philosophy during his lifetime. Instead, he privately circulated pamphlets containing the latest statements of his views among his colleagues and students, pamphlets that he was constantly revising. Only
after many years did he sit down to finally compose his magnum opus. Unfortunately, by then he had only a short time left to live, and he never came close to finishing the task. It was not until eleven years after his death that the two-volume treatise that bears his name, Statement and Inference, finally appeared. Still, it is clear that this treatise is very far from the book that Cook Wilson would have written had he lived: it was cobbled together by a colleague from Cook Wilson’s lecture notes, the circulated pamphlets, and some letters that he had left behind.

I have no reason to think that I am immune to the scholarly bias that so worried Cook Wilson. Like him, I have no appetite for engaging in unproductive exchanges with critics. On the other hand, I also have a very strong preference for this book to appear while I am still alive as opposed to dead. (And even if I did not have that preference, I have no disciples to whom its posthumous publication could be entrusted.) Given these considerations, it seemed on balance to make sense to proceed and place the present volume before the public, including potential critics. I thereby run the risk of losing a level of objectivity about its weaknesses that I might otherwise have retained, and of being drawn into unproductive exchanges in which I attempt to defend the indefensible. I apologize in advance for any annoyance or frustration that might be caused in these ways.
1 “The virtuous man can see a question from all sides without bias. The small man is biased and can see a question only from one side.” The Analects, 2.14 (Confucius 1979).

2 As is often the case when it comes to the development of the English language, Shakespeare seems to have been a key figure in the relevant history. He uses the word “bias” eleven times in eight of his plays. In some cases, the word is used in talking about the game of bowls. But in other cases, his characters use it figuratively, extending the concept familiar from the sport in order to describe their own situations, and in a way that’s recognizably akin to common contemporary uses.

3 The proposals are summarized in the eleventh and final chapter of the book, “Main Themes and Conclusions.”

4 For example, among psychologists, compare the characterization offered in Baron (2012: 1).

5 As the statistics example suggests, the sense of “systematic” that matters for the norm-theoretic account is the sense which contrasts with “random,” as opposed to the sense that means “done or acting according to a fixed plan or system.” Notoriously, many biases operate in ways that don’t involve anyone’s acting in accordance with a fixed plan or system. For an intellectually engaging account of the contrast between bias and random error or “noise,” and an attempt to redress the usual imbalance of attention towards the former at the expense of the latter, see Kahneman, Sibony, and Sunstein (2021).

6 See, e.g., Brennan et al. (2013: 2–4). Compare Wedgwood (2018: 23–4).

7 The relevant sense of “norm” is thus the same as that which figures in “normative economics” (as opposed to “positive economics”), “normative jurisprudence,” or, in philosophy, “normative ethics” and “normative epistemology.” Even once statistical norms and social norms are set aside for current purposes, one might still think that any attempt to understand bias in terms of systematic norm violations will inevitably yield an overly inclusive account, one that incorrectly attributes bias even where none exists. For example, a serial killer who consistently employs a particular method of execution violates a genuine moral norm against harm in a way that is non-random, but it doesn’t follow that he’s biased. (Imagine that he selects his victims without regard to their race, ethnicity, gender, etc.) Conversely, one might also think that any such account will also be underinclusive and fail to count some genuine cases of bias as such. For example, we frequently attribute bias to inanimate objects such as coins and dice, and a coin that counts as biased in virtue of being disposed to land heads rather than tails seems to have something important in common with a judge who counts as biased in virtue of being antecedently disposed to rule in favor of the prosecution rather than the defense. However, coins are not subject to genuine norms, in anything like the way judges (or more generally, people) are subject to genuine norms. For a discussion of such considerations and how they can be accommodated within the kind of approach explored here, see especially Chapter 7.

8 See especially Pronin, Lin, and Ross (2002); Pronin, Gilovich, and Ross (2004); Ehrlinger, Gilovich, and Ross (2005); Armor (1999); and Ross, Ehrlinger, and Gilovich (2016), as well as the further references provided in Chapter 4.

9 For a sampling of the literature, see Beeghly and Madva (2020), Johnson (2021), Mandelbaum (2016), Gendler (2011), Schwitzgebel (2010); the useful surveys by Brownstein (2017), Holroyd (2017), Kelly and Roedder (2008), and Holroyd, Scaife, and Stafford (2017); and especially, the two-volume collection edited by Brownstein and Saul (2016). The point of departure for this philosophical literature is the explosion of empirical research on implicit bias in recent decades within social psychology. See, e.g., the landmark study by Greenwald and Banaji (1995) and, for a popular overview, Greenwald and Banaji (2013).
PART I
CONCEPTUAL FUNDAMENTALS
1
Diversity, Relativity, Etc.

1. Diversity

Let’s start with a very general question: What types of things can be biased or unbiased? Clearly, not everything is of the right type. The number 17 has various properties, such as the property of being odd as opposed to even. But it doesn’t seem to make much sense to claim that the number 17 is biased or unbiased, and the same seems true of every other number. Similarly, most of the things that natural scientists theorize about, ranging from the very large (e.g. planets and universes) to the very small (e.g. subatomic particles) aren’t the kinds of things that we would naturally describe as either biased or unbiased. Thus, the mere fact that something lacks the property of being biased doesn’t mean that it’s unbiased, in the sense in which “unbiased” is more or less synonymous with either “objective” (if what’s under consideration are, e.g. judges) or with “fair” (if what’s under consideration are coins).

Biased and unbiased are contraries, not contradictories. Although the fact that something is unbiased entails that it’s not biased, the fact that it’s not biased does not entail that it’s unbiased. (My office desk chair lacks the property of being biased, and is therefore not biased, but it’s not unbiased.) Roughly and to a first approximation: something is unbiased if and only if it’s the sort of thing that might have been biased, but isn’t.1

Nevertheless, although many things are neither biased nor unbiased, it’s striking how many things do routinely get described in these terms. It will be helpful to briefly survey some of the more important categories:

• We frequently predicate bias of people or particular individuals. (We say: “He’s biased” or “She’s biased” with respect to some issue or cluster of issues.)
• We predicate bias of individuals in their social roles, as in “the biased judge.”
• Similarly, we attribute bias to groups or collections of people (e.g. “the biased committee.”)
• We attribute bias to inanimate objects (e.g. “the biased dice,” or “the biased coin”).
• We often predicate bias of things that can play the role of evidence. For example, we talk about biased samples, biased testimony, biased surveys, biased data, biased information, biased studies, biased research, and biased tests.
• Relatedly, we attribute bias to sources of information or putative information (e.g.
“Fox News has a conservative bias,” “MSNBC has a liberal bias.”)
• We sometimes attribute bias to temporally extended processes, practices, and procedures, as in “a biased admissions process” or “a biased job search.”
• An especially important category for both philosophers and psychologists consists of mental states: we regularly speak of biased perceptions, biased beliefs, biased judgments, biased opinions, and so on.
• Overlapping with the previous category, we frequently attribute bias to the outcomes of deliberative processes: thus, we talk about biased verdicts, biased decisions, biased evaluations, biased grades, and so on.
• We often predicate bias of broadly linguistic phenomena, as when we speak of biased narratives, biased texts, biased labels, biased descriptions, biased testimony, biased interpretations, biased presentations, biased discussions, biased accounts, and biased reports.
• Recently, there has been significant discussion of biased algorithms.2
This list is far from a complete inventory.3 Nevertheless, it suffices to make the point that many different things can be biased, at least if we take ordinary thought and talk at anything like face value.

Generally speaking, to describe a person or thing as biased is not to make a neutral statement about that person or thing. For example, in any ordinary context, the claim that a particular judge is biased would naturally be understood as a criticism, and if the claim were made in the judge’s presence, we would naturally expect them to deny it. Similarly, when someone claims that a particular interpretation of an historical event or a text is biased, we naturally take them to be disputing that interpretation; someone who claims that a scientific study or political poll is biased is naturally understood as suggesting that its putative findings shouldn’t be accepted at face value, and so on.

However, although attributions of bias often involve a negative judgment about a person or thing, this is not always the case. In fact, in both everyday life and in the sciences, it’s also common for people to attribute bias where no negative judgment is intended; in such cases, the attribution functions much more like a neutral description. Consider a couple of examples of this.

When cognitive scientists construct models of human cognition in order to study how human beings manage to learn about the world, they sometimes speak of our “inductive biases.”4 In such contexts, there is no suggestion that inductive biases are a bad thing, or that it would be better if we didn’t have them. On the contrary, it’s assumed that such biases play an indispensable role in human reasoning at its best, and that without them, learning from experience would be impossible.5

Compare the following remark from Tyler Burge:

[T]here is a methodological bias in favor of taking natural discourse literally, other things being equal. For example, unless there are clear reasons for construing discourse as ambiguous, elliptical, or involving special idioms, we should not so construe it. (1979/2007:116)
Even once it’s acknowledged that many uses of “bias” are not intended to convey a negative assessment, one might still have expected that, when the term is used in the context of a discussion of proper methodology (as when one speaks of “methodological biases”), it would carry at least a negative connotation. But as the quotation from Burge shows, even that much isn’t true. Thus, in some cases, talk of “bias” involves a negative assessment while in other cases it doesn’t.6 As is sometimes noted, this is true even when what is at issue are uses of the term “bias” in the same academic discipline,7 or within the same scholarly literature.8 Given this, I regard it as a good question—and an open one—how much the two cases have in common. Is it simply a coincidence that we use the same word “bias” to do quite different jobs, in anything like the way it’s simply a coincidence that we use the same English word “bank” to talk about both financial institutions and river banks? For reasons to be explored later, I think that that suggestion is implausible. On the other hand, if, as I believe, the two uses have something important in common, then a good theory of bias, or a good theory of our practices of attributing bias, should make clear what that is. I offer my own account of this in Chapter 7.

That having been said, as my list of examples suggests, in this book I’ll be primarily concerned with bias in the non-neutral, evaluative sense (or in its non-neutral, evaluative uses), and the norm-theoretic account of bias that I offer in Chapter 3 is intended as an account of bias in this sense. As the list also suggests, even with that narrowing of focus, the sheer range and diversity of things to which we attribute bias is striking.

In the rest of this chapter, I explore a number of basic structural features of bias that should, I believe, be recognized independently of a commitment to any specific theory of bias. In §2, I explore and defend the idea that whether a person or thing counts as biased can be a relative matter. In §3, I explore the fact that biases characteristically have a direction, and the related fact that they typically come in complementary pairs (a bias in favor of something typically comes paired with a bias against some salient alternative). §4 and §5 offer preliminary discussions of the important phenomena of higher-order bias (or bias about bias itself) and biased representation. Finally, §6 takes up some of the metaphysical and conceptual questions that arise from the fact that both wholes (e.g. groups of people) and their component parts (e.g. the individual people who make up the group) can be biased or unbiased. Each of these discussions is self-contained and can be read independently of the others.
2. Relativity

Often, a person or thing might count as biased under one description but as unbiased under another. Relatedly, whether they count as biased or unbiased might very well depend upon our interests and purposes in raising the question of whether they’re biased in the first place, or on which question we seek to answer by consulting them. Imagine a presidential approval poll in which people are asked whether they currently approve or disapprove of the job the American president is doing, but in which those conducting the poll deliberately only survey
residents of the state of Louisiana. If the poll is an attempt to gauge the President’s level of support among Americans, then the poll counts as biased—indeed, both the sample of voters, and the procedure used to generate that sample, count as paradigms of biased samples and biased sampling procedures, respectively. On the other hand, if the question at issue is the President’s support among residents of Louisiana, then the identical poll, sample, and sampling procedure might all be unbiased, and exemplary in other respects as well. The same point holds for texts. An account of social life in mid-Victorian England that dwelled on the characteristic moral shortcomings of that era while omitting any mention of its more positive features might justly be accused of bias. On the other hand, the same charge would be misplaced when directed at the same text if it purported to be an account of the limited opportunities available to women at the time, although such an account might still count as biased against the Victorians for other reasons (e.g. if it failed to provide a certain kind of historical context, or suggested that that moral failing was in some way unique to Victorian society). It’s obvious enough that whether someone or something counts as biased might depend on what question is at issue. But whether something counts as biased can be relative in more interesting and surprising ways as well. In particular, even when we hold the question or issue fixed, whether a person or thing counts as biased might still be a relative matter. Consider, for example, the following case:

ALREADY CONVINCED: Before a criminal trial begins, a person announces that she’s already firmly convinced that the defendant committed the crime of which he stands accused.
If the person is in the pool of candidates to serve on the jury for the trial, she will be rightly excused from jury duty on the grounds that she doesn’t qualify as an unbiased juror. Similarly, if she’s a judge and would otherwise be in line to preside over the trial, the trial will be assigned to one of her colleagues: given that she’s already made up her mind, she doesn’t qualify as an unbiased judge. However, suppose that the reason why she’s firmly convinced that the accused committed the crime is this: she personally witnessed him commit the crime, and thus knows that he did. Let’s stipulate that the witness saw the event under more or less ideal viewing conditions, and that she was not in any way a biased observer or perceiver: if someone else had committed the crime rather than the actual defendant, then the witness would believe that that person, and not the actual defendant, committed the crime, and so on. Although such a person wouldn’t qualify as an unbiased juror or judge, she would count as an unbiased witness. If the prosecution calls her as a witness, and the defense attorney objects on the grounds that she’s biased against his client, it’s the defense attorney who speaks falsely. (Contrast a case in which she is a biased witness, who mistakenly believes that she saw the accused commit the crime, but where that mistaken belief is a product of racial or ethnic animus.) If, notwithstanding her prior belief in the defendant’s guilt, the protagonist in ALREADY CONVINCED somehow did make it on to the jury or serve as the judge, the later discovery of this fact would constitute a compelling basis for the appeal of a guilty verdict on the grounds that the defendant didn’t receive a fair trial: the judge or a member of the jury was biased against him. In contrast, the fact that a witness was firmly convinced before the trial
begins that the defendant committed the act of which he’s accused provides no basis at all for an appeal on the grounds of bias. Indeed, in the case of the witness, things seem to run in the opposite direction: if it’s later revealed that, on the eve of the trial, the witness lacked the belief that the defendant committed the act of which he’s accused, and only arrived at that belief over the course of the trial, then that would seem to undermine the credibility of her allegedly eyewitness testimony, and cast severe doubt on the claim that she’s an unbiased witness. With respect to the issue that’s before the court, the same person (with the same mental states, and so on) would count as biased if they were playing the role of judge or juror but as unbiased if they were playing the role of witness. More generally, whether a person counts as biased or unbiased with respect to an issue might be relative to which social role they occupy.
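To put the question-relativity of the poll example in statistical terms, here is a minimal sketch in Python; the approval rates, sample size, and number of polls are all invented purely for illustration. It makes concrete the point that one and the same sampling procedure counts as biased relative to one question, since its estimates systematically miss the national approval rate, and unbiased relative to another, since on average they center on the Louisiana rate.

import random
random.seed(0)

NATIONAL_APPROVAL = 0.50    # hypothetical true rate among all Americans
LOUISIANA_APPROVAL = 0.35   # hypothetical true rate among Louisianans

def louisiana_only_poll(n=1000):
    """The procedure described above: survey only residents of Louisiana."""
    approvals = sum(random.random() < LOUISIANA_APPROVAL for _ in range(n))
    return approvals / n

estimates = [louisiana_only_poll() for _ in range(200)]
average_estimate = sum(estimates) / len(estimates)

# Biased relative to the question of support among Americans: the long-run
# average of the estimates misses the national rate by a wide margin.
print(round(average_estimate - NATIONAL_APPROVAL, 2))   # roughly -0.15

# Unbiased relative to the question of support among Louisianans: the very
# same estimates center on the Louisiana rate.
print(round(average_estimate - LOUISIANA_APPROVAL, 2))  # roughly 0.00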
3. Directionality

We often say things of the form “So-and-so is biased.” However, such statements should generally be understood as elliptical. Unless the context already makes their answers clear, the bare, unadorned claim that “So-and-so is biased” naturally invites the questions: “Against whom or what?” or “In favor of whom or what?” Something that’s biased isn’t biased simpliciter or tout court. Rather, it’s biased in favor of x, or against the ys, or… Any bias has a direction or valence. Given that everything that’s biased is either biased in favor of something or biased against something else, is everything that’s biased both? Or is it possible to be biased in favor of something without being biased against anything (or vice versa)? Consider the simple paradigm of the biased coin. Any coin that’s biased in favor of heads is ipso facto biased against tails, and vice versa. (In that case, the property of being biased in favor of heads and the property of being biased against tails are—literally, for once!—two sides of the same coin.) Does what’s clearly true of biased coins hold for bias in general? Consider the following case:

DESPISED CANDIDATE: At a given time, fifteen candidates are vying for their party’s presidential nomination. A news network is biased against one of the candidates because of her staunchly anti-interventionist foreign policy views, but it’s scrupulously neutral among the other fourteen.
Is this a case in which the network has a bias against someone without a corresponding bias in favor of someone else? In a context in which what’s salient is a one-to-one comparison between the anti-interventionist candidate and any one of the other fourteen, the network counts both as biased against the anti-interventionist and also as biased in favor of the candidate with whom she’s being compared; here the negative bias or bias-against is paired with a corresponding positive bias, or bias-in-favor. What about a context in which what’s salient is the entire field? In that context, it will still be correct to describe the network as biased against the anti-interventionist, but at best misleading to describe it as biased in favor of any of the other fourteen, for that seems to overstate its support for any of the others.
However, even in the more inclusive context, the negative bias against the anti-interventionist carries in its wake a corresponding positive bias in favor of another entity, namely, the rest of the field. It might seem, then, that whenever it’s correct to describe a person or thing as biased against something, it will also be correct to describe them as biased in favor of something else (and vice versa), even if the “something else” is simply the complement of the relevant set. However, we should not generalize too quickly from examples like that of the biased coin or DESPISED CANDIDATE. On any given flip, the coin will land either heads or tails but not both; its landing heads precludes its landing tails, and vice versa. Similarly, a contested presidential primary is a competitive context in which a scarce good—the party’s presidential nomination—is being allocated; any one of the candidates’ receiving the nomination precludes any of the other candidates receiving it. We should also test the hypothesis that positive and negative biases always come in pairs in cases in which this feature is absent. For this purpose, consider the following case:

OPEN-ENDED ADMISSIONS: An admissions process is open-ended, in the sense that there is not a finite number of spots. Applicants for admission are thus not in competition with one another, and any applicant who satisfies the relevant standard for admission is automatically admitted, regardless of how many others already have been admitted under that standard. Approximately 2 percent of the applicants have feature F. The admissions process is biased in their favor: they are either automatically admitted, or else the standard for admission that they must meet is significantly lower than the usual standard that applies to the other 98 percent. However, the fact that there are some Fs in the pool who get admitted because the standard that they must satisfy is lower in no way adversely affects the chances that any particular not-F will be admitted. Not only does the admission of some Fs by way of the lower standard not reduce the spots open to any of the other applicants, but the fact that the Fs are favored hasn’t in any way altered the probability that any other applicant will be admitted, inasmuch as if there hadn’t been any Fs in the pool, the standards that not-Fs would need to satisfy in order to be admitted would be exactly as they are as things actually stand. (Although it’s true of at least some not-Fs who are rejected that: if they had been F rather than not-F, then they would have been accepted rather than rejected, since they then would have been evaluated according to a lower standard than the one that they failed to meet.)
Is OPEN-ENDED ADMISSIONS a case where we see a bias-in-favor without any corresponding bias-against? In presenting the example to students and colleagues, I’ve found that people’s intuitive judgments about it are quite mixed. Some think (and sometimes, adamantly insist) not only that the process is biased in favor of Fs but also that it’s biased against not-Fs, while others deny this, and still others confess to having no clear intuitions about the example. It’s unsurprising that the case inspires mixed reactions. Notice that even the question “Does the fact that the process has a pro-F bias ever explain why a particular person who isn’t F got rejected?” will elicit mixed responses, depending on how one thinks about explanation. On the one hand, it’s stipulated in the example that the fact that the process has a pro-F bias is probabilistically irrelevant to any not-F’s chances of admission: if there had been no pro-F bias, the applicant’s chances of admission would have been no higher. Thus, if one takes probabilistic relevance as necessary for explanation, the pro-F bias never explains why any not-F applicant fails to be admitted. On the other hand, consider some particular not-F who isn’t admitted, but who would have been if they had been F (since in that case they would have met the standard for admission). Notably, any such individual will satisfy the traditional “but for” criterion of discrimination as understood in American
discrimination law: they would have been admitted, “but for” the fact that they are not-F rather than F.9 Here I won’t attempt to resolve things one way or the other but simply note that the example seems to be something of a marginal case; a schematic sketch of the two competing criteria appears at the end of this section. Notice also that as soon as we alter the example so that Fs and not-Fs are competing for a finite, limited number of spots, it seems much more natural and straightforward to describe it as a case in which the process is biased in favor of Fs and therefore biased against not-Fs.10 Setting aside certain arguable cases such as OPEN-ENDED ADMISSIONS, negative biases and positive biases typically come paired with one another in a complementary package. However, in some cases it’s the positive bias which is fundamental, while in other cases it’s the negative bias which is fundamental. A father who nepotistically hires his own child for a job over another applicant whom he would otherwise have hired is both biased in favor of his child and biased against the other applicant. Here it’s the father’s positive bias in favor of his child that’s fundamental. Because of this, it’s the presence of the positive bias that explains the negative bias, and not vice versa. (It’s true to say that the father is biased against the other applicant because he’s biased in favor of his child, but false to say that the father is biased in favor of his child because he’s biased against the other applicant.) On the other hand, if he hires a white applicant over a black applicant because he wrongly believes that blacks make poor workers, then he’s both biased against the black applicant and biased in favor of the white applicant; but in this case, it’s the negative bias which is fundamental and which explains the positive bias.11 Suppose that a person is biased in favor of X and against some salient alternative Y, and that the bias manifests itself in a judgment about the relative merits of X and Y. In paradigmatic cases, the content of the judgment will itself favor X over Y. However, this is not always the case. For example, a bias in favor of X over Y might give rise to a judgment whose content favors neither X nor Y, as in the following example:

BIASED PARENT: A parent is biased when it comes to evaluating their child’s musical performances. When the child performs at a recital, the parent’s bias leads them to conclude that the performance was equal to the best student’s, whose performance was in fact objectively superior to all of the others and was recognized as such by the other parents.
Here the parent’s bias in favor of their own child manifests itself not in a judgment of superiority but in a judgment of equality or parity. Similarly, a bias against Y might manifest itself in a judgment or decision that doesn’t disfavor Y relative to salient alternatives. (A bias against the most talented performer might lead one to conclude that they performed on a par with the rest of the field, whereas in fact their performance was objectively superior.) In general, there is no safe inference from the fact that “So-and-so’s judgment/decision did not go against Y” to the conclusion that “So-and-so’s judgment/decision was not a manifestation of bias against Y.” Suppose that the performance of the biased parent’s child was in fact the worst of all, but the parent’s bias causes them to judge that it was on a par with the rest. Notice that the parent’s judgment that “my child’s performance was just as good as any of the other children’s”
counts as a clear case of a biased judgment, notwithstanding the fact that its content does not itself favor the object of the bias against any salient alternative with which it’s being compared. Indeed, a judgment might be biased in favor of X over Y even if its content favors Y over X, as when a parent’s bias leads them to conclude that their child’s performance was somewhat weaker than the best student’s, an assessment that greatly underestimates the actual difference between the two. We will return to these points and their significance below.
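Returning to OPEN-ENDED ADMISSIONS, here is the schematic sketch promised above. Every detail in it is an invented stand-in: the numerical standards, the score, and the function names. The sketch simply encodes the case’s two stipulations, and shows why they pull apart: on the probabilistic reading the pro-F policy leaves a not-F applicant’s chances untouched, while the “but for” counterfactual is nevertheless satisfied, which is one way of seeing why intuitions about the case divide.

USUAL_STANDARD = 80    # hypothetical standard applied to not-F applicants
LOWER_STANDARD = 50    # hypothetical lower standard applied to F applicants

def admitted(score, is_F, pro_F_policy=True):
    """Open-ended admissions: anyone who meets their standard is admitted."""
    threshold = LOWER_STANDARD if (is_F and pro_F_policy) else USUAL_STANDARD
    return score >= threshold

score = 65  # a not-F applicant whose score falls between the two standards

# Probabilistic irrelevance: the not-F applicant's fate is the same whether
# or not the pro-F policy is in force, so the policy leaves their chances
# of admission entirely untouched.
assert admitted(score, is_F=False, pro_F_policy=True) == admitted(score, is_F=False, pro_F_policy=False)

# The "but for" criterion: had the very same applicant been F rather than
# not-F, they would have been admitted under the lower standard.
print(admitted(score, is_F=False))  # False: rejected as things stand
print(admitted(score, is_F=True))   # True: admitted, "but for" being not-F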
4. Bias about Bias

Consider a judge who is biased against defendants of a certain race. Given the right background conditions, this bias will manifest itself in certain judgments that come at the expense of defendants of that race, and when this happens those judgments themselves count as biased: they are biased judgments, whether this is recognized by anyone or not. Nevertheless, although the judgments in question are biased, they will generally not be judgments about bias. Rather, the contents of the biased judgments might be that the defendant acted in a way that violated a certain law, or that the testimony that she offered on her own behalf is not credible (in cases in which an unbiased judge would have drawn the opposite conclusions). In contrast, consider a case in which the question before a judge is whether the defendant is guilty of a hate crime. Given standard definitions, a person is guilty of a hate crime only if the commission of the crime involved an element of bias. For example, the Federal Bureau of Investigation (FBI) defines a hate crime as “a criminal offense against a person or property motivated in whole or in part by an offender’s bias against a race, religion, disability, sexual orientation, ethnicity, gender, or gender identity.”12 Thus, given standard definitions, the judgment that someone is guilty of a hate crime is a judgment that is (in part) about bias. Of course, judgments about bias can themselves be either biased or unbiased. If, for example, a judge’s racial bias leads him to conclude that a given defendant is guilty of a hate crime (when an unbiased judge would not have reached that conclusion), then this is an example of a biased judgment about bias. In contrast, if a perfectly unbiased judge reaches the same conclusion, then her judgment is an unbiased judgment about bias. Most of the biases that have been catalogued by psychologists, as well as those that are familiar from everyday life, are first-order biases. But a person or group of people might also exhibit a higher-order bias. In particular, they might be biased when it comes to attributing bias itself: as in the case of a judge who is biased in his verdicts about which defendants are guilty of hate crimes, a person’s judgments about who or what things are biased, or the extent to which those people or things are biased, might themselves exhibit some bias. For example, I might be biased about which media sources are biased. Or I might be disposed to think that the political opinions of anyone who disagrees with me must be due to the distorting influence of bias while assuming that my own political opinions are the deliverances of pure rational reflection on the facts, undistorted by any such contaminating influences. In fact, social psychologists have gathered much evidence that higher-order biases such as
these are pervasive. A particularly notable and theoretically interesting example is “the bias blind spot,” or the apparently widespread tendency to assume that one’s own judgments are less susceptible to bias than the judgments of others.13 Interestingly, the social psychologist Lee Jussim (2012) has argued at length that social psychologists are themselves biased in favor of attributing biases to ordinary people and exaggerating the extent to which ordinary people are biased. If so, then this would be another instance of higher-order bias. Consider also “media watchdog” organizations that purport to monitor the media for political bias. Some of these organizations are on the political right, while others are on the political left.14 Given the political orientations of these organizations and the people who financially support them, it would be unsurprising if their attributions of media bias themselves exhibit certain characteristic biases. Or consider organizations whose announced mission includes detecting and publicizing bias against some racial, ethnic, or religious group: for example, the Anti-Defamation League (ADL), or the Council on American-Islamic Relations (CAIR), or the Catholic League. For any given group, questions can be asked about whether it’s biased in attributing the relevant kind of bias. Of course, since any group, like any person, is fallible in attributing bias, the mere fact that there are cases in which the group gets things wrong doesn’t show that it’s biased. But we can ask whether these mistakes exhibit any pattern that would indicate a bias. For example, is the group more likely to go wrong by incorrectly attributing bias where none is present than to go wrong by refraining from attributing bias in a case in which such attribution is warranted? Or do its attributions of ethnic/religious/racial bias display some political bias? (For example, is an organization devoted to identifying instances of antisemitism more likely to call out antisemitism when it comes from figures on the political right than from those on the political left, or vice versa?) On the one hand, any such organization will have an interest in preserving its credibility and reputation over time, and this might provide an incentive to avoid attributing bias where it doesn’t in fact exist, something that might tend to support a relatively conservative policy when it comes to cases that are less than clear-cut. On the other hand, donors to the organization might be displeased if genuine cases of bias go unrecognized or unreported, or else might lose interest in the cause if relatively few cases are identified; and this might create pressures in the opposite direction. It is an empirical question how well any given organization responds to and balances such competing forces, and whether it falls short in a way that indicates a bias. Of course, CAIR will be particularly concerned with possible instances of anti-Muslim bias as opposed to possible instances of anti-Catholic bias, while the Catholic League will be particularly concerned with the latter as opposed to the former. Does this fact alone show that the two organizations are biased, since they are selective in their attributions of bias based on religion? No, for neither organization purports to inform us about religious bias in general, as opposed to the more specific type of religious bias that is its characteristic concern.
By contrast, an organization that did purport to monitor and track religious bias more generally, but whose outputs were identical to those of either CAIR or the Catholic League, might correctly be described as biased on those grounds. This is an instance of the phenomenon of question- or issue-relativity discussed in §2.15
Suppose that I’m disposed to uncritically assume that other people’s opinions about politics are more likely to be biased than my own. This is a second-order bias: it’s a bias that concerns a first-order bias. More generally, the examples discussed in the psychological literature on the bias blind spot are cases of second-order bias. Can we go up more levels? Consider someone who is biased in their attributions of higher-order bias, or who is more likely to attribute it to certain types of people than to others. This would be someone who exhibits a third-order bias. Consider the following possible example. In discussions of the bias blind spot, it’s sometimes noted that people on the political right will frequently claim to detect a left-wing bias in news coverage, while those who are on the political left will frequently claim to detect a right-wing bias—even in response to the very same articles. This is sometimes treated as evidence that both sides exhibit a higher-order bias: they both tend to attribute political bias in a biased fashion. However, in any particular case, it isn’t a given that both sides are wrong in claiming to detect bias. Imagine a hypothetical case in which the story really is biased in one direction or the other, and this is something that one of the two sides (or at least, some people on one of the two sides) has correctly picked up on. If so, then a psychologist who treats things as being perfectly symmetrical between the two sides errs.16 In such circumstances, it’s possible that the source of the error is a third-order bias: the psychologist is biased in favor of finding people guilty of the relevant second-order bias (i.e. being biased in attributing bias to news coverage), which leads the psychologist to biasedly conclude that the experimental subjects are biased in their judgments of bias. In explaining why people go wrong or make mistakes, might there be a general bias in favor of explanations that invoke bias, as opposed to explanations that invoke random error or simple human fallibility? Perhaps no living thinker has contributed more to the study of bias than the Nobel laureate Daniel Kahneman. In his most recent book, Kahneman and his co-authors Olivier Sibony and Cass Sunstein turn their attention to its counterpart, random error or “noise.” After arguing at length that noise is a relatively neglected phenomenon, both compared to bias and relative to its objective importance, they conclude by speculating that this is due to a deep-seated tendency of the human mind to prefer causal explanations to thinking in purely statistical terms:

Cognitive biases…are often used as explanations for poor judgments. Similarly, analysts invoke overconfidence, anchoring, loss aversion, availability bias, and other biases to explain decisions that turned out badly. Such bias-based explanations are satisfying, because the human mind craves causal explanations. Whenever something goes wrong, we look for a cause—and often find it. In many cases, the cause will appear to be a bias. …Bias has a kind of explanatory charisma, which noise lacks. If we try to explain, in hindsight, why a particular decision was wrong, we will easily find bias and never find noise. Only a statistical view of the world enables us to see noise, but that view does not come naturally—we prefer causal stories. The absence of statistical thinking from our intuitions is one reason that noise receives so much less attention than bias does. (2021:369)
If this speculation is correct, then we would expect human beings to have a quite general bias in favor of explanations that invoke bias as opposed to alternative explanations that do not, quite irrespective of subject matter. Of course, even if such a general bias in favor of finding bias exists, it might be masked in specific contexts, including by countervailing biases
against finding bias, as when a person’s own racial biases make them overly reluctant to conclude that bias against a particular racial group explains why that group is underrepresented in some desirable field or occupation.
5. Biased Representation

When the topic of bias is broached, many naturally think first of biased people. Similarly, when psychologists and philosophers investigate biases, they typically concentrate on biases exhibited by human subjects, either in their capacity as believers, as agents, or both. Much of what follows will have a similar focus. In this section, however, I want to examine a notion that in at least some respects seems to me to be more fundamental: that of a biased representation. Consider again the incomplete list of things that can be biased offered at the beginning of this chapter. A notable feature of many items on the list is that they have representational content. That is, they present a subject matter as being one way rather than another, and so can be either true or false, depending on whether the subject matter is in fact the way it’s represented as being. This holds for many of the mental phenomena on the list—for example, beliefs, perceptions, and judgments—as well as for many of the linguistic phenomena—for example, pieces of testimony, or descriptions of an object, event, or person. Thus, just as we can evaluate a belief as a true or false depiction of reality, so too we can evaluate it as a biased or unbiased depiction of reality. Similarly, just as we might evaluate a piece of testimony, a description of a person, or an account of an historical event as true or false, accurate or inaccurate, so too we might evaluate it as biased or unbiased. Given that many representations can be evaluated in both of these ways, we can ask: how are the two types of evaluations related to one another? Clearly, much of our desire for unbiased representations of a topic stems from our interest in arriving at true or accurate views about it. An employer conducting a job search will prefer to receive unbiased letters of recommendation rather than biased ones, because they have an interest in arriving at true rather than false views about the recommended candidates. However, notwithstanding the important and obvious connections between truth and lack of bias on the one hand and falsity and bias on the other, slack is possible in both directions. A judgment formed in response to the best evidence available at the time it is made might be unbiased even if it turns out to be false. Moreover, just as an absence of bias doesn’t entail truth, the presence of bias doesn’t entail falsity: if a juror concludes that a defendant is guilty because of the defendant’s race, the juror’s belief that the defendant is guilty counts as a biased belief, even if later evidence shows that the defendant really did commit the crime of which they’re accused and so the juror’s belief was true all along. Given that the juror’s biased belief is based on the race of the defendant rather than on relevant evidence, it fails to count as knowledge even if it turns out to be true. Significantly, however, even if an account is not only true but also known to be true by the person who offers it, it might still count as biased in virtue of its relationship to true information that it
does not contain. Perhaps in paradigmatic cases, a biased account of a person or event will include at least some false claims about that person or event. But in principle, a biased account might include only true information, while nevertheless presenting the person or event in a systematically misleading way in virtue of not including equally relevant facts that point in a different direction from the facts that it includes. Consider, for example:

BIASED DESCRIPTION: I describe Jim to you. Although everything that I tell you about Jim is known to be true, I’m careful to include anything that I know that reflects negatively on him, while carefully filtering out any information that casts him in a favorable light, or which would tend to mitigate or provide context for the negative information. The description that I produce in this way thus presents Jim in a misleadingly negative way.
In the familiar courtroom oath, the witness swears to tell “the truth, the whole truth, and nothing but the truth.” In these terms, a witness’s account might count as biased even if it contains “nothing but the truth” precisely because of the way in which it departs from the requirement to be “the whole truth.” Of course, even in cases in which a biased account contains “nothing but the truth,” there will be another connection between its biased character and falsity or error. Namely, the biased account will tend to make it natural to draw various false conclusions about the subject that it describes. Indeed, for a consumer of the account who knows that it’s true but doesn’t know that it’s biased, it might be perfectly reasonable to draw those false conclusions. Although there is nothing false in the account itself, and therefore nothing that is logically entailed by the account is false, it’s characteristic of the true-but-biased collection of facts to non-deductively support or license some false conclusions about the subject matter, conclusions that wouldn’t be licensed by an unbiased account. In terms of the example above: a description of a person that includes unflattering information about them while systematically excluding flattering information will tend to license overly negative, false conclusions about that person, conclusions that wouldn’t be licensed by a fuller and more comprehensive body of information. Even if most misleading descriptions mislead in virtue of providing false information about their objects, it is characteristic of biased accounts to be misleading even when they are wholly true. In H.P. Grice’s classic paradigm of linguistic communication (1989), a central distinction is between what is literally said by an assertion and the information that’s communicated or conveyed by that assertion. (In the context of a letter of recommendation for an academic job search, the remark that a candidate has excellent handwriting literally says something complimentary about the candidate’s handwriting, but its inclusion might very well serve to communicate negative information about the candidate’s relevant skill set.) Apropos of Grice’s distinction, many philosophers have claimed that it’s morally worse to intentionally mislead someone by outright lying (as in a case in which one asserts a claim that one knows to be false), than it is to intentionally mislead them by asserting true propositions from which they naturally and predictably draw reasonable but false conclusions.17 The analogous view about bias would be this: it’s morally worse to
intentionally mislead someone by presenting them with a false account than it is to intentionally mislead them by presenting an account that’s substantially true but biased. Whatever might be said on behalf of such a moral view, we can note one way in which misleading others by propagating biased though true accounts might sometimes be worse than misleading them by propagating false information. Namely, when a biased although true account gives rise to a mistaken impression in a rational audience, correcting or combatting that mistaken impression might very well be less straightforward and more difficult than if the same mistaken impression were based on a false account. When a mistaken impression of a person or event is based on a substantially false account of that person or event, the possibility exists, at least in principle, of correcting that impression by showing that the account on which it’s based is substantially false. In some cases, although certainly not in all, this might be a relatively straightforward matter, something that can be achieved by providing evidence against the key claims of the account; when presented with such evidence, a rational audience will revise its view of the matter accordingly. In contrast, when a mistaken impression is based on a biased account that’s substantially true, there is no parallel possibility of correction: precisely because the main claims in the account are true, any evidence that’s brought forward against them is ipso facto misleading evidence, in the sense that it’s evidence that speaks against some truth. Indeed, in the case of the true though biased account, critical scrutiny that focuses directly on the truth value of the claims that comprise the account might very well have the effect of increasing its credibility and the credibility of the person who offers it. What’s required in order to correct the mistaken impression is rather to show that the account on which it is based is biased: even if the account contains “nothing but the truth,” it’s not “the whole truth” about the person or event, in some important sense. However, the norm of telling “the whole truth” about a topic is an ideal, and one that stands in need of interpretation in any given context. For inevitably, any account of an actual person or event, however comprehensive and unbiased, will omit some truths about it, including some that are potentially interesting and important.18 Since any actual account of a person or event will inevitably be selective, leaving out some facts that might very well have been included, it’s not enough, in order to sustain a charge of bias, to note that an account leaves out some such information. Rather, what needs to be established is that the information provided, although correct, is in some important respect collectively unrepresentative: given what is included, there is some equally relevant information that ought to have been included but wasn’t. However, in contrast to the comparatively straightforward task of providing evidence that shows or suggests that some part of an account is false, the claim that certain information that was not included in the account should have been included in it given what information the account does include might very well involve a relatively subtle and contestable judgment of comparative relevance, and one that depends on holistic considerations that aren’t easily summarized. 
The idea that, in order to sustain a charge of bias, the person making the charge often incurs a commitment to certain substantive normative claims—in this case, a claim to the effect that information that’s not present in the account ought to have been included—is one to which we’ll return in the pages that follow.
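Before moving on, the logic of BIASED DESCRIPTION can be compressed into a toy model; the facts about Jim and their numerical valences below are wholly invented, and the averaging rule is only a crude stand-in for how a consumer forms an overall impression. The sketch exhibits an account that contains nothing but truths, yet whose selectivity licenses a false overall conclusion.

# Hypothetical valences of true facts about Jim: positive numbers stand for
# flattering facts, negative numbers for unflattering ones.
facts_about_jim = [+2, +1, +3, -1, -2, +2, -1]

# The biased description: only the unflattering facts survive the filter,
# though every fact that is reported remains true.
biased_account = [f for f in facts_about_jim if f < 0]

def impression(reported_facts):
    """A crude model of a consumer's overall impression: average valence."""
    return sum(reported_facts) / len(reported_facts)

print(round(impression(facts_about_jim), 2))  # 0.57: mildly positive overall
print(round(impression(biased_account), 2))   # -1.33: misleadingly negative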
6. Parts and Wholes

Consider again the incomplete list of things that can be biased offered at the beginning of this chapter. A notable feature of the list is that some of the items on it frequently stand in part-whole relations to others. An obvious example of this is people and groups of people. For example, just as we can evaluate individual judges as biased or unbiased, so too we can evaluate a court that’s composed of the individual judges as biased or unbiased. (Consider the way in which we might attribute a certain bias to the US Supreme Court.) The same point holds for processes and procedures, which in a structurally parallel way often stand in part-whole relations to other processes and procedures. The overall admissions process at a university will typically consist of various (sub-)processes, corresponding to the different stages in which the original pool of applicants is progressively narrowed down. Given this, we can evaluate both the overall process as well as the sub-processes with respect to bias. This section explores the question: what is the relationship between a whole’s being biased and bias among its parts? Often, when a whole is biased, the explanation for this will be that at least some of its parts (or some critical mass of its parts) have the relevant bias themselves, or some relevantly similar bias. A news organization, for example, might have a certain political bias because some or enough of the individual reporters and editors who work for it have that political bias. Similarly, if the whole is unbiased, the correct explanation for its being so (at least at one level of abstraction) will often be a lack of bias among its members, as when the balanced, even-handed coverage of the newspaper is attributable to the same virtues among the members of its staff. When an organization or group has a certain bias because a critical mass of its members have that bias, we can distinguish between two different possibilities. In the first type of case, it’s an historical accident that enough individuals with the relevant bias belong to the group; there is no mechanism that tends to select for (either intentionally or unintentionally) individuals with that bias, or that tends to filter out individuals who lack it. In the second type of case, it’s no accident that people with the relevant bias are now so well-represented among the group’s membership: a person’s having the bias made it more likely that they would end up a member (whether having the bias was directly selected for, or whether it was positively correlated with other traits that were directly selected for). Perhaps having the bias made a potential member a more attractive candidate to those with the power to determine membership in the group, who were positively disposed toward people with the bias (even if not under the description “biased”). Or perhaps people with the relevant bias are more likely, for one reason or another, to enter the field in the first place, and so such people were disproportionately represented in the pool of potential members. In both the first and second type of case, the fact that the group currently has a certain bias is due to the fact that a critical mass of its current members have that bias. But in the second type of case, there is also a deeper explanation for why the group now has the relevant bias, a type of explanation that won’t be available in the first type of case.
Let’s call an explanation that accounts for why a whole has or lacks a certain bias in terms of its parts having or lacking that bias a bottom-up explanation. In addition to bottom-up
explanations of bias, there can also be top-down explanations, in which the presence or absence of bias at the level of the whole explains why its parts or constituent members tend to have or lack that bias. (Consider, e.g., a case in which one manifestation of a group’s having a certain bias is that it maintains an admissions or recruitment policy that tends to select for individual people with that bias.) Particularly when we’re interested in explaining why a particular bias persists through time, a comprehensive explanatory story might very well include both types of explanations. Thus, the fact that an organization currently has a certain bias might be due to the fact that many of its current members have it (a bottom-up explanation); while the fact that many of its current members have it might be due to the organization’s having had that bias at earlier times, when those current members were offered membership and recruited (a top-down explanation); and the fact that the organization had the bias in the past might be due in turn to the fact that it was had by many of its members at that earlier time.19 Particularly when we’re concerned with a top-down explanation of why bias is present at the level of the parts, it can be a subtle matter exactly which facts about bias are being explained and which facts are being left unexplained. Often, even a successful top-down explanation of why bias is present or widespread at the level of the parts won’t do anything to explain why any of the individual parts has the relevant bias in the first place. Consider the following example:

BIASED ORGANIZATION: Many of the current members of an organization have an anti-black bias. This is due to the fact that the organization has a particularly odious hiring policy. The hiring policy ensures not only that people who aren’t black are more likely to be hired over equally or better-qualified people who are black, but also that non-blacks who have an anti-black bias are more likely to be hired over non-blacks who don’t. (That is, the hiring policy effectively selects for people who have an anti-black bias.)
When we explain why so many of the organization’s current members have an anti-black bias, we will cite the biased hiring policy; we thus offer a compelling top-down explanation, one that explains bias among the parts (the individual members) by invoking bias at the level of the whole (the organization itself). However, notice that the biased hiring policy does not explain why any current member has the bias. After all, by hypothesis, any person who is favored by the biased hiring policy already had the relevant bias prior to being hired by the organization. (Contrast a case in which the bias of the organization really does explain why one of its members has the bias, e.g., if a condition of continued employment involves watching videos that tend to inculcate the bias in those who don’t already have it.) In explaining why a person favored by the biased hiring policy has the relevant bias, we might cite, for example, the fact that they grew up in a racist environment, or that they had a certain upbringing; in any case, we will not cite the biased hiring policy in order to explain this fact. Here and elsewhere, we can distinguish between the severity of a bias and how deeply entrenched it is. A severe bias need not be deeply entrenched. (Imagine that a news organization’s severe political bias is due to the fact that some of its most senior and powerful members are severely biased, but that when those members retire or age out of power there is no mechanism that makes it likely that those who replace them will be similarly biased.) On the other hand, a comparatively mild bias might be deeply entrenched if
the bias is woven into the culture of the organization, or if it’s thought that (e.g.) any shift in the way that the organization covers politics would cause it to lose its market niche. Of course, although the severity of a bias can come apart from its degree of entrenchment, many of the worst biases have been both severe and deeply entrenched, as exemplified by the way in which many American institutions have historically been biased against African-Americans. Even once it’s acknowledged that the severity of a bias can come apart from its degree of entrenchment, it’s tempting to think that the two are at least positively correlated: generally speaking, more severe biases will be more entrenched, and vice versa. However, there are also reasons to think that the two might be negatively correlated, at least in an important class of cases. (Here I am indebted to Brian Hedden.) Notably, more severe biases are also more detectable, and in that respect more likely to receive pushback, and so might be less entrenched on that basis. (If raging misogyny is more likely to be called out than subtle forms of sexism that are invisible to most, then the latter might be more difficult to eliminate for that very reason.) Questions about the severity of a bias and the degree to which it’s currently entrenched should also be distinguished from questions about how historically contingent it is. As noted above, the fact that a group currently has a certain bias might be highly contingent as a historical matter if it was originally formed by individuals who just so happened to share the relevant bias. Nevertheless, the bias of the group might be both severe and also well-entrenched if the original members arrange things (whether intentionally or not) in a way that in effect guarantees that the bias will persist through time. Bias at the level of the whole often reflects (and is explained by) bias among its parts. That much seems obvious and familiar. But precisely because it so often works that way, we should be wary of any simple reductionist pictures about the relationship between bias at the different levels. In general, even when we are concerned with parts and wholes that can be biased, having biased parts is neither a necessary nor a sufficient condition for a whole’s being biased. First: in principle, a whole might be unbiased even if its constituent parts are biased to a high degree. Perhaps the most obvious possibility here is when the biases of the parts offset or counteract one another in such a way as to produce a lack of bias at the level of the whole. This could even be due to intentional design—for example, someone could deliberately design an organization or institution to be unbiased in spite of, or even because of, the biases of its constituent parts or members. As a rough comparison, think of the idea behind adversarial systems of justice. In the American legal system, there is no ideal according to which the defense attorney is supposed to be scrupulously neutral between their client and the prosecution; nor is there any ideal to the effect that the prosecution is supposed to be scrupulously neutral. In contrast, many alternative legal systems don’t incorporate these partisan elements. It’s an empirical question which system does a better job.20 In principle (although no doubt, this kind of thing is difficult to pull off in practice) a procedure which deliberately incorporates biased parts, even heavily biased parts, might score better when we evaluate the whole.
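The possibility of offsetting just described can also be given a toy numerical form; everything in the sketch below is invented. Two components that err systematically in opposite directions are combined, by simple averaging, into a procedure that exhibits no systematic error at the level of the whole.

import random
random.seed(1)

def part_a(truth):
    """One component: systematically overestimates by +2 on average."""
    return truth + 2 + random.gauss(0, 1)

def part_b(truth):
    """Another component: systematically underestimates by -2 on average."""
    return truth - 2 + random.gauss(0, 1)

def whole(truth):
    """The whole averages its two (oppositely) biased parts."""
    return (part_a(truth) + part_b(truth)) / 2

truth = 10.0
runs = 100_000
average = sum(whole(truth) for _ in range(runs)) / runs
print(round(average, 2))  # roughly 10.0: the parts' biases offset in the whole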
Interestingly, there is some experimental evidence that the human perceptual system works according to similar principles. On the one hand, the human visual system exhibits a certain “center bias” when it comes to locating objects and events in space: visual stimuli tend to be perceived as nearer the center of the visual field than they really are. On the other hand, our auditory system exhibits a certain “periphery bias”: auditory stimuli tend to be perceived as farther away from the center of the auditory field than they really are. However, when visual and auditory stimuli are presented in the same location simultaneously and observers perceive a common source for them, the observers’ accuracy improves, as the two biases offset one another.21 It seems, then, that there can be unbiased wholes that have biased parts. What about the other way around? Does bias at the level of the whole entail having biased parts? Consider the claim that:

? (1) If a whole is biased, then at least some of its proper parts are.
There are counterexamples to this claim. For example, I might be biased about some issue, even though neither of my collar bones nor any of the other parts of my body is biased, and even if I’m nothing over and above the sum of my body parts.22 Thus, we should reject (1) and instead accept (2):

√ (2) There can be biased wholes that have no biased parts.
However, I don’t think that the choice of (2) over (1) for the reasons given speaks to the most philosophically interesting issue in this vicinity. Let’s formulate a hypothesis that brings that issue to the fore. In theorizing about truth and related notions, philosophers have found it useful to appeal to the notion of a statement’s being truth-apt. Truth-aptness is a property had by something that is capable of being either true or false. Thus, a standard declarative sentence is truth-apt, in contrast to (e.g.) questions or commands, which aren’t the kinds of things that can be true or false. Following this example, let’s introduce a parallel notion of something’s being bias-apt, which is a property had by something just in case it might be either biased or unbiased. With this piece of terminology in hand, consider the following hypothesis:

(3) EMERGENT BIAS: A whole might be biased even if: (i) it has at least some proper parts that are bias-apt, and (ii) all of its bias-apt parts are unbiased.

EMERGENT BIAS goes considerably beyond (2) in that, unlike (2), its truth isn’t guaranteed by the possibility of biased wholes whose proper parts are neither biased nor unbiased. Indeed, I regard EMERGENT BIAS as the most philosophically interesting claim in this vicinity. Is it true? I believe that it is: there can be biased wholes that are entirely made up of unbiased parts.23 For example, consider again the case of a news organization that specializes in reporting
on politics. Presumably, the reporters will have their own personal political opinions, just like everyone else. Suppose that when one considers the reporters as individuals in isolation, and even when one examines them going about their business doing their specific jobs for the organization, one wouldn’t attribute bias to them: they don’t stand out in any salient way from other people in the same profession whom one would unhesitatingly describe as “unbiased.” (Whenever one engages in a pairwise comparison between a member of the organization and a non-member to whom one would attribute bias, the member of the organization is always judged superior along this dimension.) However, suppose further that all or virtually all of the people who work for this particular news organization are like-minded when it comes to politics. They share all of the same political views, or at least they hold political views that fall within the same relatively narrow band of opinion. We can suppose that all of these views might even be perfectly reasonable things for them to think, given their evidence, their past experiences and individual life histories, and so on.24 Nevertheless, the utter lack of diversity of opinion—a characteristic that only emerges when they are considered not as individuals but as a group—might result, by way of familiar mechanisms, in biased coverage of the news. In which case, it would be correct to describe the news organization as biased, even though one wouldn’t attribute bias to any of the individuals who make it up.25 The point is even clearer when we turn from people and groups of people to the phenomenon of representation itself. In the last section, we noted that, at least in principle, an account of a person or event might be biased even though it’s wholly true, in the sense of being made up exclusively of true claims. In such cases, the account is biased not because of the true claims that it includes, but because it includes those claims while failing to include other relevant claims about the topic that would have served to create a more accurate and balanced picture if they had been included as well. Consider then the individual true claims that jointly make up the biased account. Taken individually, there might be nothing about any of the claims that makes it biased. (Although some claims can appropriately be described as biased claims, none of these can; we can suppose that any one of them might equally well have figured in a perfectly unbiased account of the topic, had they been surrounded by different claims.) Rather, what has the property of being biased is the whole that the individual claims jointly comprise, the account itself. Imagine next a news organization that manages to satisfy this difficult-to-achieve ideal: it offers perfectly unbiased coverage of every issue or event that it covers. Does it follow that its overall coverage of the news is unbiased? No, for there is still another dimension with respect to which we can check its overall coverage for bias, namely which issues and events it covers and which it doesn’t. Inevitably, a news organization will cover some events and issues as opposed to others, and at least some of those that aren’t covered might reasonably be considered newsworthy. Given that some things that might very well have been omitted from coverage get covered, and other things that might very well have been covered are not, we can ask whether these choices exhibit some bias.
For example, perhaps there are some issues or topics that naturally tend to be “grist for the mill” of progressives as opposed to conservatives, while other stories tend to be grist for the mill of conservatives as opposed to progressives: on these topics progressives (or conservatives) tend to be on the defensive,
since the most natural and straightforward reactions tend to favor the other side. If a news organization consistently covers stories of the former sort as opposed to the latter (or vice versa), then this might very well indicate a political bias, even if the way in which it covers these topics is not itself biased. Or imagine a news organization that disproportionately covers personal scandals that arise for politicians belonging to one political party as opposed to another party. (Notice that, at least in principle, the news organization could still offer unbiased coverage of any scandal that it does cover by, e.g., carefully discussing any mitigating factors, and so on.) Given the inevitability of selection, we can always ask: “Why is this person/event/issue being discussed, rather than some other person/event/issue that might seem to have an equally good claim on our attention?” Certain answers to this question will be consistent with an absence of bias, while others will not. The overall coverage offered by the news organization might be biased, even if its coverage of any particular event that it does cover is unbiased, and even though its overall coverage of the news consists in its coverage of each of the events that it covers. However carefully one inspects its coverage of each individual event in isolation, one never finds any bias; the bias emerges only at the level of the whole, or as a property of the larger pattern. More generally: there can be biased wholes with unbiased parts.

To be clear, I don’t think that this is the usual case. An organization that exhibits political bias in its choice of which stories to cover is unlikely to offer unbiased coverage of those stories. Similarly, inasmuch as I think that Fox News and MSNBC have certain biases, I think that those biases are shared, as a matter of sociological fact, by a significant number of people who are responsible for content at those organizations; and that the former is primarily due to the latter. However, it also seems that lack of bias at the level of the parts doesn’t guarantee lack of bias at the level of the whole. In view of this, it’s worth staying alert to the possibility of such emergent or holistic bias.

The claims about parts and wholes put forward in this section are intended to apply quite generally across the diverse categories of things that can be biased: for example, to groups of people and their individual members, as well as to linguistic entities, such as accounts of historical events and the more specific claims that make up those accounts. When an account of an historical event counts as biased in a certain way because enough of the more specific claims that make it up are biased in that way, there is a clear sense in which the historical account and the more specific claims belong to the same fundamental category, even though the account will be more comprehensive than the more specific claims. However, it’s a central fact about the nature of bias that something might count as biased because of the relationship that it stands in to biased things that belong to radically different categories. This is among the central features of bias that we will explore in the next chapter.
1 Of course, in a context in which judges or coins are under discussion, the claim that a given judge or coin is not biased will be treated as equivalent to the claim that it’s unbiased, precisely because it is understood that it’s the type of thing that might exemplify the relevant properties.
2 On “algorithmic” or “machine bias,” see especially Danks and London (2017), Hedden (2021), Johnson (2021, forthcoming), Begby (2021:Ch. 8), Castro (2019), and Fazelpour and Danks (2021).

3 For example, in addition to the categories listed above, bias is sometimes attributed to entire intellectual disciplines. Consider, for example, Kwame Anthony Appiah’s remarks on cultural anthropology:

Anthropology…has a professional bias toward difference. Who would want to go for a year of fieldwork “in the bush” in order to return with the news that “they” do so many things just as we do? We don’t hear about cross-cultural sameness for the same reason that we don’t hear about all those noncarcinogenic substances in our environment: sameness is the null result. (2005:254)

As Glasgow (2009:64) notes, it’s sometimes claimed that entire countries can be racist. Assuming that being racist is a way of being biased, such claims entail that entire countries are among the things that can be biased.

4 For example, Griffiths, Kalish, and Lewandowsky (2008:3503) characterize inductive biases as “the factors that lead a learner to choose one hypothesis over another when both are equally consistent with the observed data.” Talk of “inductive biases” in this sense is common among philosophers as well.

5 For illuminating reflections on the theme, see especially Antony (1993). I discuss this idea in Chapter 8 below.

6 The same is true of other words in the vicinity, e.g. “stereotype” and its cognates. For discussion, see Beeghly (2015).

7 As the psychologists Hahn and Harris write: “A reader venturing into the psychological literature about human biases soon realizes that the word ‘bias’ means many things to many people…it seems fair to describe the use of the term ‘bias’ within psychological research as varied, at times encompassing almost polar opposites…bias has been viewed variously as obviously irrational, as rational, or neither” (2014:59).

8 For example, in the context of a discussion of algorithmic bias, Danks and London comment that “…debates in this area have been hampered by different meanings and uses of the term, ‘bias.’ It is sometimes used as a purely descriptive term, sometimes as a pejorative term, and such variations can promote confusion and hamper discussions about when and how to respond to algorithmic bias” (2017:1).

9 Notwithstanding various objections, the traditional “but for” criterion remains central to American discrimination law. Notably, for example, it is discussed at length in Neil Gorsuch’s majority opinion in Bostock v. Clayton County, Georgia, the Supreme Court case which held that an employer who fires an individual for being gay or transgender violates the Civil Rights Act of 1964.

10 Compare and contrast a position that’s sometimes taken by university administrators (e.g. Harvard administrators in the Students for Fair Admissions case) in the context of defending their institution’s affirmative action policies against the charge that those policies are biased against Asian applicants. The view is that although belonging to certain traditionally underrepresented racial or ethnic groups does work to the advantage of some applicants, no applicant is ever discriminated against because they belong to the racial or ethnic group that they do. However, given that there are a finite number of spots, it’s not obvious that this is a stable and coherent view.
Of course, even if it’s not a stable and coherent view, it doesn’t follow that the affirmative action policies in question are ultimately indefensible, only that some of the things said in defense of them aren’t defensible.

11 Are there cases in which neither the negative bias nor the positive bias is fundamental relative to the other? (Of course, there might be cases in which it’s overdetermined that a thing is biased in the ways that it is, but set those aside.) Notice that, in any such case, given the symmetry between the negative bias and the positive bias, and the asymmetry of the explanatory relation (generally speaking, if H explains E then E doesn’t explain H), neither the negative bias nor the positive bias will explain the other. A coin is biased in favor of heads if and only if it’s biased against tails, but here neither the positive bias nor the negative bias is more fundamental than the other, and one cannot explain why a coin is biased in favor of heads by pointing out that it’s biased against tails (or vice versa).

12 Emphasis added. For this definition, see https://www.fbi.gov/investigate/civil-rights/hate-crimes#Definition, where the following remark is added: “A hate crime is a traditional offense like murder, arson, or vandalism with an added element of bias.”

13 On the bias blind spot, see especially the papers cited in footnote 8 of the Introduction. Notice that despite the attention that the bias blind spot receives in the psychological literature—of which those papers represent a small sample—it is a special case of the much more general phenomenon at issue here, that of higher-order bias. Nevertheless, I believe that the attention that it receives is well deserved, and Chapter 4 is largely devoted to exploring it.

14 A clear example of the former would be the Media Research Center; a clear example of the latter would be Media Matters for America.

15 However, although I believe that this is the right thing to say about this particular case, we shouldn’t assume in general that an organization’s pursuing its declared aim or aims in an unbiased manner is enough for it to count as unbiased. For in some cases, the fact that those aims were chosen (as opposed to some other aims that might have been chosen instead) might reflect some bias on the part of the organization or its members. And in some such cases, the organization might count as biased in virtue of pursuing those aims, even if it pursues them in an unbiased manner.

16 For a defense of the idea that there might be such an asymmetry between the two sides in such cases—and in particular, that strong forms of relativism about bias, according to which “bias is in the eye of the beholder,” are false—see especially Chapter 3, §3, “The Perspectival Character of Bias Attributions.”

17 Endorsements of this view include Berstler (2019), Siegler (1966), Chisholm and Feehan (1977), MacIntyre (1994), Adler (1997), Green (2001), Strudler (2010), and Fricker (2012). Kant is perhaps the patron saint of the idea; see especially his Lectures on Ethics (27:446). For denials, see especially Saul (2012) and also Williams (2002). Rees (2014) argues against the grain that lying is generally morally better than mere deliberate misleading. As this partial list suggests, the number of philosophers who have published work in favor of the view that lying is morally worse than mere intentional misleading significantly exceeds the number who have denied this claim. Assuming that this is true, is it significant evidence that the view is more popular among philosophers who have views about the issue? No, it isn’t. For (even setting aside worries about the sample size), this is exactly the kind of context in which we’re likely to find a certain kind of publication bias. From a certain vantage point, the claim that lying to someone is morally worse than intentionally deceiving them without lying is a surprising claim. Offhand, one might very well have thought that what’s wrong with lying to someone is that it involves trying to intentionally mislead them, which would naturally suggest that lying and intentional-deception-without-lying are on a moral par. If it’s really true that lying is morally worse, then this is something that calls out for a philosophical explanation—exactly the kind of thing that makes a promising topic for a philosophy paper. In contrast, the negative thought that there isn’t any morally significant distinction here does not similarly cry out for a philosophical explanation (although there might still be work to be done in showing where arguments to the contrary go wrong). Because of this asymmetry, we might antecedently expect to find more published defenses of the claim that lying is morally worse than published denials of it. Given that this is in fact exactly what we do find in the literature, we should be wary of inferring that this is representative of actual philosophical opinion, including among philosophers who have considered views about the issue but who haven’t explicitly weighed in on it in print. On the general phenomenon of publication bias, including discussion of its importance in far more significant contexts than the one considered here, see Franco, Malhotra, and Simonovits (2014) and the further references cited there. DeVito and Goldacre (2019) is an overview.
The phenomenon was discovered by the statistician Theodore Sterling (1959). The economics journal SURE—the acronym stands for “Series of Unsurprising Results in Economics”—is explicitly devoted to combatting publication bias by providing a publication venue for well-executed studies that yield statistically insignificant or otherwise unsurprising results.

18 A theme that has long been emphasized by leading historians in their historiographical writings. See, e.g., Carr (1961), Collingwood (1956), and Novick (1988).

19 Compare Jorge L.A. Garcia (1996:33) on the “reciprocity of causal influence” between racism at the level of individual people and racism at the level of institutions. Note, however, that in our example, while the top-down explanation is naturally understood as a causal explanation (the fact that the organization is biased at the earlier time is causally responsible for its having biased members at the later time), the bottom-up explanation is better understood not as a causal explanation but rather as a constitutive explanation. (The fact that a critical mass of individual members have the bias at a given time does not causally explain why the organization has that bias at that very same time, given that causes precede their effects. Rather, in the case of the bottom-up explanation, the members’ having the relevant bias at that time is what the organization’s having that bias at that time consists in.)

20 For some evidence in favor of adversarial systems, see Thibaut and Walker (1975).

21 On these phenomena, see especially Odegaard, Wozny, and Shams (2015).

22 What about my brain? Someone might insist that, in any case in which it makes sense to attribute bias to a person, it also makes sense to attribute bias to their brain. However, even if that claim is conceded for the sake of argument, it doesn’t ultimately help to save the principle that any biased whole has some biased proper part. For once bias is attributed to the brain itself, we can ask which part or parts of the brain are biased; and if it’s held that some proper part of the brain is biased whenever the brain is, then we can ask which proper part of that proper part is biased…and so on. In general, so long as it’s conceded that at least some physical objects are biased, and that the ultimate constituents of physical reality are not themselves biased, it follows that there are some biased wholes without any biased parts.

23 Compare discussions of institutional racism or various kinds of institutional biases in the tradition of scholarship that descends from Ture and Hamilton’s seminal (1967). An important theme in this tradition is naturally understood as the denial of a certain kind of reductionism: the totality of racism (or sexism, etc.) in a society is not reducible to the racism or sexism of the individual people who make up that society; rather, one needs to take into account racism and sexism at the level of institutions, and this isn’t simply a matter of summing or aggregating the racism or sexism of individuals. I think that this species of anti-reductionism is true, and important. Note, however, that the claim in the main text goes considerably beyond the denial of reductionism. The claim about racism that is analogous to the claim about bias that I make in the main text would be this: a society could be racist, even if none of its individual members were. (Notably, Ture and Hamilton clearly seemed to think that the existence of institutional racism depends in part on racist attitudes and practices of individuals; see, e.g., at 5.) Among recent writers who have explicitly addressed the stronger claim about race, Glasgow (2009:72–6) argues carefully that an institution might be racist at a time even if no individual members are racist at that time, although he expresses sympathy for the view that in such cases there must have been at least past racism at the individual level. On the other side of things, the work of Garcia (1996, 1997a, 1999, 2001a, 2001b) consistently defends the idea that institutional racism must always derive from some sort of concurrent racism at the level of individuals. I take these broadly metaphysical questions about the relationship between bias at the level of the whole and at the level of the parts to be distinct from broadly practical questions about whether parts or wholes are prior when it comes to attempts to ameliorate or eliminate bias. For discussion of the latter issue with respect to social biases such as racism, see, e.g., Anderson (2010), Banks and Ford (2009, 2011), Haslanger (2015, 2016), Huebner (2016), and Madva (2016a).

24 In order to avoid introducing any potentially confounding variables, let’s further stipulate that this lack of diversity was the result of chance: the processes by which these individuals were hired could just as easily have produced a group that was much more diverse with respect to political orientation, but as a matter of fact did not.

25 Compare the anti-reductionism of List and Pettit (2011). Although they do not address the phenomenon of bias, List and Pettit defend a view on which a group of people might hold a belief even if no individual member of the group holds the belief. For further considerations in support of the claim that a group might be biased even if none of its individual members are, see the discussion of aggregation and approximation in Chapter 5, §1.
2 Pluralism and Priority

1. Explanatory Priority

Consider again the incomplete list of things that can be biased with which Chapter 1 began. A notable feature of the list is that it’s diverse, not only with respect to the range of items that it contains but also with respect to the fundamental categories to which those items belong. For example, on the one hand, we often predicate bias of objects, as when we predicate it of particular people, or of inanimate objects such as dice and coins. But on the other hand, we’re equally happy to predicate bias of things that aren’t objects at all. For example, when one says, “The judge arrived at his decision in a biased manner,” one attributes bias not to a person or object (at least in the first instance), but rather to the temporally extended process by which the judge arrived at his decision. Similarly, if the judge uses a general method to arrive at his decision, we can distinguish between the method itself and the psychological process that consists of his using it on this particular occasion. The psychological process occurs at a specific place and over a specific interval of time. On the other hand, the method itself isn’t located at any specific place or time. (The same method might be used by other people, or by the same judge at other times.) Nevertheless, despite these differences, we ordinarily wouldn’t hesitate to attribute bias to the general method, to the psychological process, or to both, should doing so seem to be in order. As this suggests, in everyday life we are equally happy to attribute bias both to things that philosophers would classify as “concrete” (such as individual people that exist or events that occur at particular locations in space and time) and to things that philosophers would classify as “abstract,” such as general methods that do not.1

What should we make of the fact that ordinary thought and talk attributes bias to such a radically diverse collection of things? Perhaps it’s simply a big jumble, and there isn’t much more to be said on this front beyond that. However, we might try to impose some conceptual order on the jumble by pursuing another possibility. It might be that although many different kinds of things can be biased, some of these are more fundamental than others, in the following sense: when one of the less fundamental things is biased, it has this property in virtue of the relationship that it stands in to something more fundamental, which also has the property of being biased. For example, consider a judge who is biased about some question that’s before him. Perhaps in at least some cases it’s like this: the reason why the judge counts as biased is that
the way in which he arrives at his verdict (or the way in which he is disposed to arrive at it) is a biased process. If so, then the fact that the judge is biased about a certain question is grounded in the more fundamental fact that the way in which he arrives at his verdict is a biased process.

Can this kind of analysis be generalized? The following is, I think, a potentially fruitful project for philosophers and others to pursue in this area: given that many different types of things can be biased, are some of these more fundamental in the order of explanation? If so, which? Call this The Priority Problem.

Here is a comparison for the kind of project that I have in mind. Consider the following question:

What types of things can be true or false?
There is a common view about this among philosophers, a view that comes in two parts. According to the first part of the common view, the property of being true is exemplified by many different things, including some mental states and cognitive acts (paradigmatically, beliefs and judgments), some linguistic entities (paradigmatically, declarative sentences of a natural language), some token speech acts (e.g. your asserting, on a particular occasion, that snow is white), and so on. Notice that, as in the case of things that can be biased, even this incomplete inventory is diverse with respect to fundamental category, for it includes states (e.g. beliefs), events (e.g. token utterances), and things that are neither states nor events (e.g. sentences). However, according to the second part of the common philosophical view, although many different things can be true, one of these stands out from the rest as fundamental: in particular, propositions are the fundamental bearers of truth, and anything else that has the property of being true has that property in virtue of the relationship that it stands in to some true proposition. For example, according to this view, to have a true belief is to stand in the believing relation to some proposition that has the property of being true. Similarly, the reason why both the English sentence “Snow is white” and the Japanese sentence that is its literal translation count as true is that both sentences are used to express a certain proposition, a proposition that itself has the property of being true. Thus, although the English sentence, the Japanese sentence, and the proposition they express are all true, it’s the proposition and its truth that are fundamental in the order of explanation. The fact that the English sentence and the Japanese sentence are true is therefore a derivative matter: they inherit their truth from the true proposition that they express.

Does bias exhibit a similar structure? The most ambitious project that one might pursue in this area is to attempt to identify some notion that stands to bias as the notion of a true proposition stands to truth on the orthodox view. Does any notion have the same kind of exceptionless global priority in the case of bias? For reasons that will emerge, I am skeptical that any single notion has such priority. The view taken here is thus a kind of robust pluralism about bias. It’s pluralistic in two respects: if it’s true, then (i) many different types of things are genuinely biased, and (ii) no one of these types is fundamental in every context in which something is biased. (Again, compare the common view about truth, which is pluralistic in the first respect but not in the second, inasmuch as it holds that many different
types of things are true but one of these is fundamental in any context in which anything at all is true.) However, even if no single notion has the relevant kind of universal priority in the case of bias, that leaves open the possibility that, when bias is exemplified, there are characteristic and theoretically interesting patterns of metaphysical and explanatory dependence among the things that exemplify it, patterns that inform our practices of attributing bias, and to which those attributions are responsible. This chapter explores and argues for answers to a number of central questions in this area.
2. Are People (Ever) the Fundamental Carriers of Bias?

We can begin by noting that, notwithstanding the obvious practical importance and theoretical interest of bias on the part of human beings, people do not seem like promising candidates for having the relevant kind of universal priority described in the last section. Consider, for example, the following case:

A BIASED JUDGE: A judge regularly uses a biased procedure to arrive at his verdicts. If he didn’t use this procedure, he would use an unbiased procedure instead.
Here, the judge counts as biased because he uses a biased procedure. By contrast, it would not be correct to say: the procedure that the judge uses counts as biased because it’s used by a biased person, or by a biased judge. (If the same procedure were used by an otherwise unbiased person, it would still count as a biased procedure.) Thus, we should accept the following claim:

√PERSONS NOT ALWAYS FUNDAMENTAL: In at least some cases, the fact that a person is biased is a derivative matter: it depends on the way they are related to something else that’s biased, and the fact that the person counts as biased is grounded in the more fundamental fact that this other thing is biased.
More generally, it’s often true that, when a person can accurately be described as biased, this is because of the way in which they are related to some other thing or things that are biased. For example, when a person counts as biased, this might be because they hold biased beliefs, or because the characteristic ways in which they arrive at their beliefs are biased ways of arriving at beliefs. Alternatively, a person might count as biased because they have biased preferences, or because the ways in which they decide what to do are biased ways of making decisions. In still other cases, it might be that they count as biased because they participate in certain biased social practices. Or perhaps it’s some combination of such things. In any such case, the fact that the person can accurately be described as biased is a matter of their being suitably related to some biased thing or things that belong to some fundamentally different category or categories. Indeed, I conjecture that what is often the case is always the case, and that the following generalization holds:

√PERSONS NEVER FUNDAMENTAL: Whenever a person is biased, this is a derivative matter: it depends on the way in which they are related to some other thing or things that are biased, and the fact that the person is biased is
grounded in more fundamental facts about these other biased things.
If this conjecture is true, then a person’s being biased, or their being biased in a certain way, is fundamentally unlike their being a certain height. When a person is six feet tall, this is not because they are related to anything else that also has the property of being six feet tall. However, if the conjecture is true, then, when a person is biased, this is because they are related to something else which has the property of being biased. Generally speaking, when a person is biased, the question, “What’s biased about them?” will be in order. One might answer this question by talking about certain biased views that they hold, or certain biased preferences that they manifest, or the fact that they are disposed to engage in certain forms of biased reasoning under certain conditions, and so on. Someone who endorses the conjecture that people are never fundamental will think that, once one has specified all of these things, one will have exhaustively accounted for all of the facts that make it the case that the person herself is biased.2

Are there cases that put pressure on this conjecture? At least at first glance, cases like the following might seem to be counterexamples:

ANOTHER BIASED JUDGE: A judge is biased. Because of this, he chooses a procedure for reaching a verdict because it’s biased in a certain way—he values the fact that it’s biased in such-and-such a way (even if not under that description), and any other process that he would choose to use instead would also have been biased in that same way. The biased judge then uses the biased procedure and, predictably, arrives at a biased verdict.
It might be thought that this is a case in which the judge’s bias is fundamental, as opposed to either the bias of the procedure or the bias of the verdict. After all, it’s the fact that the judge is biased that explains why a biased rather than an unbiased procedure is used, and (plausibly) it’s the fact that a biased procedure is used that explains why the verdict that it produces is biased. Indeed, it’s plausible that, in explaining why a biased verdict is reached, an explanation that appeals directly to the bias of the judge is superior to an explanation that emphasizes the bias of the particular procedure used, since the latter explanation threatens to make the fact that a biased verdict was reached look more contingent and fragile than it really was, whereas the former explanation doesn’t do this. (If the particular biased procedure that was actually used had not been, then, given the judge’s bias, some other procedure with the same bias would have been used instead, in which case the biased verdict would likely have resulted anyway; the fact that a biased verdict resulted thus depends on the identity of the judge and the fact that he’s a biased person in a way that it doesn’t depend on the identity of the particular procedure that was actually used.)

However, here we should distinguish two issues. It’s true that if we’re seeking a causal explanation of what happens, then it’s the bias of the judge that’s fundamental (at least, compared to either the bias of the procedure or the bias of the judgment), for it’s the bias of the judge that causally explains why a biased procedure was used, and it’s the procedure that generates the verdict. More generally, it’s both true and important that, when we seek a causal explanation of why a particular biased outcome occurred, the best explanation will often invoke the biases of the people involved, for alternative explanations that do not do so will make the biased outcome look more contingent than it actually was. However, notice
that in ANOTHER BIASED JUDGE, even though the judge’s bias explains why he chose some biased procedure or other, it does not explain why whichever biased procedure he chooses counts as biased. Indeed, by hypothesis the procedure in question was a biased procedure before he chose it, and his choosing it was sensitive to this feature. Even in this case then, it’s not the fact that a procedure is chosen and used by an antecedently biased person that makes it a biased one, or that in virtue of which it counts as biased. Given that the procedure is biased, it is so independently of who employs it; even if the same procedure were used by an (otherwise) unbiased person, it would still count as biased. In contrast, the facts stipulated in ANOTHER BIASED JUDGE are perfectly consistent with the following possibility: the reason why the judge counts as biased is precisely the relationship that he stands in to biased procedures. That is, one might plausibly maintain that even in this case, the fact that the judge counts as biased is a derivative matter, inasmuch as it’s grounded in more fundamental facts such as these: he prefers biased procedures to unbiased ones, he both employs and is disposed to employ a biased procedure in reaching his decision, and so on. Thus, even in a case that at first glance seems to cast doubt on it, the conjecture stands.
3. Processes and Outcomes

I take the considerations offered in the last section to be sufficient to establish the relatively modest claim that biases of people are not always fundamental in the order of explanation, but insufficient to establish the much stronger conjecture that they never are. If the modest claim is true but the strong conjecture turns out to be false, then that would strengthen, rather than weaken, the case for the kind of robust pluralism about bias that I favor. (Since in that event biases of people would be fundamental in some cases but not in others.) Regardless, let’s temporarily set aside the topic of biased people, which will receive extended consideration in later chapters.

In addition to biased people, many of the other items on the list of things that can be biased (Chapter 1, §1) belong to one of two general categories. First, there are processes, practices, and procedures. We will construe this category broadly, so that it includes both psychological processes that are instantiated by individuals and also social practices instantiated by groups (e.g. a particular method of deliberating or reaching a collective decision employed by members of a court or an academic department). Second, there are things that can naturally be viewed as an outcome or product of some process, practice, or procedure. Included in this category are beliefs, judgments, perceptions, verdicts, decisions, and evaluations. Similarly, linguistic phenomena (e.g. biased narratives, texts, descriptions, reports, interpretations, presentations, discussions, and biased testimony) can be viewed as the products of those cognitive and social processes which produce them. A collection of data or a sample belongs to the second category, while the data gathering or sampling procedure that was used to collect it belongs to the first.

With this distinction in hand, imagine a judge who deliberates and reaches the verdict that a defendant is guilty of the crime of which they have been accused. What’s the relationship
between the verdict’s being biased or unbiased and the process’s being biased or unbiased? A natural thought is this: the verdict is biased if and only if the process that leads to it is biased. A further natural thought encodes the idea of explanatory priority: if the judge’s verdict is biased, this is because the way in which he reached it was biased. According to this line of thought, when someone claims that a judge’s verdict is biased, whether this claim is correct depends on whether the process or procedure that the judge employed to reach that verdict itself had the property of being biased. The judge’s verdict counts as biased (if it does) because it’s the outcome of a biased process or procedure. An alternative suggestion reverses that order of explanation: in a case in which the judge arrives at a verdict that’s biased, it’s the biased verdict that’s fundamental, and the process that leads to it counts as biased precisely because it delivers a biased verdict, as opposed to an unbiased one. Against this alternative suggestion, here’s a straightforward reason for thinking that it’s not the biased verdict that’s fundamental. Notice that, typically, there won’t be anything intrinsic to the content of the verdict that makes it biased: if the same verdict had been reached in an unbiased manner (i.e. by way of an unbiased process), then it wouldn’t count as biased. Thus, we can imagine two scenarios. In the first scenario, the judge arrives at the verdict that the defendant is guilty because the defendant has an Irish surname, and the judge tends to think that defendants with Irish surnames are guilty of the crimes of which they’ve been accused. In this scenario, the verdict that the defendant is guilty counts as a biased verdict. But suppose instead that the judge reaches the same verdict in a different way, by carefully considering all of the available evidence, and properly weighing that evidence. If the evidence really does support a guilty verdict, and it’s the evidence which is psychologically efficacious for the judge, then the verdict counts as unbiased. In short, the identical judgment (with respect to its content) might be made by the same individual in two scenarios, but whether it counts as biased or unbiased will differ depending on the way in which it was reached. In this respect, the status of the judgment is derivative or inherited from the status of the process. Here, whether the outcome of the process (the guilty verdict) counts as biased seems to depend more or less entirely on whether the process that produced it is biased. Consider another central case: the relationship between biased samples and biased sampling procedures. What is a biased sample? As the notion is standardly explicated in statistics and elsewhere, the notion of a biased sample presupposes the more fundamental notion of a biased sampling procedure: a biased sample is a sample that’s produced by a biased sampling procedure.3 A sample has the property of being biased because it inherits that property from the procedure by which it’s collected. In contrast, the notion of a biased sampling procedure does not presuppose the notion of a biased sample but can be characterized in independent terms. For example, a sampling procedure will count as biased if some members of the target population have a greater chance of being selected for inclusion in the sample than others. 
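To fix ideas, the standard explication lends itself to a toy sketch; the population, selection weights, and function names below are merely illustrative, and nothing in the argument turns on them:

```python
import random

# A toy target population: three urban members and three rural members.
population = ["urban_1", "urban_2", "urban_3", "rural_1", "rural_2", "rural_3"]

def unbiased_sampling_procedure(pop, k):
    # Every member of the target population has the same chance of selection.
    return random.sample(pop, k)

def biased_sampling_procedure(pop, k):
    # Urban members are three times as likely to be drawn as rural members,
    # so some members have a greater chance of selection than others: the
    # procedure itself is biased. (random.choices draws with replacement,
    # which is harmless for present purposes.)
    weights = [3 if name.startswith("urban") else 1 for name in pop]
    return random.choices(pop, weights=weights, k=k)
```

On the standard explication, any sample returned by the second procedure counts as a biased sample because of its provenance, even on the odd occasion when its composition happens to mirror the population; the bias is defined by the procedure’s unequal selection chances, not by any intrinsic feature of a particular sample.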
Here too, as in the case of the judge’s guilty verdict, the notion of a biased process (the sampling procedure) seems more conceptually and explanatorily fundamental than the notion of a biased outcome (the sample itself). Assuming that this is the right thing to say about these cases, can a more general lesson be drawn? Is the notion of a biased process basic or fundamental, at least compared to things that can be
understood as the outcome of some process? Significantly, however, the intuitive data in this area are messy, for other cases seem to work quite differently. For example, suppose that the judge believes that the black defendants who appear in his court are more likely to have actually committed the crimes of which they are accused than the white defendants. This looks like an absolute paradigm of a biased belief. Of course, the judge’s belief is itself presumably the result or outcome of some psychological process. However, it seems that we’re prepared to classify this belief as biased even before we’re told anything about the process that produced it. Is this because we think that certain beliefs such as this one are intrinsically biased, biased simply in virtue of having the contents that they do, independently of the process that led to them? (Notice that if one did think that some beliefs are intrinsically biased, then one might attempt to use that notion to help explicate the notion of a biased belief-forming process, as the kind of process that tends to produce beliefs with biased contents.) Or does our inclination to classify the judge’s belief as biased even without having been told anything about the process that led to it simply reflect our confidence that any belief-forming process that would have led to this belief must have been a biased process as opposed to an unbiased one; that any legitimate, unbiased process would not have generated this belief (something that would leave open, although not entail, the possibility that biased processes are more fundamental than biased outcomes in the order of explanation, even in this case)?

Having taken note of some of the intuitive messiness of the data in this area, let’s consider the relationship between biased processes and biased outcomes more systematically.
4. Unbiased Outcomes from Biased Processes?

In the last section, we noted that, when a judge arrives at a verdict by a certain deliberative process, the following is a natural idea: the judge’s verdict is biased if and only if the process that produced it is biased. Consider the generalization of that idea:

(?) (1) An outcome of a process is biased if and only if the process that produces it is biased.4

According to this claim, being an outcome of a biased process is both necessary and sufficient for being a biased outcome. Let’s focus first on the sufficiency claim:

(?) (2) An outcome of a process is biased if the process that produced it is biased.

This sufficiency claim is false: in some cases, biased processes can give rise to unbiased outcomes. One important class of cases in which this occurs has the following structure. A process is biased in favor of producing one type of outcome over another; nevertheless, it sometimes produces the outcome against which it is biased. (Compare: a coin biased in favor of heads lands tails.) When this happens, it often seems intuitively wrong to count the
outcome as biased. Consider, for example:

OVERCOMING A BIASED ADMISSIONS PROCESS: An admissions process is biased against applicants of a certain ethnicity. Among other disadvantages, members of this group are automatically eliminated if they fail to meet a certain standard that other applicants aren’t required to meet. Nevertheless, an applicant from the group gains admission because of the unusual strength of her application, which is strong enough to overcome the bias in the admissions process. (Although given the bias in the process it was nevertheless uncertain whether she would be admitted; that is, it isn’t generally true that applicants with the same profile will be admitted.) Indeed, given the strength of her application, she would have been admitted by any reasonable and unbiased admissions procedure that might have been used.
The student’s acceptance is an outcome of the admissions process, and that process is biased. But intuitively, her being admitted is not a biased outcome. More abstractly, imagine a procedure for classifying objects as F or as not-F. Suppose that the procedure is biased in favor of classifying things as Fs, in the following sense: it frequently misclassifies objects that are not-F as Fs, but it never misclassifies an F as a not-F. Indeed, let us suppose that it not only never mischaracterizes an F as a not-F, but whenever it encounters an F, it correctly classifies it as such. (That is, it never “suspends judgment” in response to encountering an F.) The procedure is biased, inasmuch as it systematically overcounts Fs and undercounts not-Fs. Moreover, given that the procedure is biased, so is the derivative process for arriving at beliefs, “Defer to the procedure about whether something is F or not-F,” since this way of arriving at beliefs will inherit the bias of the procedure. Suppose that on a given occasion a person who consistently defers to the procedure judges incorrectly that something that is not an F is an F, as a result of the characteristic bias built into the procedure. In these circumstances, the person’s incorrect judgment counts as a biased judgment. Nevertheless, even if the judgment that something is an F (when made as a result of deferring to the process) counts as a biased judgment, the judgment that something is not an F, when made as a result of deferring to the process, doesn’t seem like a biased judgment. After all, such judgments are perfectly reliable and guaranteed to be true. Compare the phenomenon of testimony against interest, by a testifier who is known to be biased in his own favor. When a person is known to be biased in his favor, the method of consistently deferring to that person’s self-reports is a biased process for arriving at beliefs about the relevant subject matter. However, when the testifier reports something unfavorable about himself and one believes him, the belief that one arrives at in this way need not be biased; indeed, such beliefs might have especially strong epistemic standing, even when compared to beliefs that are arrived at via completely unbiased (although less reliable) processes. Thus, we should reject the sufficiency claim (2) and accept instead: √(3)
UNBIASED OUTCOMES FROM BIASED PROCESSES: In some cases, biased processes can produce unbiased outcomes.
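To fix ideas, the abstract classification procedure described above can be rendered as a toy sketch; the particular error rate and the names are invented for illustration:

```python
import random

def classify_as_f(is_f: bool) -> bool:
    # Verdict of a procedure biased in favor of classifying things as Fs:
    # it never misclassifies an F as a not-F, but it misclassifies a not-F
    # as an F 30% of the time, so it systematically overcounts Fs.
    if is_f:
        return True                 # every F is correctly classified as an F
    return random.random() < 0.3    # a not-F is sometimes miscounted as an F
```

A believer who consistently defers to this procedure inherits its bias, yet, as noted above, the procedure’s negative verdicts can be issued only for genuine not-Fs, and so are guaranteed to be true: outcomes of a biased process that do not themselves seem biased.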
What can we learn from the counterexamples to (2)? Again, the counterexamples have the following structure: a process that’s biased in favor of one possible outcome over another produces the disfavored outcome, and that outcome would also have been produced by any
unbiased process that might have been used instead. When this happens, it can seem intuitively wrong to count the outcome as biased. Can we qualify the sufficiency claim (2) so as to block that kind of objection? The obvious fix runs as follows. Let’s say that an outcome of a biased process is bias-congruent when it aligns with the content of that bias. (Compare: a coin that is biased in favor of heads lands heads.) And let’s say that an outcome of a biased process is bias-incongruent when it fails to align with that bias. (Compare: a coin that is biased in favor of heads lands tails.) Then, we can put the lesson of the counterexamples like this: sometimes, when a biased process produces a bias-incongruent outcome, the outcome doesn’t count as biased. Consider then the following qualified claim:

(?) (4) If a biased process produces an outcome that is bias-congruent, then that outcome is biased.

Is this qualified claim true? No, it isn’t. Notice that, even when a biased process produces an outcome that aligns with its bias, the fact that the process is biased (or the fact that it’s biased in the way that it is) might play no role at all in producing an outcome of that type. Perhaps the same outcome would have occurred even if an unbiased process had been used instead, or even if a process had been used that’s biased against that outcome. In that case, it can seem incorrect to describe the outcome as biased. Consider, for example, the following variant admissions case:

BIASED ADMISSIONS, CASE #2: As in the previous case of OVERCOMING A BIASED ADMISSIONS PROCESS, an applicant submits an unusually strong application. As in the previous case, the application is sufficiently strong that she would’ve been admitted by any reasonable and unbiased admissions process that might’ve been used, and indeed, even if the admissions procedure that had been employed had been significantly biased against her (e.g. by holding members of her ethnic group to a higher standard). In fact, however, the admissions process is biased in a way that favors her, in the following respect: the process tends to unduly favor applicants who write their admissions essays in a particular style, and she happens to have written her essay in that style. (But if she hadn’t, she would’ve still been an “easy admit.”) The applicant is admitted.
Is the applicant’s admission a biased outcome? Intuitively, it seems as though the answer is No. Even though the admissions process is biased, and biased in her favor, her gaining admission doesn’t count as a biased outcome, given its robustness. The apparent lesson is this. Even when a biased process is causally responsible for producing an outcome that aligns with its bias, the outcome might still fail to count as biased, for the fact that the process was biased (or biased in the way that it was) might be utterly irrelevant to the fact that that outcome resulted. It seems, then, that it’s not enough for a biased process to be causally responsible for producing the bias-congruent outcome; it also matters whether the fact that the biased process is biased played any role in producing the outcome. Here is a principle that I think is true:
√(5) An outcome is biased if: (i) the process that produces it is biased, and (ii) the
outcome is bias-congruent, and (iii) the fact that the process is biased in the way it is is causally responsible for the fact that the outcome is bias-congruent.

According to this principle, conditions (i)–(iii) are jointly sufficient for an outcome’s being biased. Of course, it’s unsurprising that one can arrive at a sufficient condition by conjoining further conditions in this way. Even if (5) is true, then, we might wonder whether it’s more complicated than it needs to be, and whether there’s some significantly weaker—and therefore, simpler and more illuminating—set of jointly sufficient conditions in the same neighborhood. In particular, once we allow ourselves to appeal to the idea of a process’s bias being causally responsible for the outcome’s turning out the way that it does, does it still matter that the outcome is bias-congruent? What if the fact that a process is biased in a certain way is causally responsible for producing a bias-incongruent outcome?5

In order to pursue this question, let’s make things more concrete. Consider one of the mechanisms once employed by Harvard University to hold down the total number of Jewish students it admitted. Because of the perception that explicitly and directly discriminating against Jewish applicants wouldn’t have been socially acceptable given the milieu, one of the mechanisms employed to achieve the desired end was to give a great deal of weight to “geographical diversity” in the admissions process.6 In context, this aspect of the admissions process was an effective way of holding down the number of Jewish applicants admitted, since it was known that Jewish applicants were disproportionately clustered in a relatively small number of large cities, especially New York. Given that an admissions process with this feature effectively holds down the number of Jewish students admitted, and it’s used for this reason, I assume that it’s correct to describe it as an admissions process that’s biased against Jewish applicants. Notice, however, that inasmuch as such a process doesn’t directly discriminate against Jewish students, but rather is set up to discriminate against them indirectly or by proxy, by way of discriminating in favor of geographical diversity, such a biased policy might very well work in favor of particular Jewish students, so long as it’s consistently applied.7 Consider, for example, the following hypothetical case:

THE COUNTERSTEREOTYPICAL JEWISH APPLICANT: A particular Jewish applicant hails from a rural community that’s nowhere near a major city. In terms of his academic credentials, he’s a relatively weak applicant who wouldn’t be admitted if an unbiased admissions process were used. However, because a biased admissions process is used, he is admitted, even though the admissions process is biased against Jewish applicants (both in fact and in conception), and he’s a Jewish applicant.
Given that the applicant wouldn’t have been admitted if an unbiased procedure had been used, and that the process’s being biased in the way that it is was both causally responsible and necessary for his being admitted, does his being admitted count as a biased outcome? I think that it does. If that judgment is correct, then the perhaps surprising lesson might be stated as follows: in some cases, an outcome counts as biased in virtue of being produced by a biased process, even though that outcome is a type of outcome that the biased process selects against. Of course, in the case of the counterstereotypical Jewish applicant, there is a complication: the admissions process is both biased against Jewish applicants and biased in
favor of rural applicants, and the applicant benefits, decisively, from the latter bias. Indeed, this fact might lead someone to deny that the process really is biased against Jewish applicants after all: rather, it’s biased-against-Jewish-applicants-from-cities and biased-in-favor-of-Jewish-applicants-from-rural-areas. However, it would be fallacious to conclude from the fact that some Jewish applicants aren’t harmed by or even benefit from the biases of the process that it isn’t biased against Jewish applicants. (Compare: a “literacy requirement” for voting eligibility that intentionally and effectively disenfranchises members of a given race at a disproportionate rate can correctly be described as biased against members of that race; and this is true even if the requirement is consistently applied across races, so that some members of the targeted race aren’t disenfranchised by it, and some members of non-targeted races are.) The correct description of the case, I think, is this: the process is both biased against Jewish applicants and biased in favor of rural applicants, and the second bias is the means by which the first bias is realized, effectively though imperfectly.8
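The structure of such proxy bias can be put schematically; the scores, bonus, and threshold below are invented, and nothing turns on them:

```python
def admit(applicant: dict, threshold: int = 80) -> bool:
    # A selection rule biased by proxy rather than directly: it never
    # mentions the targeted group, but awards a bonus for rural origin,
    # a trait anticorrelated with membership in that group. Consistently
    # applied, it holds down the group's admission rate in the aggregate
    # while favoring the group's counterstereotypical members.
    score = applicant["score"]
    if applicant["rural"]:
        score += 15  # the proxy at work
    return score >= threshold

# Most members of the targeted group are urban, so the rule works against
# them overall; the rare rural member is helped by the very mechanism
# through which the bias against the group is realized.
print(admit({"score": 70, "rural": True}))    # counterstereotypical member: admitted
print(admit({"score": 78, "rural": False}))   # typical member: rejected
```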
5. Biased Outcomes from Unbiased Processes?

In the last section, I argued that biased processes sometimes produce unbiased outcomes. Even if that’s correct, and so the fact that an outcome is produced by a biased process is not sufficient for the outcome’s being biased, that leaves open the possibility that it’s necessary. If an outcome is biased, does it follow that the process that produces it is biased? Or do unbiased processes sometimes produce biased outcomes? What is at issue is the truth of the following principle:

(?) (6) An outcome is biased only if the process that produces it is biased.

Are there counterexamples to this principle? Let’s consider some cases designed to put pressure on it. One relatively common and practically important class of cases in which an unbiased process directly contributes to the production of a biased outcome is one in which an unbiased process is fed biased inputs. For example, a method for aggregating data might be unbiased but nevertheless yield biased outputs or judgments if the data that it’s fed are biased data.9 However, when this type of example is used in an attempt to show that biased outcomes can emerge from unbiased processes, it seems not to go to the heart of the matter. For if (as will at least often be true) the original biased data or inputs that are fed into the unbiased method are themselves products or artifacts of some earlier biased process, then the example will still be one in which a biased process plays an essential role in the eventual emergence of a biased outcome. A more convincing example would be one in which a biased outcome is wholly produced by a paradigmatically unbiased process. As the first step toward generating a candidate example, recall a case discussed earlier, in which a biased process generates a biased outcome in a predictable way:
BIASED DESCRIPTION: I describe Jim to you. Although everything that I tell you about Jim is true, I’m careful to include anything that I know that reflects negatively on him, while carefully filtering out any information that casts him in a favorable light, or which would tend to mitigate or provide context for the negative information. The description that I produce in this way thus presents Jim in a misleadingly negative way.
Of course, BIASED DESCRIPTION is not itself a candidate to show that a biased outcome can arise from an unbiased process, inasmuch as the process that I use to generate the description is a paradigmatically biased process. However, we can also imagine that the identical, overly negative description of Jim is produced in a paradigmatically unbiased way—say, as the result of a random or stochastic process. Consider the old saw that an infinite number of monkeys typing away on an infinite number of typewriters would eventually produce the complete works of Shakespeare. If, in addition to the works of Shakespeare, the monkeys also produce the same description of Jim that I produce, have they too produced a biased description of him? Of course, one might reasonably deny that a string of letters unintentionally produced by monkeys could actually count as a description of Jim. In order to sidestep such issues, consider the following case:

RANDOMLY GENERATED DESCRIPTION: Those who know Jim intentionally produce a description of him. They commit ahead of time to using a rather eccentric, two-step process to arrive at the description. First, they compile an enormous, comprehensive list of the known facts about Jim—including the good, the bad, and the neutral. Then, with an eye towards producing a more manageable description, they employ a stochastic process that randomly selects a limited number of facts from the list to be included in the description. As it happens, the facts selected are exactly the same facts as those included in the intentionally and meticulously biased description of Jim that I offer you in BIASED DESCRIPTION.
Given that the process of random selection was no more likely to produce a misleadingly negative account of Jim than a misleadingly positive or neutral one, the process itself isn’t biased against Jim, or in favor of producing a negative description of him. In these respects, it differs from the process that I use to arrive at that description. Nevertheless, inasmuch as the paradigmatically unbiased process and the paradigmatically biased process produce the same description, and that description clearly counts as biased when it’s produced by the biased process, there is at least some pull to thinking that it counts as biased when it’s produced by the random process as well. Intuitively, someone who knows Jim well, and who is presented with the relevant description, will judge that it’s a biased description, even if she’s ignorant of its provenance.10 If she later learns that it was constructed by someone who intended to produce a biased description, she won’t be surprised. If, on the other hand, she later learns that the description was produced in a random way, she will certainly be surprised, but is it incumbent upon her to retract her earlier judgment that the description is a biased one? Suppose that she sticks to her original judgment that the description is biased against Jim, on the grounds that it consistently portrays him in a misleadingly negative way. Is that a mistake on her part? If not, then the case is one in which a paradigmatically unbiased process gives rise to a biased outcome. Given that both the paradigmatically biased and the paradigmatically unbiased process produce the same description, it seems as though we can also ask about the description itself, in abstraction from the way in which it’s produced on any particular occasion. Intuitively, the description itself has certain properties. For example, given that it’s made up exclusively of
true propositions about Jim, it seems that the description as a whole will also have the property of truth. If the description as a whole can be a true representation of Jim, can it also be a biased representation of him, in virtue of the fact that it consistently misrepresents him in a particular way? Someone who thinks that biased descriptions can only be produced by biased processes should insist, I think, that strictly speaking it's not the description itself that has the property of being biased (in anything like the same way it has the property of being true), since only when it's produced in certain ways will anything count as biased. Rather, what has the property of being biased are certain tokenings or concrete instances of the description: for example, what is written down in ink on certain pages, or what is spoken aloud on a particular occasion. As against this, suppose that Jim's friend once again insists that the description itself is biased against Jim, given the way in which it consistently presents him in an overly negative way. This seems perfectly intelligible. There is, I think, at least some intuitive pull to the idea that the description itself can count as biased, regardless of how it's produced. However, even if there is some pull to that idea, there are also countervailing considerations that pull in the opposite direction. Compare the RANDOMLY GENERATED DESCRIPTION case to the following example, which is in some respects structurally similar:

UNBIASED COIN. A fair coin is flipped repeatedly in an unbiased way. Improbably, the coin repeatedly lands heads, in a very long run of flips.
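Just how improbable such a run is can be put precisely. For a fair coin flipped independently, a run of n consecutive heads has probability (the length 20 below is an arbitrary illustration):

\[
P(n \text{ consecutive heads}) = \left(\tfrac{1}{2}\right)^{n},
\qquad \text{e.g.} \quad
\left(\tfrac{1}{2}\right)^{20} = \tfrac{1}{1{,}048{,}576} \approx 9.5 \times 10^{-7}.
\]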
Question: is anything in this story biased? I don't think so. By hypothesis, neither the coin nor the manner in which it's flipped is biased, and there is certainly no reason to attribute bias to the outcome of any particular flip. The only remotely plausible candidate in the picture for something which might count as biased is the collection of outcomes. But on reflection, there seems little reason to predicate bias of the collection. Although the long run of heads is antecedently improbable and surprising when it occurs, there is no reason to describe that collection of outcomes as biased, given that it was produced in a random way. But if that's correct, then it seems like we should similarly say: although the random description generator's repeatedly selecting facts that cast Jim in a negative light is both antecedently improbable and surprising when it happens, there is no reason to go beyond this and attribute bias to the resulting description, given that it was produced in a random way. Given their similarities, it seems that the RANDOMLY GENERATED DESCRIPTION case and the UNBIASED COIN case should receive the same verdicts.

In fact, however, I want to suggest that we should pull the two cases apart—the description of Jim counts as biased, even though nothing counts as biased in the coin case—and that this reflects something deep about the nature of bias and the circumstances in which we attribute it. The reason why even a randomly generated set of propositions can count as a biased description of Jim in the right circumstances is precisely because we can consider it as a description of Jim, as opposed to a mere set of random propositions. When we consider those propositions as a description of Jim, we make salient or bring into play certain norms or standards of correctness that apply to descriptions—including a norm of accurate or non-misleading representation. The description of Jim counts as biased (regardless of how it was produced) because it departs from that standard of correctness in a
consistent and patterned way. In contrast, the same set of propositions considered not as a description of Jim but simply as a randomly generated collection does not count as biased, just as the collection of outcomes that consists of the long run of heads doesn't count as biased in the coin case.11

The RANDOMLY GENERATED DESCRIPTION case is, I submit, a case in which a biased description emerges from a paradigmatically unbiased process. More generally, although biased outcomes typically emerge from biased processes, we should accept:

√ (7) BIASED OUTCOMES FROM UNBIASED PROCESSES: Unbiased processes can sometimes produce biased outcomes.
This parallels the fact, argued for above, that although biased processes typically generate biased outcomes, in some cases they can generate unbiased outcomes. (Compare: although biased wholes typically have some biased parts, and bias at the level of the parts typically generates bias at the level of the whole, there is no necessity in either direction.)
6. Pluralism As we’ve seen, in some cases, the notion of a biased process is fundamental. For example, whether a sample counts as biased depends on whether it was produced by a biased sampling procedure, and whether a judge’s guilty verdict counts as biased depends on whether the process by which it was reached is biased. On the other hand, I’ve suggested that there are also cases in which we’re prepared to count something as biased where this isn’t a matter of the relationship that it stands in to some biased process. This is true, for example, of certain kinds of systematically misleading descriptions, or the judge’s false belief that black defendants are more likely than white defendants to be guilty of the crimes of which they are accused. If these claims are correct, then that would vindicate the kind of robust pluralism about bias described and endorsed at the beginning of this chapter: √ROBUST PLURALISM ABOUT BIAS: (i) many different types of things are genuinely biased, and (ii) no one of these types is fundamental in every context. If in fact robust pluralism about bias is true, we might wonder why it’s true. Given that bias attaches to many different types of things, and no one of these is privileged or foundational for the rest, why is the property of bias as promiscuous as it is? In general, pluralists about this or that notion often reject the demand for an explanation for why the pluralism that they favor holds, on the grounds that no such explanation is necessary, or to be expected: anti-pluralism should not be assumed to be the default. However, if the general line of thought sketched at the end of the last section is on the right track, then I think that there is a promising candidate explanation for why pluralism about bias would be true, a speculative explanation that runs as follows. As suggested there, biases
characteristically involve systematic departures from norms or standards of correctness. The reason why many different kinds of things can count as biased (regardless of whether some of these are more fundamental than others) is that many different kinds of things are subject to and can systematically depart from norms or standards of correctness in a way that's characteristic of bias. (For example, this is true of people, processes, and outcomes, in addition to things of other types.) Moreover, the reason why no single type of thing is always fundamental in the order of explanation is that the norms or standards relative to which something can count as biased do not all apply to the same type of thing in the first instance. For example, if some such norms apply most directly or in the first instance to processes, while other norms apply most directly or in the first instance to outcomes, then this would make sense of the fact that neither processes nor outcomes are always prior to the other in explaining why biased processes and biased outcomes are biased. Of course, this speculative hypothesis depends on the more basic idea that biases characteristically involve systematic departures from norms or standards of correctness. This idea is central to the account of bias developed in Part II of this book.
1 Consider also the currently widespread discussion of biased algorithms. Considered as a set of well-defined rules or instructions, an algorithm itself, as opposed to its concrete implementations, is naturally understood as an abstract entity.

2 In discussions of this conjecture, I've sometimes had the sense that one source of resistance to it is a worry that it threatens to let people off the hook for their pernicious biases. While there might very well be good reasons to reject the conjecture, I don't think that this is one of them. Compare the property of being biased with the property of being evil. Someone might think that Hitler was evil while denying that this is a brute or fundamental fact about him, as opposed to one that's grounded in more fundamental facts about certain evil things that he did, certain evil intentions and views that he held, and so on. Of course, a person who takes this view need not be in any way minimizing the significance of the fact that Hitler himself was evil.

3 See, e.g., Lane et al. 235, 660.

4 Notice that this biconditional is perfectly consistent with (and therefore, neutral between) the view that (i) biased outcomes are biased in virtue of having been produced by biased processes, and the rival view that (ii) biased processes are biased in virtue of producing biased outcomes, notwithstanding the fact that it is more naturally heard as suggesting the former. Compare: as is sometimes noted, the moral-theological claim that an action is right if and only if it is approved of by God is neutral between the view that (iii) right actions are right in virtue of being approved of by God, and the rival view that (iv) God approves of right actions because they are right.

5 It might be thought that examples like OVERCOMING A BIASED ADMISSIONS PROCESS suggest that, whenever a biased process generates an outcome that's contrary to the bias of the process, the outcome counts as unbiased, or at least, not biased. However, such examples don't have any direct bearing on the question at issue. For intuitively, OVERCOMING A BIASED ADMISSIONS PROCESS is not a case in which the fact that the admissions process is biased is causally responsible for (or plays any causal role in) the applicant's being admitted. In this respect, it differs from the kind of case that I consider next.

6 On this aspect of "The Harvard Plan," see Karabel (2005) and Pollak (1983).

7 I'm not sufficiently knowledgeable about the historical details of the actual biased admissions policies used to know if the structural possibility described in the main text ever actually occurred, or even whether it could have occurred given the more specific features of those policies. Suffice it to say that given only what has been stipulated thus far, the suggested possibility is a genuine one.
8 Compare: one might pursue the goal of having true rather than false beliefs by following a policy of believing what's well-supported by one's evidence and not believing what isn't well-supported by one's evidence. This might be an effective means of achieving the goal (and more effective than any other means available) even though it's a fallible means, and even though one knows in advance that by consistently following the policy one will thereby sometimes end up believing what is false (in cases in which one's evidence is misleading). For more on the general phenomenon of one bias being realized by another, distinct bias, see Chapter 5, §1.

9 On one view, this is actually the phenomenon that's at work in cases of so-called "biased algorithms." For this suggestion in the case of algorithms used for risk assessment in the criminal justice system, see Begby (2021: Ch. 8).

10 Compare the example discussed in §3: upon learning that the judge believes that the black defendants who appear in their court are more likely to have actually committed the crimes of which they're accused than the white defendants, we classify the judge's belief as biased, even before learning about its provenance.

11 If it's true that the same collection of propositions can count as biased when considered as a description of Jim, but not when it's considered simply as a random collection, then this is an instance of the kind of relativity discussed in Chapter 1, §2 above.
PART II
BIAS AND NORMS
3 The Norm-Theoretic Account of Bias

Both in everyday life and in the sciences, a great deal of our thought and talk about bias seems to be captured by the following idea: a bias involves a systematic departure from a norm or standard of correctness. This is the fundamental idea of what I will call the norm-theoretic account of bias.
1. The Diversity of Norms

If a bias involves a systematic departure from a norm, which norm is relevant? That varies from case to case. Let's begin by briefly surveying some of the more important norms, relative to which a person or thing might count as biased.

(1) Often, the relevant norm is truth or accuracy. Statisticians divide error into two types, random and systematic. Intuitively, random error is a matter of missing all over the map, while systematic error is a matter of missing in some direction. A weather forecasting model whose predictions of the average daily temperature are regularly off the mark because they are sometimes too high and sometimes too low errs randomly as opposed to systematically; the model is thus unbiased even though it's unreliable. In contrast, a weather forecasting model that's regularly off the mark because it consistently predicts that the average daily temperature will be higher than it turns out to be errs systematically; it's thus a biased model. Similarly, in measurement theory, bias is the difference between the expectation of a measurement and the true underlying value (this definition is displayed formally just after this list). In these and in many other cases, the relevant norm is truth or accurate estimation. On the view that a bias involves a systematic departure from a norm, this gets subsumed as a special case, albeit a particularly important and common one.

(2) In other cases, the relevant norm is practical: for example, the norm of acting in a way that maximizes expected utility or expected value. Consider an agent who exhibits status quo bias.1 Someone who exhibits status quo bias treats the fact that a state of affairs is the status quo as a reason to prefer it, and they act accordingly. Of course, sometimes one maximizes expected value by acting in a way that preserves the status quo, but an agent who favored the status quo on all and only those occasions would not be someone to whom status quo bias
could be correctly attributed. Rather, the agent with status quo bias is disposed to prefer the status quo even when some other option would maximize expected value. In some cases, they fail to comply with the norm of maximizing expected value; moreover, when they fail to comply, their failures aren't random but rather exhibit a characteristic pattern. It's this that makes an agent in the grips of status quo bias a biased agent, as opposed to an agent who is merely imperfectly rational when judged by the usual decision-theoretic standards.

(3) Sometimes the norm is epistemic: for example, the norm of believing in accordance with the evidence. Suppose that each week I confidently believe that my favorite team will win its game, even when the evidence available to me suggests that it won't. In such cases, I depart from the norm of believing in accordance with the evidence. Moreover, my departures are not unsystematic, as would be the case if I was sometimes too optimistic and sometimes too pessimistic in more or less equal measure. I am thus a biased believer. Similarly, a believer might count as biased in virtue of systematically departing from epistemic norms involving probabilistic coherence. In the "heuristics and biases" tradition inaugurated by the seminal work of Tversky and Kahneman (1974), it's assumed that norms of probabilistic coherence are genuine norms of rational belief, ones that an ideally rational believer would respect. For example, if in a certain experimental set-up we assign a higher probability to the conjunctive proposition that Linda is a bank teller and is active in the feminist movement than to the proposition that Linda is a bank teller, then we're to that extent irrational, inasmuch as no conjunction can have a higher probability than one of its conjuncts. The classification of the case as one that involves not only irrationality but also bias depends on Tversky and Kahneman's diagnosis: the deviation from probabilistic coherence isn't a matter of random error, but rather a systematic one, inasmuch as it's generated by our employment of "the representativeness heuristic."2

(4) Sometimes the norm in question is a moral norm. Consider a businessman who often cheats his customers. In doing so, he violates an important moral norm, but that doesn't mean that he's biased: perhaps he's a thoroughly unbiased crook, who cheats whenever the opportunity presents itself. However, if he's more likely to cheat his black customers than his white ones, or his older customers as opposed to his younger ones, then his departures from the norm exhibit a systematic pattern, and he's not only corrupt but also biased.

(5) In still other cases, the norm might be a norm of justice. Consider the norm according to which "Only the guilty should be punished." When an innocent person is punished for a crime that they didn't commit, that's an injustice; and a system of criminal justice in which such occurrences are relatively common is to that extent an unjust system. Still, it doesn't yet follow that a system that's unjust for this reason is a biased system, for if its mistakes in the administration of punishment are sufficiently random, it will escape the latter charge. But if, on the other hand, the ranks of the innocent who are unjustly punished are disproportionately of a certain race, or class, or political point of view, then the system is not only unjust but biased.
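The statistical notions invoked in examples (1) and (3) can be stated compactly. The following display uses standard textbook notation, imported here for illustration rather than drawn from this chapter: the bias of an estimate is the gap between its expectation and the true value, mean squared error splits into a random and a systematic part, and no conjunction can be more probable than either of its conjuncts.

\[
\mathrm{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta,
\qquad
\underbrace{\mathbb{E}\big[(\hat{\theta} - \theta)^{2}\big]}_{\text{mean squared error}}
= \underbrace{\mathrm{Var}(\hat{\theta})}_{\text{random}}
+ \underbrace{\mathrm{Bias}(\hat{\theta})^{2}}_{\text{systematic}},
\qquad
P(A \wedge B) \le P(A).
\]

On this accounting, the unreliable but unbiased forecasting model has zero expected error but large variance, the biased model has a nonzero expected error, and the Linda judgment violates the final inequality, since being a bank teller who is active in the feminist movement entails being a bank teller.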
Having offered these examples to illustrate the core idea that biases involve systematic departures from norms, let me offer some further commentary on that idea. First, as the examples suggest, the sense of “systematic” in play here is the sense that contrasts with
"random"—as in "systematic versus random error"—as opposed to the related but different sense of "systematic" in which it means "done according to a fixed plan or system." Of course, some paradigmatic cases of bias do involve agents acting according to a fixed plan or system. (Consider, for example, the ways in which the institutions and practices of explicit racial segregation were maintained in the American South throughout much of the 20th century.) But other paradigmatic cases of bias conspicuously lack this feature: this is true, for example, of many of the unconscious biases that have received much attention from psychologists and others in recent years.

Systematic errors have a consistency that random errors lack. But talk of consistency suggests that multiple instances are involved, as opposed to single instances. Does the idea of trying to understand bias in terms of systematic departures from norms founder on the fact that bias can be displayed in one-off cases, cases that are not part of any larger pattern? Imagine a judge who regularly reaches their decisions in an unbiased manner, except for a single, isolated occasion when they allow considerations having to do with race to unduly influence their decision; after that, they return forevermore to being their usual unbiased self. Intuitively, the judge's decision is biased, notwithstanding the fact that it isn't part of any larger systematic pattern, and no account of bias should tell us otherwise.

What might a proponent of the norm-theoretic account say about the apparent possibility of one-off cases of bias? According to the analysis offered in the last chapter, the judge's decision counts as biased because it's the result of a biased process. Suppose that as it happens this particular process of reasoning is only ever used once, by this judge or by anyone else. Even in that case, we can consider its employment across a range of cases, including merely possible ones. More generally, even in one-off cases of genuine bias, systematicity will be exhibited across relevantly similar possible cases. (Compare the way in which a coin that's biased in favor of heads will exhibit its bias across a range of relevantly similar possible cases, even if as it happens it's only ever flipped once in the actual world.)

Turn next from "systematic" to "norm." As noted in the Introduction, the term "norm" is multiply ambiguous. What sense of the term is relevant here? Not the sense of norm which is in play when one talks about "statistical norms": even if the members of a family are all unusually tall, and so are outside the statistical norm in that respect, this has no tendency to show that the family, or any of its individual members, is biased. Nor is the relevant sense of norm that which figures in the academic literatures on "social norms" or "gender norms," according to which norms are, roughly, a society or group's unwritten rules, conventions, or expectations about how people should conduct themselves.3 An eccentric person who habitually wears graduation attire to the beach violates a social norm about proper dress,4 but that doesn't mean that they are biased. As the five examples given at the beginning of this section suggest, the sense of "norm" that's relevant to the norm-theoretic account of bias is neither the statistical nor the social-expectation sense.
"Maximize expected value" might be a genuine norm that governs rational action, even if, as a statistical matter, agents frequently fail to follow it, and even if there is no social expectation that they will. Similarly, "proportion your beliefs to the evidence," or
"have beliefs that are probabilistically coherent," might be genuine norms of belief, even if, as a statistical matter, believers seldom if ever meet these normative ideals, and there is no social expectation that they will. Of course, the claim that any of these putative norms is in fact a genuine norm is itself contestable and contested: there are those who deny that maximizing expected value is a genuine norm on action, just as there are those who deny that proportioning one's beliefs to the evidence, or having probabilistically coherent beliefs, are genuine norms of belief. Whether a putative norm is a genuine norm is always a substantive question. Because in practice more or less any principle that's claimed to be a genuine norm will be a controversial case, any example that I use to illustrate the norm-theoretic account will itself be subject to controversy. As I understand it, a proponent of the norm-theoretic account is not committed to the claim that any specific principle is a genuine norm in the relevant sense. (Two theorists might both accept the norm-theoretic account while radically disagreeing about what the genuine norms are.) For the norm-theoretic account, what's crucial is this: when we're concerned with bias in the pejorative sense (in the sense in which it's at least prima facie objectionable), what matters are systematic departures from genuine norms, in the sense in which some think, while others deny, that maximizing expected value, proportioning one's beliefs to the evidence, or having probabilistically coherent beliefs are genuine norms.5

Just how neutral is the norm-theoretic account with respect to substantive issues about which philosophers disagree? Consider a particularist who denies that there are any general norms or principles that play a significant role in normative thought. According to the particularist, although the virtuous person will recognize what they have most reason to do on particular occasions and act accordingly, the virtuous person does not arrive at this knowledge by applying some general principle or principles to the concrete circumstances in which they find themselves; nor is the correctness of their acting in this way ultimately underwritten by the truth of any such principle. Is the particularist's denial that there are any such general principles or norms consistent with the norm-theoretic account? Yes, it is. Suppose that the particularist is correct in denying that there are any substantive norms or general principles that play the kind of foundational role that others have taken them to play. Still, here is one general norm that even the particularist should be happy to endorse: "Do whatever you have most reason to do, given your particular circumstances." Once even this weak norm is admitted, the norm-theoretic framework can be applied straightforwardly. Specifically, given an agent who sometimes fails to do what they have most reason to do, we can ask: are their failures random, or do they betray some systematic pattern? Suppose that the agent tends to fail to do what they have most reason to do when the interests of their children are involved.
In that case, the agent counts as biased because of the way in which they are disposed to depart from the norm of doing what they have most reason to do, and this is not something that the particularist need or should deny.6 The norm-theoretic account is an account of bias, in the pejorative sense of “bias.”7 As such, it is not an account of our practices of attributing bias in the pejorative sense, as opposed to the phenomenon itself, as it exists in the world, independently of our practices. However, as we will see, although the norm-theoretic account is not itself an account of our
practices of attributing bias, it naturally gives rise to certain ideas about how bias attributions work, and much of its theoretical utility consists in these connections. The norm-theoretic account is not offered as a reductive analysis of the notion of bias. Thus, a proponent of the account isn’t committed to the strong identity thesis that biases just are systematic departures from norms, or dispositions to systematically depart from norms. Indeed, as we’ll see, there are reasons to doubt that there is so much as an extensional equivalence here: there are some cases involving systematic departures from norms that we wouldn’t ordinarily classify as cases of bias, and there are also contexts in which we’re happy to talk of bias even though we don’t believe that any genuine norms have been violated, systematically or otherwise. Questions about such anomalous cases are taken up in Chapter 7, where they are used to fuel further development and refinement of the basic idea. The burden of this chapter and the immediately following ones is to show that such further development and refinement is worth the trouble: that the general framework provided by thinking of biases in terms of systematic departures from norms is an illuminating and fruitful one.
2. Disagreement It’s a familiar fact that people often disagree about whether a particular person or thing is biased. Employing the general framework provided by the norm-theoretic account, let’s distinguish a number of different forms that such disagreements can take. (1) First, you and I might agree about the relevant norm, and agree that some departure from it has occurred, but disagree about whether that departure is systematic, or sufficiently systematic to warrant the charge of bias. Suppose that I claim that the Post has a progressive bias in the way that it covers the news but you disagree. In attempting to substantiate the charge of bias, I present to you exhibits p1…pn, cases in which, I claim, the Post has departed from the norms of good journalism, in ways that unduly favor progressive positions. You agree with my assessment of those cases but nevertheless deny the charge of bias. This is a perfectly consistent response to the charge on your part. After all, you never claimed that the Post is infallible in conforming to the norms of good journalism, only that it’s unbiased. Moreover, even if the Post sometimes departs from the norms of good journalism in a way that unduly favors progressive views, it doesn’t yet follow that it’s biased. Suppose that in an attempt to rebut my charge, you produce cases c1…cn, cases in which, you claim, the Post has departed from the norms of good journalism in a way that’s unduly unfavorable to progressive views. In responding in this way, you dispute the charge of bias by providing evidence against systematicity.8 (2) A second way in which we might end up disagreeing about whether bias is present is the following. Even if we agree that departures from some putative norm occurred, and also that those departures were systematic, one of us might deny that this is a genuine case of bias on the grounds that the putative norm isn’t actually a genuine norm. Consider, for example,
the following case:

SPECIESISM. You stand accused of "speciesism," or bias in favor of human beings, on the grounds that you're more concerned about the suffering of innocent human beings than about the suffering of innocent non-human animals. You freely admit that you care more about human beings, but you deny that this amounts to a bias on the grounds that there is good reason for your pattern of concern, and therefore no genuine norm that would require you to be indifferent between the suffering of human beings and the suffering of non-human animals.9
In some cases, then, we might find ourselves in disagreement about whether an agent is biased because of an underlying disagreement about which putative norms are genuine norms. Interestingly, however, even if we do disagree about which putative norms are genuine, we might nevertheless still agree that an agent is biased in the relevant way. You judge that Jim is guilty of status quo bias, because he often chooses to stick with the status quo even when departing from it would maximize expected utility. I deny that maximizing expected utility is a genuine norm, but I nonetheless agree that Jim exemplifies status quo bias because of the way his actions systematically depart from what I take to be the correct practical norm(s). That seems perfectly coherent, and when we agree about Jim it seems as though this is something that we might both do clearheadedly. Nevertheless, there is a residual question or puzzle in the vicinity: if you and I would invoke different (and perhaps incompatible) norms in explaining why Jim counts as biased, why would we attribute the same bias to him? Here there are at least two possibilities. First, it might be that we really do agree about the relevant norm, after all. For example, perhaps the relevant norm in this case is the relatively trivial and more encompassing "perform the most choiceworthy action available," and that norm is endorsed by both of us—although we would offer incompatible accounts of what makes an action choiceworthy if we were invited to theorize about it. Alternatively, perhaps our willingness to attribute the same bias is due to the fact that we agree about the cause of Jim's systematically departing from whatever norm each of us takes to be that which should govern his actions. (You think that the relevant norm is N while I think that the relevant norm is the incompatible N*, but we agree that he's guilty of status quo bias because we agree that it's his excessive attachment to the status quo that leads him to depart from the norm that he should be following.)

(3) A third and final way we might disagree about whether a given attribution of bias is warranted is this: we might agree about the content of the relevant norm but disagree about whether there has in fact been a departure from it, systematic or otherwise. Consider the norm of treating like cases alike. A progressive might claim that the significant and persistent gap between male and female earnings is indicative of large-scale sexist biases. A certain kind of conservative disputes the charge by arguing that women don't actually earn significantly less than men, once all of the relevant variables are fully taken into account.10

In sum, one might argue that a charge of bias is misplaced, either because the person or thing accused of bias didn't actually depart from the relevant norm or because, insofar as they did depart from the norm, their departure was insufficiently systematic; or else on the grounds that the alleged norm isn't a genuine norm at all. Of course, one might also think that the charge of bias is unwarranted on multiple grounds.
3. The Perspectival Character of Bias Attributions

Bias and impartiality is in the eye of the beholder.
—Samuel Johnson
Often, accusations of bias inspire not only denials but also countercharges of bias. In such cases, a person who denies the original accusation of bias claims that it's actually the accuser who is biased, and it's this that explains why they mistakenly think that bias is present even though it isn't. In this section, I want to show how thinking of biases in terms of systematic departures from norms provides insight into this phenomenon. First, let's consider a couple of typical—and extremely common—examples of the phenomenon in question:

• Immediately after a loss in a close game in which several crucial calls went against their team, outraged supporters of the losing team post messages on an online discussion board claiming that the officiating was biased against their team. Others who watched the game reply by denying those allegations and suggest that those who make them do so only because they themselves are partisan fans whose judgments are biased in favor of their own team: if they were disinterested observers rather than biased partisans, they would not think that the officiating was biased.

• Political conservatives often charge that the New York Times manifests a progressive bias in the way that it covers the news. Progressives often reply not only by denying the allegation but also by claiming that, to the extent that the allegation is sincere and not made in bad faith, it's rooted in the fact that the conservative critics who make it are themselves in the grips of a conservative bias, which leads them to perceive a progressive bias where none actually exists. Unsurprisingly, the conservative critics are unimpressed by the fact that progressive defenders of the Times deny that it has a progressive bias. Indeed, as the conservative critics see it, the progressive defenders' inability to perceive the bias in the paper, and their claim that a conservative bias lies behind the charges of progressive bias, are rooted in (and perhaps, are further evidence of) the defenders' own progressive bias.

Notice that, as the second example illustrates, the process might iterate to ever higher levels. Presumably, if the progressive defenders were asked what, from their point of view, explains why the conservative critics mistakenly think that it's the progressives' (alleged) progressive bias which explains why the progressives think that the conservatives' original charge of bias is rooted in the fact that the conservatives are biased, we would naturally expect the progressives to once again invoke the conservatives' conservative bias (as they see it) in order to explain this further, higher-order fact, and so on.

Why should charges of bias naturally give rise to countercharges of bias in this way? After all, it's not generally or even usually the case that, when a person is accused of being F (where F is some negative characteristic or feature), and they deny the accusation, it's natural for them to reply by claiming that it's actually the accuser who is F, and it's this that accounts for why the accuser incorrectly claims that they are F. For example, a person accused of
cowardice will not generally reply by accusing her accusers of being cowards. Of course, if her accusers are cowards (or might be suspected of cowardice by some relevant audience), then it might be advantageous for the person accused of cowardice to point this out. However, even in that case, the person accused of cowardice will not typically insist that it's the cowardice of her accusers that explains why they accuse her of cowardice. (Although projection is a genuine phenomenon, it is not the default interpretation.) Similarly, someone accused of being greedy won't generally defend himself against the charge by saying that his accusers only think so because they themselves are greedy, and so on. Again, why should things be different in the case of bias? The answer, I think, has to do with the fact that attributions of bias are often fundamentally perspectival, in a sense to be explored. Of course, at one level that answer is unsurprising, even obvious—"Who and what you count as biased depends on where you yourself sit!" But it's worth trying to bring the intuitive idea into full or at least better view. As I'll attempt to show, there are philosophically interesting wrinkles here, subtleties that are illuminated by the norm-theoretic framework.

Consider first a case that doesn't involve human subjects. Imagine two weather-forecasting models, M1 and M2, that consistently differ in their predictions. More specifically, let's suppose that model M1's predictions of the temperature at a given future time are consistently 10 degrees warmer than the predictions made by model M2. If M1 is reliably accurate in its predictions, it follows immediately not only that M2 is unreliable but also that it's a biased model, for in that case M2 departs from the norm of accuracy in a systematic way: it's biased in the direction of lower temperatures. On the other hand, if M2 is reliably accurate, it follows immediately not only that M1 is unreliable but also that it is a biased model, for in that case it's M1 that systematically departs from the norm of accuracy: it's biased in favor of higher temperatures. Which of the two models is biased depends on which of the two is in fact accurate. (Of course, given what has been stipulated about the case, it's possible that neither of the two is accurate and both are biased.)

Now let's replace the weather-forecasting models with human beings, who have not only the capacity to make predictions about the world itself but also the capacity to (i) make judgments about the accuracy of predictions, and to (ii) attribute bias. Setting aside pathological cases in which one is alienated from one's own beliefs, one takes one's current beliefs to be true: if one didn't, one wouldn't continue to hold them but would believe something else instead. So long as one holds a belief, one will think that those who deny that belief, or who hold beliefs that are incompatible with it, are wrong to think as they do. Indeed, so long as one holds a belief, one is rationally committed to thinking that those who disagree with it are in error. This is a familiar point.11 However, although holding a belief commits one to thinking that a person who disagrees with it is in error, one is not thereby committed to thinking that either the person or their belief is biased. (One might think that their having arrived at a mistaken view on this occasion is the result of random error; or one might hold no view one way or the other about whether this is a case involving random error or bias.)
Similarly, even if one finds that one disagrees with another person about multiple issues, one is committed qua believer to thinking that they are wrong about each of those issues, but one isn’t thereby committed to thinking that they are biased about those issues, as
opposed to merely unreliable. However, someone whose beliefs about a topic depart systematically from one's own—say, by consistently assigning a higher value to some estimated quantity—is systematically departing from the truth, as one sees it. From one's own point of view, they are biased. Indeed, in whatever sense believing something rationally commits one to thinking that anyone who disagrees is departing from the truth, one is similarly rationally committed to thinking that anyone who systematically disagrees with one's beliefs about some topic is systematically departing from the truth. One is rationally committed to attributing not only error and unreliability to them but also bias.12

Now for the twist. Suppose that someone accuses you of being biased in your views about some topic. Assuming that the charge is sincere—it isn't simply a malicious smear designed to discredit you or cast doubt on your credibility in the eyes of others—what must be true of them, such that it would appear to them that your views are biased? Answer: from their perspective, your views about the topic depart systematically from the truth. But then, if their views are such that given the truth of their views, your views systematically depart from the truth, it will also be true that given the truth of your views, it's their views that systematically depart from the truth. Given that you endorse as true what you yourself believe, you're committed to thinking not only that they're wrong but also that they're biased. More generally, in certain kinds of systematic disagreements, one is more or less forced to see those on the other side not only as mistaken but also as biased. When partisans see each other as biased, and respond to the other side's allegations of bias by alleging that it's really the other side that's biased (and that their bias is what accounts for the unjust allegations of bias), it is not, I think, merely out of a desire to "go on the offensive" or "fight fire with fire" by accusing the other side of the very sins of which they stand accused. Rather, the phenomenon reflects a deep fact about the nature of bias attributions: their perspectival character. Disagreements about first-order questions—for example, about the merits of various political policies, or about the quality of officiating in some recently concluded athletic contest, or even about what the weather will be like—naturally bleed into disagreements about who is biased and who isn't. Indeed, participants in such disputes are rationally committed to seeing those on the other side as biased, so long as their disagreements are sufficiently systematic. Because we appreciate this on some level, we naturally diagnose sincere but by our lights mistaken attributions of bias as artifacts of the attributor's own, opposite biases.

Notice that, to the extent that the points made in this section are correct, they hold not only for norms of accuracy or norms that apply to belief (as might be suggested by the examples considered so far) but also for other norms as well. Consider, for example, the practical norm of maximizing expected value. Suppose that a particular agent repeatedly acts in a way that has the effect of preserving the status quo; on that basis, you attribute status quo bias to them. Someone (perhaps the agent herself, or else a third party) takes issue with your assessment; according to them, acting in a way that preserved the status quo really was the best thing to do, in at least many of the cases in question.
(If the agent had acted in some
other way, then she would have failed to maximize expected value.) From their perspective, it's natural to see the readiness with which you attribute status quo bias as reflecting an opposite bias—a bias against the status quo, or in favor of gratuitous novelty, or "change for the sake of change."13

If attributions of bias are fundamentally perspectival in this way, does that mean that who counts as biased and who counts as unbiased is relative to a perspective? Is bias, like beauty on some accounts, in the eye of the beholder?14 No! Consider again the rival weather-forecasting models, M1 and M2. So long as we abstract away from questions about which of the two models (if either) is in fact accurate, there is a symmetry between the two: given the accuracy of M1, M2 counts as a biased model; and given the accuracy of M2, M1 counts as a biased model. However, the truth conditions of judgments like "M2 is biased" or "M1 is unbiased" are not determined independently of facts about which of the two models (if either) is accurate and which is inaccurate. So long as one of the two models is accurate, the other is systematically inaccurate and therefore biased. The same holds for human believers. When two people systematically diverge in their judgments about some topic, we should expect each of them to regard the other not only as wrong in her first-order views but also as biased. In these respects, there is a symmetry between the two perspectives. But who is actually correct in thinking that the other party is biased on the basis of their first-order disagreements will generally depend on who is and who isn't getting things right at that level. Correctness breaks the symmetry, even though there is a symmetry between the way things look from each of the two perspectives, and a symmetry between how each of the two parties would describe what's true of the other party. Claims about bias are thus no more of a relative matter than are ordinary, first-order claims about the world itself. No more, but also no less. It's a platitude that, even if what's true is not in any interesting sense a relative matter that might vary from person to person, what it's reasonable to think is true might vary, even radically, from person to person, inasmuch as different people might differ radically in the evidence that they have to go on. Here again, what holds for first-order claims about the world holds also for attributions of bias. You and I might differ radically in who or what we classify as biased or unbiased, and might be fully reasonable in doing so, given sufficiently large differences in our evidence.15

Let's conclude this section by noting an implication of the analysis that might seem quite radical from certain vantage points. If the analysis is correct, then in many cases involving systematic disagreement over first-order questions, we're in effect rationally committed to viewing those who disagree with us not only as mistaken but also as biased, even if we know nothing about how they arrived at their views, or why they currently hold those views. With respect to this point, it's helpful to compare again the kind of (wide-scope) rational commitment that a believer incurs to thinking that those who hold recognizably incompatible views are mistaken.
Imagine a student in a comparative religion class, who sincerely affirms the central doctrines of his own religion, but who cultivates a studied, non-judgmental agnosticism towards the central claims of other religions, including claims that conflict with the doctrines that he affirms.16 Although the student affirms the correctness of his own religion, he refuses to go so far as to judge that religious believers in other traditions who
believe incompatible things are wrong—he regards any such judgment as a rationally optional extra step—and so he instead suspends judgment about such questions. Although the student’s stance might seem to reflect an admirable humility towards others, it also betrays a certain lack of clearheadedness. His overall stance is a rationally unstable one: so long as he continues to believe the doctrines of his own religion are true, he’s bound to think that those who hold incompatible views are mistaken. Moreover, the following is also clear: he’s rationally bound to think that they are mistaken even if he knows nothing about how they arrived at their views, or why they currently hold those views. On the analysis offered here, an analogous point holds when it comes to attributions of bias. Suppose that you and I share our views about politics. It turns out that we disagree about many issues. Moreover, these conflicts have a systematic character: you would describe many of my political judgments as Too Far to the Left, while I would describe many of yours as Too Far to the Right. Even once it’s acknowledged that each of us is committed by our own first-order views to thinking that the other’s views are mistaken, there is some temptation to think that we needn’t go so far as to attribute bias to one another. The additional judgment that the other person is not only mistaken but also biased seems to involve a rationally optional extra step, and one that might in many cases seem unwarranted, or at least unnecessarily uncharitable. The sense that an attribution of bias would be uncharitable might seem especially strong in a case in which one lacks knowledge about relevant aspects of the other’s psychology, for example how they arrived at, or why they currently hold the views that they do. (Imagine that although we’ve shared our political views with one another, we haven’t shared our reasons for holding those views.) One might thus view those on the other side of some cluster of issues as mistaken while prescinding from attributing bias to them. (Perhaps one suspends judgment about the further question of whether they’re biased.) However, if the analysis offered here is correct, then that tempting stance is in fact an unstable position. One who takes the Other to be systematically mistaken while suspending judgment about whether they’re biased is making a mistake that’s analogous to the kind of facile pluralism which insists that, although the claims of one’s own religion are true, one doesn’t go so far as to say that incompatible claims of other religions are false, or that those who believe such claims are wrong.17,18 Consider the case in which you and I systematically disagree about politics, but where neither of us knows why the other believes as they do. In those circumstances, one reason why we might hesitate to attribute bias to the other person, even if we remain confident of our own beliefs, is the following: given that we don’t know why the other person believes as they do, it’s entirely possible that the other person is perfectly reasonable in believing as they do, given their evidence. That is, although we’re rationally committed to thinking that the other is mistaken so long as we continue to hold our original views, we’re not in the same way committed to thinking that the other person or their beliefs are unreasonable. For all we know, the beliefs in question might be perfectly reasonable things for them to think, given the epistemic position that they occupy. 
Of course, inasmuch as our own views commit us to thinking that the conflicting views of the other person are mistaken, we’re similarly committed to thinking that whatever evidence they might have for those views is misleading evidence, but nothing that we believe or are committed to believing is inconsistent with the
possibility that this is a case in which a person reasonably believes on the basis of genuine but misleading evidence. And the realization that one isn’t in a position to rule out the possibility that the other person is perfectly reasonable in believing as they do might seem to be in tension with a judgment to the effect that they’re biased in believing as they do. In fact, I think that this apparent tension is merely apparent: there is no genuine conflict between the judgment that the person and their beliefs are biased, and the judgment that, for all one knows, both the person and their beliefs are perfectly reasonable. In judging that the other person is biased, one is assessing their beliefs in terms of the norm of truth or accuracy in the light of one’s own beliefs about what is true. On the other hand, in recognizing that, for all one knows, those same beliefs might very well be perfectly reasonable given the other person’s evidence, one is invoking a different norm of belief: the norm according to which a person’s beliefs should reflect their evidence. Cases of this kind thus raise questions about the connections between bias and norms when multiple and potentially competing norms are in play. Let’s look at this phenomenon more closely.
4. When Norms Conflict

According to the norm-theoretic account of bias, cases of bias typically involve systematic departures from norms. But in some cases, a person or thing might systematically depart from one norm in virtue of successfully complying with another. Indeed, in some circumstances, the only way of successfully complying with one salient norm might guarantee that one systematically departs from another salient norm. A paradigm of this general phenomenon is when the available evidence about some topic is systematically misleading. When the evidence is systematically misleading, successfully complying with the norm "Believe in accordance with the evidence" will lead us to systematically depart from the norm of truth or accuracy. (Although in that case we won't know or be in a position to recognize that we're departing from the truth norm.) What are the implications of this general phenomenon for the topic of bias? Consider:

MISLEADING EVIDENCE. I don't know Frank, but I know that you know him very well, and I have no reason to distrust you as a source of information. Unfortunately, however, my trust in you is misplaced, for you regularly pass along seemingly credible but false or misleading information about Frank that paints him in an extremely negative light. (In fact, Frank is a perfectly unobjectionable person.) I respond rationally to this evidence by forming strongly negative opinions about him. These fully rational but overly negative opinions about Frank influence my behavior towards him, in predictable ways. For example, when we consider who we should invite to our parties, I always quickly eliminate Frank from consideration in favor of others, and I go out of my way to avoid being in his presence more generally. My anti-Frank behavior would be fully warranted and appropriate if what I reasonably but mistakenly believe about him was actually true. Indeed, given what I reasonably but mistakenly take myself to know about Frank, it would be irrational on my part to act in any other way.
In these circumstances, it would be correct to describe me as biased against Frank.19 Many of my opinions about him are false; moreover, inasmuch as these false opinions are inaccurate in virtue of being overly negative, they depart from the norm of truth or accuracy in a
systematic way. On the other hand, my biased opinions about Frank perfectly reflect my evidence: the fact that I'm biased against him is due to the fact that I've impeccably followed the norm of believing in accordance with the evidence. Moreover, I exhibit an anti-Frank bias not only in my opinions but also in my overt behavior. Given that Frank is actually a good person, but I behave towards him as though he isn't, I systematically depart from practical norms of appropriate behavior in my interactions with him. However, I depart from these norms in virtue of complying with other practical norms, norms which concern how it's rational for me to act given what I (rationally) believe. Thus, at the level of action as well as belief, I systematically depart from more objective norms (e.g. "believe what is true") by successfully following other genuine norms that are relativized to what I believe and what it's rational for me to believe given my information. As is so often the case when it comes to bias, my bias is rooted in ignorance. But in this case, it's fully rational ignorance. At the level of both thought and action, rationality sometimes requires bias, in the pejorative sense.

As that suggests, just as there are cases in which a believer or agent ends up biased because they behave rationally, as in MISLEADING EVIDENCE, so too there are cases in which a person escapes a charge of bias that would otherwise apply to them only because they depart from norms of rationality. Consider, for example:

RACE SCIENCE. Claire is a non-scientist living during the heyday of 19th-century race science. She's strongly inclined to believe that there is no 'racial hierarchy' with respect to intelligence, but she also thinks that this isn't a matter that can be decided a priori. Because she lives in a segregated environment, she has no real contact with members of other races. All of the scientific authorities of whom she's aware—for example, eminent scientists at Harvard University—claim that there is such a hierarchy, with whites at the top and blacks at the bottom, and that this is what the best available scientific evidence supports.20 Claire isn't in a position to examine this putative evidence herself; what she has to go on is the testimony of the scientific experts. Indeed, we can imagine that her epistemic position with respect to the racial hierarchy thesis is much like the epistemic position of many contemporary non-scientists who aren't in a position to evaluate the putative evidence in favor of anthropogenic global warming for themselves, and for whom the rational response is thus to defer to scientific authorities who have studied the question.21 Claire judges, correctly, that on balance her evidence favors the racial hierarchy thesis, and that it would be more rational for her to believe that thesis than to either disbelieve or remain neutral about it. Despite this, however, her egalitarian impulses win out, and she refrains from believing the thesis. If she had followed her evidence, she would have believed not only the racial hierarchy thesis but also a host of closely related claims about race differences that follow from it. Given that both the racial hierarchy thesis and the claims that follow from it are false, if she had held these racial beliefs, she would have systematically departed from the norm of truth in a way that's characteristic of a biased believer.
The fact that she isn’t biased in this way is due to the fact that she departs from the norm of believing in accordance with the evidence. Moreover, her departure from this norm is itself systematic, inasmuch as her deviations from it follow a consistent pattern in the direction of racial egalitarianism.
What should we say about Claire, with respect to the question of whether she’s biased? There is an obvious respect in which Claire is unbiased, or at least, less biased than she might have been: she refrains from holding biased beliefs about race. Does she nevertheless also qualify as a biased believer, in virtue of systematically departing from the relevant epistemic norm? More generally, in cases where a person systematically departs from one norm in virtue of complying with another, what determines whether they count as biased or unbiased? If bias typically involves systematically departing from norms, what happens when the norms conflict, or pull in opposite directions? Let’s survey a number of possible approaches to such questions. On a liberal approach, a
person counts as biased so long as there is at least one salient norm from which they systematically depart, even if doing so is the inevitable cost of compliance with some other genuine norm. According to this view, in the example above Claire counts as biased because she systematically departs from the norm of believing in accordance with the evidence in the direction of racial egalitarianism; in effect, she’s biased by her commitment to racial egalitarianism. On the other hand, if Claire had abandoned her racial egalitarianism in order to follow her evidence, she would also count as biased, for in that case she would hold various false racial beliefs (e.g. belief in the racial hierarchy thesis) that would systematically depart from the norm of truth in the direction of racial inegalitarianism. Because it regards systematically departing from even one salient norm as a sufficient condition for being biased, this view makes it relatively easy to count as biased.

Indeed, one might think that this liberal approach entails the existence of what we might call “bias dilemmas,” situations that are analogous to the contested possibility of moral dilemmas. A genuine moral dilemma would be a case in which, no matter what an agent does, they act wrongly.22 In a parallel way, a bias dilemma would be one in which no matter what an agent or believer does, they won’t escape the charge of bias: given their circumstances, there is simply no way to avoid acting (or believing) in a biased manner. This might seem to be Claire’s position in RACE SCIENCE, according to the liberal approach currently under consideration.

However, this isn’t quite right. For even if one is in circumstances in which one can successfully comply with a norm N only by systematically departing from norm N*, and one can successfully comply with N* only by systematically departing from N, there is another possibility: one might depart from both norms, albeit unsystematically. In terms of the RACE SCIENCE case: although on the view in question Claire will count as biased if her commitment to racial egalitarianism causes her to systematically depart from her evidence, and she will likewise count as biased if she systematically departs from her racial egalitarianism and follows her evidence, she can escape the charge of bias by holding various views about race, some of which are false and some of which are irrational, so long as the various false and irrational beliefs that she harbors about the topic are sufficiently scattershot, and don’t lean too much either in the direction of racial egalitarianism or in the direction of racial inegalitarianism. A small consolation!

Without yet passing judgment on this liberal approach, let’s consider some alternatives to it. On a relativist approach, it’s a mistake to think that, in cases in which different norms pull in different directions, there is some fact of the matter or interesting question about whether the person counts as biased or unbiased tout court. Rather, the same person might count as both biased and unbiased, even with respect to the same issues at the same time, depending on which norm or standard they’re judged by. According to this view, in determining whether an agent counts as biased, we need to cut things up more finely than the first approach allows, in a way that’s akin to disambiguation.
Thus, the believer who systematically departs from Norm N in virtue of consistently complying with Norm N* is unbiased in one way (relative to Norm N*, and from perspectives in which that standard is relevant) and biased in another way (relative to Norm N, and from perspectives in which it’s the relevant standard)—and, the relativist adds, there is simply nothing interesting to be said
about whether the believer counts as biased or unbiased simpliciter. In general, relativist views are naturally understood as non-hierarchical: things are relative to some parameter, which varies its value, and no value is privileged over the others (compare Nozick 2001:17–19). The present relativist view holds that no norm is privileged over the others in determining whether something counts as biased. This non-hierarchical aspect of the relativist view is shared by the first, liberal view, according to which systematically departing from any salient norm is enough for a person or thing to count as biased.

A third approach rejects this non-hierarchical aspect of the first two. A theorist who pursues this approach attempts to identify some norm or norms that have priority over the others (or some ordering among norms) in determining whether something counts as biased. According to this picture, a norm N might have priority over another norm N* in the following way: if the norms pull in opposite directions (such that one can only comply with N by systematically departing from N*, and vice versa), then if one systematically departs from N in order to comply with N*, one counts as biased; but it’s not the case that one counts as biased if one systematically departs from N* in order to comply with N. Call this approach the Priority View.

Why might some norms have priority over others in this way? A natural thought, and one worth exploring, is this: perhaps more important norms have priority over less important norms. Even when issues about bias are not in view, it’s generally agreed that some norms are more important than others. For example, it’s generally agreed (at least by those who are prepared to recognize moral norms at all) that moral norms are more important than genuine norms of politeness. Indeed, some hold that genuine moral norms are always of overriding importance; moral norms always have “the last word” over considerations of other kinds. Given the plausible thesis that some norms are more important than others, an obvious possibility is that, when it comes to the determination of bias, more important norms have priority over less important norms.

A related but different suggestion, also worthy of consideration, is that more fundamental norms have priority over less fundamental norms. Consider the relationship between the epistemic norm “believe in accordance with the evidence” and the norm of believing the truth. Epistemologists often claim that the importance of following one’s evidence (or more generally, following epistemic norms) is a derivative matter, something that derives from the importance or value of believing the truth. What ultimately matters is that we believe what’s true, or that we have true rather than false beliefs. But because we’re not always in a position to achieve this goal directly, we pursue it indirectly, by way of pursuing the ostensibly easier-to-achieve goal of believing what our evidence supports. Believing in accordance with our evidence thus often serves as an effective (albeit fallible) means to the relevant end. Against this background picture, cases in which the available evidence is systematically misleading are in effect pathological cases, in which what is usually our most effective means to our ultimate goal is not only ineffective but positively counterproductive. This common picture might seem to provide a basis for privileging the truth norm over the norm of believing in accordance with the evidence.
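Put schematically (this is merely shorthand for the proposals just described, not an addition to them): where norms N and N* pull in opposite directions, say that N has priority over N* in the determination of bias just in case systematically departing from N* in order to comply with N does not make one biased, while systematically departing from N in order to comply with N* does. The two candidate orderings can then be displayed as:

\[
N \succ N^* \text{ whenever } N \text{ is more important than } N^*; \qquad N \succ N^* \text{ whenever } N \text{ is more fundamental than } N^*.
\]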
Interestingly, however, these suggestions turn out to be false: whether a person counts as biased does not track the importance or the fundamentality of the norm from which they
depart. Notice first that there are cases in which we would unhesitatingly describe an agent as biased, even though their departure from a norm is motivated by their need to comply with another, comparatively fundamental norm that’s of overriding importance. This seems to be especially true in cases in which an agent is acting in a certain social role, and that social role is closely connected with norms of objectivity or freedom from bias. For example, consider the following case:

SELF-INTERESTED REFEREE: A referee is officiating a basketball game. Unfortunately, at some point in the past, as a result of a series of bad gambles that he now deeply regrets, the referee has become deeply indebted to the mob and owes them an amount of money that he has no realistic hope of repaying. The mob has credibly threatened to kill him after the game, unless he sees to it that Team A defeats Team B. (The offer of the conditional reprieve is also credible.) Because of his awareness of these facts, the referee consistently favors Team A over Team B in his calls throughout the game.
Given that his life is at stake, the referee has extremely strong self-interested reasons to make calls favorable to Team A. Indeed, it’s natural to understand the case in such a way that if he failed to do this, he would be positively irrational. Nevertheless, the referee counts as biased. Indeed, it would perhaps be difficult to come up with a more paradigmatic case of a biased agent than a mob-indebted referee whose calls favor one team over the other because he knows that this is what the mob wants him to do. Even if it’s more important for him to act so as to save his life, the systematic way in which he departs from the correct calls ensures that he counts as biased in a context in which those actions are under discussion.23

Is our readiness to describe the referee as biased due to the fact that it’s a concern for his own self-interest that causes him to depart from the correct calls in the way that he does? Is it relevant that the referee is responsible for his current predicament because of his own past mistakes? (Perhaps we’re reluctant to let the referee escape the charge of bias given that he would not now have such strong prudential reasons to favor one of the two teams if not for his own past bad decisions.) No, for we still attribute bias to the referee even when we change the case so that he’s made no past mistakes, and when it’s the overriding demands of impersonal morality as opposed to self-interest that lead him to favor one of the two teams. Consider, for example:

MORAL REFEREE: A referee is officiating a basketball game. Although his own well-being is not at stake, the mob has credibly informed him that they will kill a large number of innocent people that they’ve taken hostage unless he sees to it that Team A defeats Team B. Because of his awareness of these facts, the referee consistently favors Team A over Team B in his calls throughout the game; indeed, he makes the same calls as his counterpart in SELF-INTERESTED REFEREE.
Let’s assume that, on any plausible theory of morality, the case can be described so that the referee’s moral reasons to ensure that Team A triumphs outweigh whatever reasons he has to call the game in the manner of an impartial official. Even so, here too it seems clearly right to describe the official as biased in favor of Team A. (Imagine that moments before the game begins, you and I are in Las Vegas contemplating a bet on its outcome. We know all of the relevant facts related in MORAL REFEREE, although unfortunately we’re not in a position to help the hostages, or to otherwise interfere with what’s taking place. In deliberating about which team to bet on, we review all of the considerations that favor Team A, and all of the
considerations that favor Team B. In this context, we ask: are any of the referees for the game biased in favor of either team? Here the answer is clearly yes: the moral referee is biased in favor of Team A.) It’s also worth considering the following variant case:

IMMORAL REFEREE: As in the case of MORAL REFEREE, a referee has been credibly informed by the mob that they will kill a large number of innocent hostages unless he sees to it that Team A defeats Team B. However, the referee’s commitment to being an unbiased official is such that he doesn’t allow this knowledge to influence his officiating in any way: indeed, he calls the game with scrupulous impartiality. Because of this, a number of close ‘judgment calls’ towards the end of the game go in favor of Team B, which triumphs by a single point in the final seconds. (Let’s suppose that, notwithstanding their closeness, all of these calls not only correspond to what the referee saw but were also objectively correct given what actually took place on the court plus the rules of basketball.) The mob follows through on its threat and executes the hostages in a particularly gruesome way.
I assume that, given the circumstances, when the referee refuses to favor Team A over Team B in the close calls towards the end of the game, he systematically departs from what the operative moral norms require of him. Moreover, I assume that following the operative moral norms is of overriding importance, and in particular that (at least in this case) morality trumps the norms of objectivity with which the referee complies. Nevertheless, notwithstanding his systematic departure from norms of overriding importance, it doesn’t seem correct to describe the referee as biased as opposed to unbiased. (If, after he retires, he perversely boasts to his grandchildren—“Although people were always after me to favor one team or another for various reasons, I never once was anything other than an unbiased official”—this game will not serve as a counterexample to his boast.) Indeed, the referee’s lack of bias is a central explanatory factor, along with the wickedness of the mob, in explaining why the moral catastrophe ensued.

One lesson of MORAL REFEREE and IMMORAL REFEREE, then, is this. Just as one might be required by either theoretical or practical rationality to be biased in the pejorative sense (as in the cases of MISLEADING EVIDENCE and SELF-INTERESTED REFEREE, respectively), so too one might be required by the demands of impersonal morality to be biased. We have, then, the following two principles:

✓ RATIONALITY SOMETIMES REQUIRES BIAS. In some circumstances, one can only be rational if one is biased, in the pejorative sense.

✓ MORALITY SOMETIMES REQUIRES BIAS. In some circumstances, one is morally required to be biased, in the pejorative sense.
According to the priority view, some norms have priority over others in determining whether an agent counts as biased. On one version of that view, facts about which norms have priority track facts about the relative importance of the norms in question, or which norms are overriding. On another version of the view, facts about which norms have priority in the determination of bias track facts about which norms are more fundamental. In the light of the considerations adduced here, neither of these ways of spelling out the priority view seems promising. Of course, this leaves open the possibility that some alternative version of the priority view is more defensible. What should we say about the choice between the liberal, relativist, and priority views? In
fact, I believe that there is some truth in the neighborhood of each. In order to appreciate the truth in the neighborhood of the liberal view, consider again the case of Claire. Given the evidence available to her, there are two relevant (non-exhaustive) possibilities to consider: (i) in the first case, she allows her beliefs to be determined by the evidence, and so comes to believe the racial hierarchy thesis and a host of related, false claims about black racial inferiority that depart from the norm of truth in the direction of racial inegalitarianism; (ii) in the second case, she retains her egalitarian convictions by departing from her evidence in a way that systematically favors those convictions. Of each case, we can ask: Is Claire biased? In context, that question seems to be more or less equivalent to the following questions: Does Claire have a bias? Do her views reflect some bias? In either case (i) or (ii), it seems like the answer to these questions is Yes.

In case (i), Claire is biased (and it’s true to say that her views about race reflect a bias) inasmuch as she holds false and racially biased views about the relative superiority of whites to blacks. It’s true that those are reasonable things for her to think, given her impoverished epistemic situation. But that doesn’t mean that she’s free of racial bias, any more than the rationality of my overly negative attitudes and hostile actions towards Frank means that I’m not biased against him, in the original MISLEADING EVIDENCE case.

However, an affirmative answer to the same questions also seems to be in order in case (ii). Although in that case Claire is free of racial bias, the fact that she systematically deviates from the evidence in the way that she does makes her susceptible to the charge that she’s biased in a different way. For example, consider a context in which what’s up for discussion is who among us best exemplifies and lives up to the ancient Socratic ideal of “following the argument wherever it leads.”24 Here it seems that it would be true to say: “For better or for worse, you shouldn’t expect Claire to follow the argument wherever it leads when doing so conflicts with her egalitarian commitments; she’s biased by those commitments.” Notice that, as predicted by the liberal view, given the systematically misleading character of Claire’s evidence, and the resulting fact that the truth norm and the evidence norm pull in different directions, the only case in which Claire completely avoids the charge of bias is one in which she’s unsystematically unreliable with respect to both norms. More generally, the liberal view correctly captures the idea that, when a systematic departure from a norm would otherwise amount to a bias, the mere fact that departing in that way is an inevitable consequence of complying with some other genuine norm doesn’t make the original charge of bias inapplicable or inapposite.

Next, here is what I take to be the truth in the neighborhood of the relativist view. Sometimes, we evaluate whether a person counts as biased from a certain perspective, a perspective that’s defined by a particular set of norms, and so whether they count as biased depends on whether they systematically depart from that set of norms as opposed to any other; but we could equally well take up an alternative perspective defined by a different set
of norms, in which case the correct thing to say about whether the person counts as biased or unbiased might very well reverse. Indeed, sometimes we in effect make the relevant parameter or perspective explicit, as when we ask, not whether some individual is unbiased tout court, but whether they are an unbiased judge or juror, or an unbiased referee, or an unbiased witness. As noted in Chapter 1, no one who has already made up their mind prior to the start of a criminal trial that the accused really did do what he’s accused of doing qualifies as an unbiased juror. If that’s right, then it’s a constitutive norm of being an unbiased juror that one hasn’t already made up one’s mind about such things prior to the start of the trial. However, there is no such norm when it comes to being an unbiased witness: someone who genuinely saw the accused commit the crime might be thoroughly convinced that he did what he’s accused of doing prior to the start of the trial, but that doesn’t disqualify her from being an unbiased witness. (Indeed, it does not even count against her being an unbiased witness.) When we ask whether such a person is unbiased, we might be asking whether she is or would make an unbiased juror, in which case the correct answer to our question is “No.” On the other hand, we might be asking whether she is or would make an unbiased witness, in which case the answer might very well be “Yes.” In a context in which both potential disambiguations are equally salient, the unrelativized question, “Yes, but is she biased?” has no answer. This is the truth in relativism.

Yet the same sorts of considerations suggest that there is also a truth in the neighborhood of the priority view. Although there are cases in which (as the liberal view suggests) systematically departing from any contextually salient norm is enough to count as biased, and there are cases in which (as the relativist view suggests) whether someone counts as biased is relative to the perspective from which they’re viewed or evaluated, there are also contexts in which some norms have a kind of de facto privileged status in determining whether a person counts as biased. A paradigm of the last possibility is when an agent is acting in some official capacity, and the social role in question is understood to be closely connected to certain norms of objectivity, as in the case of a judge presiding over a trial, or a referee who is officiating some competition. As the cases of MORAL REFEREE and IMMORAL REFEREE suggest, in such cases the relevant norms can have priority in determining whether an agent counts as biased or unbiased even over genuine norms that are overriding in other, more important respects, such as moral norms that enjoin us to act so as to prevent the deaths of innocent people. We will explore the general category of norms of objectivity in detail in Chapter 6.

In general, a good theory of some phenomenon will be both explanatorily powerful and fruitful: it will offer compelling explanations of striking facts in the relevant domain, and it will also have theoretically interesting and provocative further implications. In this chapter, I began to make the case that the norm-theoretic account of bias possesses both of these theoretical virtues. With respect to explanatory power, the norm-theoretic account offers a compelling explanation of the striking fact that charges of bias frequently inspire countercharges of bias.
Moreover, the account has theoretically interesting and provocative further implications, as witnessed by the fact that it suggests that both morality and rationality sometimes require bias. In the next chapter, I want to continue to build the case that the account possesses these explanatory virtues to a high degree by exploring some of its
further applications and implications. With respect to explanatory power, I’ll argue that this general way of thinking about bias offers particularly good explanations of the “bias blind spot” and a number of related psychological phenomena. With respect to its fruitfulness, I’ll show that it has provocative further implications for introspection as a way of attempting to detect bias, as well as for the apparent inevitability of the bias blind spot.
1 Classic discussions of status quo bias include Kahneman, Knetsch, and Thaler (1991) and Samuelson and Zeckhauser (1988). Nebel (2015) is a recent philosophical treatment.
2 The classic presentation of the case and the diagnosis is Tversky and Kahneman (1983).
3 For a book-length philosophical exploration of norms in this sense, see Brennan et al. (2013).
4 I borrow the example from Uttich and Lombrozo (2010).
5 The fact that there can be substantive disagreement about what the genuine norms are is among the issues that I take up in the next section.
6 For similar reasons, the norm-theoretic account of bias is perfectly compatible with “reasons first” views (e.g. Scanlon 1998) on which the fundamental normative notion is that of a normative reason as opposed to some general normative principle or principles. Thanks to Ernest Sosa for raising these questions about whether the norm-theoretic framework is compatible with both particularism and the “reasons first” program. On particularism, see especially Dancy (2017); on the reasons-first program, see especially Schroeder (2021).
7 On the distinction between pejorative and non-pejorative uses of “bias,” see Chapter 1, §1 and Chapter 7, §5. What, if anything, should a proponent of the norm-theoretic account say about non-pejorative uses of bias? I discuss this issue in Chapter 7.
8 Notice that if our dispute plays out in this way, then it’s natural to construe your response to my charge of bias as involving a countercharge of bias: in presenting only cases that favor my view, I’m guilty of inviting others to draw a conclusion from a biased sample of the evidence. In the next section, I explore why charges of bias so frequently prompt charges of bias in response.
9 Alternatively, you might respond to the accusation of speciesism not by denying the charge of bias but rather by owning it. Consider, for example, the following possible reply: “Yes, I’m biased in favor of human beings. I’m much more concerned about their suffering than about the suffering of non-human animals. But we should be biased in favor of our fellow human beings!” For a discussion of such cases and how the norm-theoretic framework accommodates them, see Chapter 7, §5.
10 For an example of the former, see Scheider and Gould (2016); for an example of the latter, see Sowell (2011).
11 “Neutrality is not an option for believers. One is bound to think any given belief of one’s own superior in truth-value to the contrary beliefs of others” (Williamson 2007:247). We should note that the rational commitment that applies to a believer is best understood in terms of a wide-scope requirement of rationality, as opposed to a narrow-scope requirement. That is, what’s rationally required of the person is that they not have a certain combination of attitudes: that they not believe p while suspending judgment about the truth-values of the contrary beliefs of others.
12 Again, one’s rational commitment in such circumstances should be understood in terms of a wide-scope norm of rationality as opposed to a narrow-scope requirement. What’s rationally disallowed in such cases is a certain combination of attitudes: (i) continuing to believe as one does, while (ii) recognizing that the judgments of the other party systematically diverge from one’s own, but (iii) either denying or suspending judgment about whether the other party is biased.
13 Compare the online discussion, among corporate culture types, of the putative norm “Don’t Change for the Sake of Change” (e.g. https://medium.com/@mikef.design/dont-change-for-the-sake-of-change-d03f73edba9e).
14 As the epigraph for this section indicates, this was apparently Samuel Johnson’s view.
15 Of course, things grow potentially more complicated once we learn about these disagreements. In keeping with the general trajectory of the immediately preceding remarks, one might think that, once again, there is nothing different here: whatever the correct epistemological story is about how we should respond to known disagreement when the disagreement concerns first-order claims about the world, that story can simply be applied straightaway to known disagreements about who is biased and who isn’t. While I think that this is correct as far as it goes, I also think that judgments about bias often play a special role in the epistemology of disagreement, for reasons that I’ll explore in Chapter 10.
16 Compare the stance of many of Richard Feldman’s undergraduate students, as reported in Feldman (2007).
17 Again, and in parallel to the cases discussed above, the rational requirement that applies in such cases should be understood as a wide-scope requirement: what’s rationally required is that one not hold a certain combination of attitudes, as opposed to that one believe that the Other is biased. In a given case, it might be that what one should do is to continue to suspend judgment about whether the Other is biased while relinquishing the belief that their views diverge systematically from the truth.
18 In this section, I’ve argued that just as disagreement requires us to think that the other party is mistaken, sufficiently systematic disagreement requires us to think that the other party is biased. However, it’s also possible to overstate the parallel between the two cases, in the following way. Given that you and I disagree about some issue, you are committed to thinking that I’m mistaken as a matter of logic alone: your belief logically entails that my incompatible belief is false. However, the rational requirement to think of another person as biased given certain patterns of disagreement will generally not be a matter of logic alone. Suppose that you and I repeatedly disagree about politics, and in each case my belief is Too Far to the Right given the truth of your belief about the issue in question. Here, logic alone does not require you to think of me as biased, for it’s logically consistent for you to view these not as systematic errors on my part but rather as a series of random mistakes that just happened to accidentally cluster in a certain way. Suppose, for example, that despite this past history, you think that the next time I make a mistake about politics, I am no more likely to miss to the right than to miss to the left. Compare: even if a coin repeatedly lands heads, logic alone will never entail that the coin is biased, or that it is disposed to land heads as opposed to tails. Similarly, even if I repeatedly miss to the right, logic alone will never entail that I am disposed to miss to the right, or that I’m biased in that way. But one can be rationally required to conclude things that are not dictated by logic alone. The rational pressure on you to conclude that I’m biased in the light of our past disagreements is like the rational pressure to conclude that a coin that repeatedly lands heads is biased in favor of heads. (On biases of people as dispositions, see especially Chapter 5, §1.)
19 Similarly, it would be correct to describe you as having biased me against Frank.
20 On this point, see especially Menand, “Morton, Agassiz, and the Origins of Scientific Racism in the United States.” Compare Appiah’s (1990:17) assessment: “In mid-nineteenth-century America…the pervasiveness of the institutional support for the prevailing system of racist belief—the fact that it was reinforced by religion and state and defended by people in the universities and colleges, who had the greatest cognitive authority—meant that it would have been appropriate to insist on a substantial body of evidence and argument before giving up assent to racist propositions.”
21 Here I take some liberties with relevant historical facts, for purposes of the hypothetical example. For recent philosophical discussion of the status of racist or prejudiced beliefs when those beliefs are held in social environments that are in some respects epistemically favorable to them, see especially Gendler (2011), Siegel (2017: especially Ch. 10), and Begby (2021). Some moral encroachment theorists might deny that the RACE SCIENCE case is coherent on the grounds that, given the potential moral costs of holding the racist beliefs, Claire’s evidence, although substantial, is insufficient to warrant holding those beliefs. If the view in question entails that the relevant beliefs wouldn’t be warranted regardless of how strong Claire’s misleading evidence is stipulated to be, then I think that it’s too strong to be defensible (Kelly, in preparation). However, even if some such view is correct, the general issue about the relationship between bias and conflicting norms with which I’m concerned here would still arise. For the alleged phenomenon of moral encroachment exclusively concerns the interaction of moral and epistemic factors, and that’s a special case of the more general phenomenon which gives rise to the issue of concern here. In short, those sympathetic to strong forms of moral encroachment might consider a different example. Prominent moral encroachment theorists (not all of whom would accept the strong form of the view mentioned here) include Basu (2019, 2020, 2021), Bolinger (2018), Fritz (2017), Moss (2018), Pace (2011), and Schroeder (2018). For criticism, see especially Begby (2018), Gardiner (2018), and Leary (2021). Bolinger (2020) is a useful overview.
22 For defenses of this possibility, see (e.g.) Marcus (1980), Williams (1985), and Tessman (2015, 2017). Critics include Conee (1982), Zimmerman (1996), and Sayre-McCord. For an overview of the debate, see especially McConnell’s (2018) survey.
23 Notice that in the case of MISLEADING EVIDENCE, it’s theoretical rationality which requires the protagonist to be biased, while in the case of SELF-INTERESTED REFEREE, it’s practical rationality which does so.
24 For a detailed exploration of this ideal, see Chapter 6, §3.
4 The Bias Blind Spot and the Biases of Introspection

Generally speaking, we tend to see ourselves and our judgments as more objective and less biased than other people and their judgments. This is the bias blind spot, which has been extensively documented and discussed by social psychologists.1 In this chapter, I want to show how the general ideas about bias and bias attributions introduced in the last chapter illuminate the bias blind spot and a number of closely related psychological phenomena. Building on those ideas, I’ll offer an explanation for the bias blind spot that supplements and improves upon the kinds of explanations that are popular among the psychologists who have studied it. On my account, the standard explanations fail to do justice to the perspectival character of bias attributions, or the ways in which a person’s judgments about who or what is biased about a topic are rationally influenced or constrained by their own first-order views about that topic. The account also has a number of surprising implications. For example, consider the notorious and well-documented fact that human beings are highly unreliable when it comes to detecting their own biases through introspection. I’ll argue that this isn’t a contingent fact, or something that depends on the finer details of human psychology. Rather, it holds as a matter of necessity. In more picturesque terms: even God could not have made us creatures who reliably detect our own biases through introspection.
1. The Introspection Illusion as a Source of the Bias Blind Spot

What accounts for the fact that people tend to see themselves as less biased than their peers?2 Among psychologists who have studied the bias blind spot, it’s generally agreed that the following mechanism plays an important role in producing it. It turns out that, when we consider whether other people are biased, we frequently reason in a way that’s fundamentally different from the way we think about the possibility that we ourselves are biased. When we consider the possibility that someone else is biased, we often rely on our lay theories about the circumstances in which people tend to be biased, and we apply those theories to the case at hand. For example, perhaps I think that, when someone’s financial interests are at stake, this can distort their judgment. I then notice that your financial interests are at stake, and that you believe in a way that aligns with those interests; on that basis, I conclude that you are (or might very well be) biased. However, when I consider whether I’m biased about some issue,
I’ll typically forget all about my general theories about the circumstances in which people tend to be biased (even when those theories would be relevant and applicable) and rely instead on introspection. I consider the question of whether I’m biased; in response, I introspect and conclude that I’m not. More generally, people rarely conclude that they themselves or their beliefs are biased on the basis of introspection.3 The net effect of this asymmetry is that we end up thinking that we’re less biased than other people.4 The idea that this mechanism plays at least an important role in generating the bias blind spot is endorsed by more or less every psychologist who discusses the bias blind spot.5

To be clear, I don’t deny that we do sometimes reason in this way and that our doing so contributes to the bias blind spot. Rather, at this point I want to offer three observations about this standard story involving introspection.

Observation #1: Even if proceeding in this way works terribly in the case of bias, there is nothing inherently objectionable about relying on different methods in the first-person case and the third-person case when it comes to attributing a given property to oneself and to others, or even about ignoring relevant evidence in order to rely on introspection in one’s own case.
In these respects, it’s worth comparing attributions of bias to attributions of sadness. When I make a judgment about whether you’re sad, I rely on certain behavioral evidence, including your linguistic behavior. On the other hand, when I make judgments about whether I’m sad, I generally don’t rely on (or even pay much attention to) relevant behavioral evidence of this kind, even when it’s readily available to me. Rather, I introspect. (Typically, when I know that I’m sad, I know this on the basis of introspection.) And this way of proceeding seems perfectly defensible. By contrast, it does not seem similarly defensible to rely on introspection in judging whether I’m biased while relying on circumstantial evidence when I attribute bias to you. At a minimum, then, a sufficiently deep explanation of the bias blind spot within this basic framework should say something about why this general type of procedure—rely on introspection in one’s own case, while relying on circumstantial evidence when it comes to other people—works so poorly in the case of bias, given that it seems to work well enough when it comes to things like sadness.

Perhaps the most obvious thought that one might have about this difference is the following: while introspection is a generally reliable (albeit fallible) way of determining whether one is sad, it’s not a generally reliable way of determining whether one is biased. However, this thought leads to a second observation:

Observation #2: Insofar as introspection contributes to the bias blind spot, the important point is not so much that it’s an unreliable method for detecting bias, but rather that it’s a biased method for detecting bias.
Crucially, the kind of unreliability that introspection displays in this context is not a propensity to commit random errors, but rather errors that are patterned, predictable, and systematic. More specifically, the unreliability of introspection as a means of detecting bias is a matter of its tendency to systematically depart from the truth by generating “Type 2” errors—that is, false negatives—as opposed to “Type 1” errors, or false positives.6 On the one hand, introspection rarely leads people to mistakenly conclude that they’re biased even though they’re actually unbiased. On the other hand, introspection often leads people to make the opposite mistake. That is, it often leads people who are in fact biased to conclude, mistakenly, that they’re not. Thus, as a way of arriving at beliefs about whether one is biased, introspection is itself a heavily biased method: it’s strongly biased in favor of false negatives as opposed to false positives. The fact that introspection contributes to the bias blind spot—we’re less likely to see ourselves as biased than other people—derives from this more fundamental bias that it exhibits even before other people are on the scene.

Indeed, although theorists often emphasize the unreliability of introspection in this and other contexts, I believe that when it comes to understanding the kinds of asymmetries involved in the bias blind spot, the characteristic biases of introspection are more important than its unreliability. After all, suppose that, contrary to fact, introspection were an unreliable but unbiased way of attempting to determine whether one is biased: that is, suppose it were just as likely to mislead us by making us think, incorrectly, that we’re biased even when we’re actually unbiased, as to make us incorrectly think that we’re unbiased even when we’re actually biased. Or suppose, even more radically, that introspection had the opposite bias, and that it was more likely to generate “false positives” than “false negatives.” In either of those scenarios, introspection would still be an unreliable way of judging whether we’re biased (we would arrive at many false beliefs by relying on it), but it would not encourage us to think that we’re less biased than other people. To the extent that introspection contributes to the bias blind spot, the central fact is not its unreliability but the fact that it’s biased in favor of making certain mistakes as opposed to others. Thus, insofar as introspection contributes to the bias blind spot, a sufficiently deep explanation of the bias blind spot will explain this characteristic bias of introspection, something that is not itself explained by the standard story sketched above.

Finally:

Observation #3: Even if the standard story about how introspection contributes to the bias blind spot is true as far as it goes, it can’t be the whole story about the bias blind spot, for there are important facts in this area that it’s unable to explain.
For example, here is another empirically well-documented fact that I take to be quite significant in this context:

AN ADDITIONAL FACT: We tend to see people with whom we disagree as more biased than people with whom we agree.7
While this fact is perhaps not particularly surprising, notice that this asymmetry isn’t something that will be explained by the standard story about introspection, at least in any straightforward way. After all, when it comes to other people, I’m no more in a position to introspect how those who agree with me arrive at their views than I’m in a position to introspect how those who disagree with me arrive at their views. As far as introspection is concerned, it seems like the important distinction is between me and everyone else (regardless of whether they agree with me or not). Thus, while the standard story can account for why we’re more likely to attribute bias to other people than to ourselves, it doesn’t
explain why we’re more likely to attribute bias to other people depending on whether their beliefs match our own. Why is this significant in the present context? I think that we should expect that whatever ultimately explains the bias blind spot also explains (or at least, sheds light upon) this additional asymmetry. For it’s implausible that these are two independent things that both turned out to be true of us: (i) on the one hand, we tend to think that we’re less biased than other people; and (ii) on the other hand, we tend to think that people who agree with us are less biased than people who disagree with us. (It’s not, after all, as though some other possible combination was just as likely as this one.)

Here is one route to that conclusion. As soon as I hold views about a given topic—for example, politics—we can divide up the world into people who agree with me and people who don’t. (Here I use “agree” in a weak sense, according to which the people who agree with you are just those whose beliefs match your own, regardless of whether there is any awareness of this.) Now, among the group of people who hold the same views that I do about the topic, one member of that class is me. Indeed, in this context I’m a very special case, for there is no one else with whom my agreement is as complete and thorough as it is with my current self. If, then, a theorist were to propose giving one explanation for why I tend to think that I’m less biased than other people and a completely distinct, non-overlapping explanation for why I tend to think that people who agree with me are less biased than people who disagree, we should, I submit, find that an unattractive suggestion.8
2. Why We’re More Likely to See People as Biased When They Disagree with Us

Consider again the ADDITIONAL FACT listed above. What accounts for why we’re more likely to attribute bias to those with whom we disagree? (Is it simply that we find people who disagree with us more disagreeable [!] and so we’re quicker to attribute other vices and unattractive features to them, including bias?)

Recall the perspectival account of bias attributions developed in the previous chapter. According to that account, a person’s judgments about which people are biased about a given topic will typically depend on their own views about that topic. As we saw there, one implication of the account is this: when one finds oneself in a sufficiently systematic disagreement with another person, one is rationally required to see them not only as mistaken but as biased. Of course, in a case in which one takes the other person to be in error, one might be uncertain as to whether the error is random or the kind of systematic error that’s characteristic of bias. One might then divide one’s credence among these possibilities. But generally speaking, the phenomenon of disagreement at the first-order level will tend to create rational pressures for thinking of those on the other side as biased, pressures that will increase the more the disagreements take on a patterned form.

In contrast, the perspectival character of bias attributions creates no similar rational or psychological pressures to see those with whom one agrees as biased. Indeed, so long as one
believes as one does, one is rationally required to believe that anyone who agrees is correct in their beliefs (although one isn’t rationally required to view them as either biased or unbiased). Even though the perspectival account of bias attributions allows, correctly, that one can recognize a person who shares one’s view about some issue as biased about it (as when the principled opponent of affirmative action recognizes bias in the racist opponent of affirmative action), the mere fact that the person believes as they do won’t itself be counted as a reason to think that they’re biased, given one’s own perspective on the issue. Generally speaking, one will attribute bias to a person with whom one agrees only if one has some independent reason to think that they’re biased. Indeed, in the absence of any prior reason to think that they’re biased or unbiased, it will be natural to treat the fact that they believe as they do as a piece of evidence that they’re unbiased, that is, not disposed to systematically depart from the truth. (Compare: although one might have independent reason to think that a coin that lands heads on a given occasion is biased in favor of tails, the fact that it lands heads isn’t itself a reason to think that it’s biased in favor of tails and is in fact a piece of evidence against that hypothesis.) On the other hand, when it comes to a person who systematically disagrees with what one takes to be true, the mere fact that they believe as they do is itself a reason to think that they are biased, given one’s own perspective. The perspectival account of bias attributions thus explains the asymmetry involving other parties that the psychological hypothesis involving introspection doesn’t explain.9
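The rational pressure at work in such cases can be made vivid with a toy calculation (the particular numbers are stipulated purely for illustration). Suppose that the only hypotheses in play are that a coin is fair and that it is biased towards heads, landing heads three-quarters of the time, and suppose one initially divides one’s credence equally between them. By Bayes’ theorem, after observing n consecutive heads,

\[
P(\text{biased} \mid n \text{ heads}) \;=\; \frac{(0.75)^n \cdot 0.5}{(0.75)^n \cdot 0.5 + (0.5)^n \cdot 0.5},
\]

which comes to 0.6 after a single head and to more than 0.98 after ten. No run of heads logically entails that the coin is biased, but the credence it’s rational to assign to the bias hypothesis climbs quickly all the same; and, by the same reasoning, each head tells against the hypothesis that the coin is biased in favor of tails. On the perspectival account, patterned disagreement exerts the same kind of mounting pressure towards the conclusion that the other party is biased.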
3. Is It a Contingent Fact That Introspection is an Unreliable Way of Telling Whether You’re Biased?

In addition to explaining the asymmetry in our readiness to attribute bias to other people depending on whether their beliefs match our own, the perspectival account of bias attributions also sheds light on the unreliability of introspection itself. It’s a commonplace opinion among psychologists that introspection is, as an empirical matter of fact, an unreliable way of detecting one’s own biases. Even once that fact is duly acknowledged, we might wonder: how contingent a fact is it? Is the unreliability of introspection as a way of detecting bias an empirical discovery of psychology that might have turned out otherwise? It’s at least somewhat natural to think that, although introspection turned out to be an unreliable way of detecting bias, it could have been reliable, just as it’s a generally reliable way of determining whether you’re feeling sad. After all, it seems clearly possible that human eyesight, hearing, or our sense of smell might have been significantly stronger than they actually are: evolution or God might have given us much better tools than we actually have. So too, it seems, our faculty of introspection might have been significantly stronger than it actually is, in which case it might have been a generally reliable detector of bias after all, in much the same way that it’s a generally reliable detector of sadness.

Against this natural thought, the norm-theoretic account of bias, and the perspectival account of bias attributions to which it gives rise, suggest that there are very substantial limits on how reliable introspection might have been when it comes to detecting bias, even in
principle. Let’s see why this is so. First, some clarification about what’s at issue. Of course, as a comparative matter, I don’t deny that introspection might have been somewhat more reliable than it actually is, inasmuch as certain common biases might have been more amenable to detection via introspection than they actually are. Consider, for example, the phenomenon of wishful thinking. To a first approximation, the wishful thinker is someone whose desires that certain things be true unduly influence their beliefs about whether those things are true. The wishful thinker is thus a biased thinker, inasmuch as they systematically depart from the truth in the direction of what they desire. There is, as far as I can see, no particular reason why introspection might not have been more reliable than it actually is when it comes to detecting that species of biased thinking that we call “wishful thinking.”

However, not all biases are relevantly similar to wishful thinking, and the perspectival account of bias attributions suggests that there are very substantial limitations on how reliable introspection might have been as a way of detecting one’s own biases more generally. For example, there are principled reasons, I think, for why introspection simply could not have been as reliable when it comes to detecting bias as it actually is when it comes to detecting sadness. For consider: according to the norm-theoretic account, it’s characteristic of the biased thinker to systematically depart from the norm of truth or accuracy in their judgments. However, whether one’s judgments depart systematically from the truth is generally speaking not something that might have been accessible via introspection, inasmuch as it depends on how things are in the external world. What one takes to be true of the world is encoded in one’s current beliefs; therefore, setting aside pathological cases in which one is alienated from one’s current beliefs, it’s no accident that one will generally fail to detect the kind of systematic divergences between one’s beliefs and reality that are characteristic of the biased believer. And the same point would still hold even if our faculty of introspection was much more powerful than it actually is.

Here it’s worth contrasting the wishful thinker with another paradigm of a biased believer: the person who has, unwittingly, come to robustly rely on what are in fact biased sources of information for his beliefs about some topic. Such a thinker systematically departs from the truth because his beliefs are influenced by the putative information with which he’s presented, and that putative information is itself systematically misleading with respect to the truth. Here, introspection (no matter how carefully or skillfully undertaken) does him no good, for unlike the wishful thinker, his inner psychological processes might very well be impeccable: when he introspects, he concludes correctly that his reasoning from his evidence is flawless, and that he believes exactly as his evidence suggests he should. Given that everything that he has to go on suggests that he does believe the truth, he has no evidence that his beliefs depart systematically from the truth. Introspection therefore affords him no evidence for so much as suspecting that he’s biased, and the same would be true even if we granted him introspective powers with arbitrarily enhanced acuity.
In short, the idea that introspection could have been a generally reliable way of detecting bias—in anything like the way it’s a generally reliable detector of sadness—depends, I think, on an overly internalistic conception of bias, as though all bias is relevantly similar to wishful thinking. More generally, whenever a norm is an externalist norm (e.g. Don’t believe false things), whether one is biased will be partially an externalist matter: one’s biases do not supervene on
one’s internal states and the causal relationships among those states. Thus, even if one could perfectly introspect one’s internal states and the causal relationships among them, this wouldn’t suffice to detect one’s biases.
4. How the Perspectival Account Explains the Bias Blind Spot, as Well as the Biases of Introspection

In the previous two sections, I showed how the perspectival account of bias attributions can explain both why we’re more likely to see people who disagree with us as biased (§2), and also why introspection is—and indeed, had to be—an unreliable detector of bias (§3). Although the essential points are perhaps implicit in those discussions, let’s make fully explicit how the perspectival account explains the bias blind spot itself. Again, for reasons given in the initial discussion of introspection in §1, in order to explain the bias blind spot, it’s not enough to explain why we would be unreliable judges of whether we’re biased (something that would also be true of us if our mistaken judgments about whether we’re biased were random errors). Rather, what needs to be explained is the fact that we’re biased detectors of bias in our own case, beings whose mistakes tend to be overwhelmingly “false negatives” (mistakes to the effect that we’re unbiased even when we’re actually biased) as opposed to “false positives” (mistakes to the effect that we’re biased when we’re not).

That explanation runs as follows. According to the norm-theoretic account of bias, something is biased if it systematically departs from the truth, or is disposed to do so. However, in judging whether something systematically departs from the truth, one will in practice rely on one’s beliefs about what’s true; in effect, agreement with one’s beliefs becomes the standard by which one judges truth and departures from it. When one turns the usual process upon oneself, this way of proceeding gives rise to the following asymmetry. When one’s beliefs do not systematically depart from the truth, they won’t appear to do so, when judged from one’s own perspective. Hence, relying on one’s own perspective won’t yield a false positive: one won’t judge, incorrectly, that bias is present even though it actually isn’t. On the other hand, when one’s beliefs do systematically depart from the truth, it will still appear as though they don’t, when judged from the same perspective. Hence, judging things from one’s own perspective in such circumstances will generate a false negative: one will judge, incorrectly, that no bias is present, even though it is. Relying on one’s own first-order beliefs in judging whether one is biased will thus be a biased method for arriving at such judgments, in the following sense: when one does end up with false beliefs about the presence or absence of bias by relying on this method, these mistakes will have a characteristic pattern, as opposed to being random mistakes. In this way, the perspectival character of bias attributions provides a compelling explanation for the bias blind spot.

Similarly, and for parallel reasons, the general norm-theoretic framework can explain not only why introspection is and had to be an unreliable method for detecting bias in one’s own case, as argued in the previous section, but also why it is and had to be a biased method for
detecting bias in one's own case. Generally speaking, and setting aside pathological cases in which one is alienated from one's own current beliefs, the kind of systematic divergence between one's current beliefs and reality that's characteristic of the biased thinker is simply not the right kind of thing for introspection to detect. (In this respect, this kind of systematic divergence differs from facts about whether one is sad, which might be more or less difficult to detect via introspection on particular occasions, but which are at least always the right kind of thing to be detected by introspection.) For in the usual case, detecting a systematic divergence between a set of beliefs and reality will involve an independent comparison of the two. However, when it comes to one's own current beliefs, introspection provides no basis for such an independent comparison. Rather, what one has access to via introspection is (at most) one's current beliefs; inasmuch as there's a sense in which one can access facts about reality via introspection, that access is not independent of one's current beliefs, but rather mediated by them. It's thus no surprise that consulting introspection will encourage one to think that such systematic divergences aren't present even when they are, but not lead one to mistakenly conclude that such divergences are present even when they're not. In this way, the general norm-theoretic framework can also explain why introspection is, and had to be, a biased method for detecting one's biases.

Notice, however, that inasmuch as the perspectival character of bias attributions provides a compelling explanation of the bias blind spot, it would be a mistake to think that that explanation is primarily a point about the shortcomings of introspection. Rather, given the perspectival character of bias attributions, the relative invisibility of my own biases to me is of a piece with the relative difficulty of my perceiving the biases of those with whom I'm in more or less complete agreement, as compared to the apparent ease with which I seem to perceive the biases of those with whom I systematically disagree. Although my knowledge of what I believe typically involves introspection, introspection doesn't play any similar role in my coming to know that other people who share my beliefs believe as they do. Nevertheless, insofar as the relative difficulty of detecting bias in either case is due to the perspectival character of bias attributions, the essential mechanism is the same in both cases, even though introspection plays a dominant role in the first-person case but not the third-person case.
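The asymmetry just described can be made vivid with a toy simulation (a sketch under simplifying assumptions; the function names, the numbers, and the divergence threshold are all invented for illustration). A judge attributes bias just in case a target's beliefs systematically diverge from what the judge takes to be the truth, namely, the judge's own beliefs. Self-assessment then can yield false negatives but never false positives, exactly the pattern described above:

```python
import random

def judge_bias(judge_beliefs, target_beliefs, threshold=0.5):
    """Attribute bias iff the target's beliefs systematically diverge from
    what the judge takes to be the truth: the judge's own beliefs."""
    mean_divergence = sum(t - j for t, j in zip(target_beliefs, judge_beliefs)) / len(judge_beliefs)
    return abs(mean_divergence) > threshold

truth = [0.0] * 100

def beliefs(bias):
    """Beliefs = truth + random noise + a systematic offset ('bias')."""
    return [t + bias + random.gauss(0, 0.2) for t in truth]

biased_agent = beliefs(bias=1.0)    # systematically departs from the truth
unbiased_agent = beliefs(bias=0.0)

# Self-assessment: one's own beliefs never diverge from themselves, so the
# biased agent's self-verdict is a false negative, never a false positive.
print(judge_bias(biased_agent, biased_agent))      # False (false negative)
print(judge_bias(unbiased_agent, unbiased_agent))  # False (correct)

# Third-person assessment: whoever diverges from me looks biased to me.
print(judge_bias(unbiased_agent, biased_agent))    # True  (correct)
print(judge_bias(biased_agent, unbiased_agent))    # True  (mistaken: the
# unbiased target diverges from the biased judge's own "standard of truth")
```

Note that the mistakes fall exactly where the text says they must: self-assessment errs only in the false-negative direction, while the biased judge over-attributes bias to those who disagree with him.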
5. Against "Naïve Realism", For Inevitability

Let me conclude this chapter by saying something more about how the perspectival account of the bias blind spot that I've sketched relates to the standard explanations offered by psychologists.

Consider first the standard story about introspection described in §1. As noted, the perspectival account is able to explain facts that the standard story doesn't purport to explain, such as the fact that we're more likely to attribute bias to people who disagree with us than to people who agree with us. In this respect, it potentially supplements the standard story. In addition, the perspectival account allows for a deeper explanation of those facts that the standard story does purport to explain. In particular, for reasons given above, the standard story only makes sense if introspection is not only an unreliable but also a biased method for
making judgments about whether one is biased, and this is something that the perspectival account explains.

When psychologists attempting to explain the bias blind spot supplement the standard story involving introspection, they frequently invoke "naïve realism," in the social psychologists' sense of that term.10 Here is the relevant definition from the American Psychological Association's online Dictionary of Psychology:

Naïve realism: in social psychology, the tendency to assume that one's perspective of events is a natural, unbiased reflection of objective reality and to infer bias on the part of anyone who disagrees with one's views (emphases added).
The term "naïve realism" was introduced into social psychology by Lee Ross and his collaborators in the 1990s. In a seminal paper on the topic, Ross and Ward (1996) defined naïve realism in terms of three "tacitly held convictions." The first of these involves the idea that "…my beliefs…follow from a relatively unbiased and essentially 'unmediated' apprehension of the information or evidence at hand" (p. 110, emphasis added). The third of these convictions involves the idea that, when a given person or group fails to share my view about some issue, one possible source of their failure to do so is that "the individual or group in question may be biased…by ideology, self-interest, or some other distorting personal influence" (p. 111, emphasis added).

At first glance, appealing to "naïve realism" in order to explain the bias blind spot might seem quite similar to the explanation that I've offered, which invokes the perspectival account of bias attributions developed in Chapter 3. However, I believe that an explanation of the bias blind spot along the lines I've sketched is superior to one that invokes "naïve realism," for the following reason.

Notice that, as in the representative definitions provided here, "naïve realism" is typically defined in terms of assumptions about bias on the part of the believer: we tend to assume that our own views are unbiased, that those who disagree with us are biased, and so on. However, what we as theorists are trying to explain when we set out to explain the bias blind spot is why people so frequently see bias in others but not in themselves. Thus, in invoking "naïve realism" in this context, the theorist is in effect postulating a tendency to think that our own beliefs are unbiased and that the beliefs of people who disagree with us are biased, in order to explain why we so often see bias in people who disagree with us but not in ourselves. But that explanation isn't very explanatory or illuminating, because the fact that the theorist is attempting to explain is so close to the fact that's being invoked in order to explain it. On reflection, it should strike us as an unsatisfying explanation, in the way that explanations that appeal to tendencies often are, when the postulated tendency is characterized in terms that are too similar to the observed behavior itself.

Compare the weakness of the following explanation: I frequently support candidates from political party X over candidates from political party Y because I have a tendency to support candidates from party X over those from party Y, or because of a "tacit conviction" that candidates from X are more worthy of support than candidates from Y. The weakness of such an explanation is not a matter of its being false, but rather of its being insufficiently illuminating—especially if there is a more illuminating explanation to be had, as there almost certainly will be, in any realistic case involving political behavior of the
relevant kind. The same is true of that part of the standard psychological explanation of the bias blind spot that invokes "naïve realism," since naïve realism in effect already simply takes for granted or postulates a self-other asymmetry in our thinking about who is biased and who isn't. By contrast, the perspectival account of bias attributions developed here does not do this. Rather, it shows how the relevant self-other asymmetry emerges from the ground up, given the rational connections between one's first-order views about a given topic and one's higher-order judgments about who is and who isn't biased about that same topic.

Again, this isn't to deny that human beings have a tendency to think that their own beliefs are unbiased reflections of reality and that the beliefs of those with whom they disagree might be biased, as the doctrine of naïve realism posits. Surely we do have such a tendency. (Compare: if I support the same political party election after election, surely I do have a tendency to support them, notwithstanding the unilluminating character of an explanation that simply cites the tendency in accounting for my voting behavior.) The perspectival account of bias attributions can be viewed as an explanation of the tendency itself, as well as an explanation of its manifestations.

Indeed, of the three mechanisms that are frequently cited by psychologists in their explanations of the bias blind spot—motives to self-enhancement, naïve realism, and our putative tendency to rely on our folk theories in attributing bias to others while relying on introspection in our own case—I don't deny that any of these is a genuine psychological phenomenon, or that it contributes to some extent to the bias blind spot. Plausibly, as the social psychologists think, multiple factors contribute to the bias blind spot. Of course, questions about the respective contributions of the various mechanisms are ultimately empirical questions, just as it's ultimately an empirical question to what extent the perspectival character of bias attributions contributes to the realization of the bias blind spot in particular human subjects. No such empirical inquiry will be undertaken here. But what I think can be safely said even in the absence of any such inquiry is encapsulated in the following inevitability thesis:

✓ INEVITABILITY THESIS: Even if no other psychological mechanism were operative, the perspectival character of bias attributions guarantees that human beings would still suffer from "a bias blind spot."
The bias blind spot is best understood as a disposition or tendency: we tend to see ourselves and our judgments as more objective and less biased than other people and their judgments. As we've seen, there are a number of different psychological mechanisms that would give rise to this tendency. Indeed, in principle, two different people could each have the bias, even if two entirely different psychological mechanisms were responsible for it—say, a motive to self-enhancement in the one case, and an illusion involving introspection in the other. In general, anyone who has the relevant tendency counts as having the bias blind spot, regardless of the psychological details of how the tendency is realized in her case. The bias blind spot is thus a multiply realizable disposition.

I believe that in this respect, it's representative: in general, biases of people are best understood as multiply realizable dispositions. Having looked in detail at this specific bias, which is in some respects paradigmatic, the next chapter zooms out and explores the more general topic of biased people.
1 Seminal work in this tradition of research includes Armor (1999), Pronin, Lin, and Ross (2002), Pronin, Gilovich, and Ross (2004), and Ehrlinger, Gilovich, and Ross (2005). Ross, Ehrlinger, and Gilovich (2016) is a useful overview. Among recent philosophical treatments, the bias blind spot is central to Ballantyne's (2019) discussion of disagreement. Sorensen (1988) is an ingenious philosophical exploration of the general topic of blindspots.

2 One might think that there is nothing especially interesting here, on the grounds that the bias blind spot is simply a special case of a much more general bias, the so-called "better than average effect." After all, it's long been known that large majorities of people rate themselves as above average in numerous ways. As Gilovich (1991:171) summarizes: "One of the most documented findings in psychology is that the average person purports to believe extremely flattering things about him or herself…For example, a large majority of the general public think that they are more intelligent, less-prejudiced, and more skilled behind the wheel of an automobile than the average person." Seen in this wider context, isn't the bias blind spot just more of the same? Interestingly, however, psychologists who have studied the phenomenon do not believe that the bias blind spot is simply a special case of the better than average effect, but that it's in large part a sui generis phenomenon. For discussion of this point, see especially Scopelliti et al. (2015), who argue that the bias blind spot is a "distinct metabias," and Pronin, Gilovich, and Ross (2004).

3 Indeed, there is evidence that suggests that the very act of explicitly considering the question of whether one's belief is biased and trying to answer that question by introspection tends to leave people even more confident that they're actually unbiased or objective (Ehrlinger, Gilovich, and Ross 2005). One considers the question of whether one's belief is biased; in response, one introspects and finds no evidence that it is; one then treats this absence of evidence as a positive reason to think that one is unbiased. Moreover, one might also take the very fact that one considered the question and sincerely inquired into its answer as further evidence that one is unbiased—after all, is that the sort of thing that a biased person would do?

4 Summarizing the phenomenon, Ross, Ehrlinger, and Gilovich (2016:5) write: "In short, people appear to consult abstract theories about bias when assessing bias in others, but rely on introspection when assessing bias in themselves—especially with respect to specific judgments they have made. This asymmetry, in turn, encourages people to see themselves as more objective than their peers."

5 For representative endorsements, see especially Ehrlinger, Gilovich, and Ross (2005), Pronin, Gilovich, and Ross (2004), Pronin and Kugler (2007), and Ross, Ehrlinger, and Gilovich (2016). As the formulation in the text suggests, a common view is that multiple mechanisms contribute to the bias blind spot, including the introspection illusion, general motives to self-enhancement, and our alleged commitment to "naïve realism." I discuss these other alleged mechanisms below. On the general unreliability of introspection as a source of accurate information about one's true motives, personality, decisions, and so on, see Nisbett and Wilson (1977) and Wilson and Brekke (1994).
6 Compare: when a doctor attempts to determine whether you have some disease, there are two types of mistakes to worry about. First, they might tell you that you don't have the disease, even though you actually have it. Second, they might tell you that you do have the disease, even though you don't. In our case, the disease is being biased, and the task is to accurately diagnose oneself on the basis of introspection.

7 See especially Robinson et al. (1995), as well as Pronin, Gilovich, and Ross (2004:789) and the further references cited there. Indeed, the phenomenon admits of degrees: the more people disagree with us, the more we view them as biased (Pronin 2007:39). I note in passing that, although this phenomenon isn't especially surprising, it's also not completely trivial, inasmuch as the mere fact that one holds a view doesn't guarantee that one won't recognize that others who hold that same view are biased about the issue. (For example, a person who opposes affirmative action for principled reasons might recognize that a racist who shares his opposition is biased about the question.)

8 Indeed, it's tempting to push things as far as possible in the other direction, in the following way. Perhaps the bias blind spot (the fact that we tend to see ourselves as less biased than other people) is the limiting case of the more general phenomenon, namely, the fact that we see those who agree with us as less biased than those who disagree with us, since our agreement with ourselves is the most perfect of all. But this is only one possibility, and I don't want to insist upon that picture. The important point for my purposes is the more modest point that we shouldn't expect the two phenomena to receive wholly distinct explanations. In recognition of this point, psychologists who endorse the standard story about the way in which introspection
contributes to the bias blind spot often supplement that story by appealing to our alleged commitment to "naïve realism." I discuss this idea below.

9 Notice also that the perspectival account of bias attributions seems well-positioned to explain another important aspect of the phenomena mentioned earlier: the fact that the greater our disagreement with someone about an issue, the more biased we consider them about that issue.

10 As the term is used by social psychologists in this context, the meaning of "naïve realism" resembles, but differs from, the meaning of "naïve realism" as that term was originally used by philosophers. Among other things, as it has traditionally and continues to be used in philosophy, the term refers to a thesis about sense perception in particular. But as the representative definitions that follow make clear, this isn't true of the term as it's used in social psychology.
5
Biased People

According to the norm-theoretic account of bias, biases typically involve systematic departures from norms or standards of correctness. This core idea isn't limited in its application to understanding biased people. For example, as we've seen, it also naturally applies to, among much else, biased models (which systematically over- or under-predict some value); biased texts (which systematically mislead about their subject matters); and biased societies (which depart from the norms of justice in systematic ways). Nevertheless, the case of people is clearly a particularly central one, as witnessed by the fact that many of the norms that we've had occasion to discuss apply exclusively to either believers (as in the case of epistemic norms) or agents (as in the case of moral or prudential norms). This chapter is devoted to exploring it.
1. Biases as Dispositions

Typically, when one attributes bias to an individual person or to a group of people, one attributes to that individual or group a certain tendency or disposition.1 For example, a judge or court that's biased against people of a certain race is disposed to rule against people of that race. Similarly, to claim that most human beings have status quo bias is to attribute to human beings a disposition to favor the status quo merely because it's the status quo. (Compare: to say that a coin is biased in favor of heads is to attribute to the coin a certain disposition or dispositional property. Contrast: to say that a particular belief, judgment, or verdict is biased is not to attribute a disposition or dispositional property to the belief, judgment, or verdict.)

When the thesis that biases of people are dispositions is combined with the norm-theoretic account of bias, we arrive at the following formulation: a biased person is disposed to systematically depart from a norm or standard of correctness. Consider the case of the basketball referee. We have the ideal of the calls that the referee ought to make, given what actually occurs in the game that she's officiating and the rules of the sport. Similarly, there are the verdicts that the judge should reach, given the facts presented in court plus the relevant pieces of law. The calls that ought to be made in the basketball game, or the verdicts that should be made in court, provide a standard of correctness which the efforts of actual referees and actual judges might either meet or fail to meet, to varying degrees. The biased
referee or judge is disposed to depart from that standard: in a range of possible cases, they will judge in a way that systematically differs from the way in which they should have. Similarly, when status quo bias can correctly be attributed to an agent, the agent is disposed to systematically depart from the practical norms that govern choice and action.

When a person counts as biased in virtue of having a certain disposition, the disposition in question might be a relatively stable and enduring character trait, a robust aspect of their psychology. On the other hand, it might not be any such thing. The disposition might instead be fleeting and ephemeral, and vanish as soon as the external circumstances in which the biased agent is embedded are altered. Recall a character introduced in Chapter 3, the self-interested referee who has been credibly threatened by the mob, to the effect that his end is near unless he sees to it that Team A triumphs over Team B in tonight's game. If the mob's threats are effective, then the referee will be disposed to make calls that favor Team A over Team B, to depart from the relevant norm of correctness in a systematic way.2 He thus counts as biased in favor of Team A in any context in which tonight's game is salient. However, his disposition to favor Team A might vanish as soon as the game concludes and he's no longer under threat. In contrast, in other cases of bias, the disposition in question might be deeply ingrained. For example, perhaps it's hard-wired into the very structure of our minds, as part of our innate genetic endowment. Or perhaps it's the result of living in a society in which the bias is pervasive and runs deep.

A bias can be real even during periods when it's not manifested, and indeed, even if it's never manifested. In these respects, biases are like dispositions more generally. (Stock example: it can be true to say that a cup is fragile, even if it's never dropped and so never breaks.) As with dispositions more generally, so too with biases: even if a judge is biased against people of a certain kind, this bias might never manifest itself—if, for example, no one of that kind ever appears in his court, or if all of the cases in which they do appear are so clear-cut that the judge's bias is never triggered, in a way that it would be in less clear-cut cases. In that case, the judge is disposed to systematically depart from a norm, although no such departures actually occur.

When someone or something has a certain disposition, it's generally not a brute fact about them that they have it. For example, it's not a brute fact about the cup that it's fragile and so disposed to break when struck; rather, the fragility of the cup depends on its underlying chemical composition. Once again, as with dispositions more generally, so too with biases: when a person is biased at a given moment or during some interval of time, there will generally be something else that's true of them that underwrites their being in that state, regardless of whether they are then manifesting the relevant bias. Indeed, in some cases, the fact that a person is biased in a certain way at a particular time might be a matter of their being biased in other ways at that time. Consider, for example, the project of trying to explain status quo bias in terms of other, allegedly more fundamental biases that also tend to have a broadly conservative upshot for behavior and choice.
Some theorists, for example, hypothesize that status quo bias is grounded in loss aversion, our tendency to disvalue losses more heavily than we positively value corresponding gains; or in omission bias, our tendency to rate the harms of omissions (possible actions that we don't perform) as less severe than the harms caused by actions that we do perform; or in our
tendency to commit the sunk cost fallacy; or in a conservative bias in favor of valuable things already in existence over valuable things that aren't yet in existence.3

Although some biases of people might be underwritten by other, more fundamental biases that they also possess, presumably this isn't true for all biases, on pain of an implausible regress. But even in the case of biases that aren't underwritten by other biases, it's presumably not a brute fact that the person is biased in that way, just as the fragility of the cup is not a brute fact about it.

Biases of people are typically multiply realizable, at least in principle. In this respect too, they resemble dispositions more generally. The property of being poisonous is a dispositional property, but there is no underlying property that is had in common by every member of the class of poisonous substances: we count as poisonous any substance that produces ill effects on those who consume it, regardless of its constitutive ingredients. (The property of being poisonous is thus not only multiply realizable but also multiply realized.) Compare again the example of status quo bias. Any agent who is disposed to systematically depart from the correct practical norms by favoring options that preserve the status quo over more choiceworthy alternatives counts as having status quo bias, regardless of the details about how that disposition is realized or instantiated. (Indeed, such an agent counts as having status quo bias even in the unlikely event that it turns out that nothing instantiates or realizes that bias.)

Impressed by the well-documented tendency of human beings to prefer the status quo, we might ask: How is status quo bias realized in human beings? That question has a certain presupposition, namely, that status quo bias is realized in the same way (or more or less the same way) in all human beings (or nearly all human beings) who possess it. That assumption might very well be a reasonable one to adopt as a working hypothesis, but it's an assumption that could in principle turn out to be false. As noted above, theorists have offered a variety of hypotheses about what explains why human beings tend to be biased in favor of the status quo, hypotheses that invoke allegedly more fundamental cognitive biases. Perhaps in some human beings status quo bias is realized in one way while in other human beings it's realized in another.

Even if status quo bias is similarly realized in every human being who has it, it needn't be realized in that way in all possible agents. Perhaps the Alpha Centaurians resemble us in that they too are disposed to systematically depart from the correct practical norms by unduly favoring the status quo. If so, that's enough for us to share the bias. It might later be discovered that the disposition to choose in this way has a radically different basis in their case than in ours. Indeed, it might even be discovered that in their case the disposition has no basis at all—it's a so-called "bare disposition," and so it's simply a metaphysically brute fact about the Alpha Centaurians that they are disposed in this way. But no such discovery should lead us to conclude that the Alpha Centaurians are actually free from status quo bias, after all.

In fact, I think that a human being could share one and the same bias with something that has no beliefs, or preferences, or any mental states at all. Consider the human weather forecaster and the weather forecasting model.
Notice first that, on anyone’s view, the person and the model might have much in common, as potential bearers of bias. They might both be biased about the same type of questions, and in the same way, and to exactly the same extent. Moreover, the explanation for why each counts as biased (at least in one, perfectly good, noncausal sense of “explanation”) might be identical in the two cases: they both count as biased
in virtue of departing from the norm of accuracy in a systematic way. (Of course, one realistic way in which all of these similarities could arise is if the weather forecaster arrives at his own predictions by consistently and uncritically relying on the biased model.) On any plausible view then, the bias of the person and the bias of the model are at least very similar biases. But I see no good reason not to take the final step and say that in these circumstances they share one and the same bias. If that's right, then it supplies a good reason not to identify the person's bias with any mental state or combination of mental states that the model obviously does not possess.

Compare the ongoing debate among philosophers and psychologists about how to model the phenomenon of implicit bias, as it's observed among human beings. Here we find a remarkable bevy of options. Is implicit bias best understood in terms of certain associations among concepts? (See, e.g., Gawronski and Bodenhausen 2006, Holroyd 2016, Byrd 2019.) Or is it better understood in terms of holding certain beliefs? (Mandelbaum 2016, Frankish 2016, Karlan 2020.) Other prominent contenders include imaginings (Sullivan-Bissett 2019, Welpinghus 2020); "aliefs" (Gendler 2008, 2011, Madva 2016b, Brownstein 2018); "patchy endorsements" (Levy 2015); and "in-between beliefs" (Schwitzgebel 2010). At this stage of the investigation, the multiplication of models or theoretical options is a good thing, and in accordance with sound methodology. Insofar as any model provides a coherent potential explanation of the relevant phenomena, it's a potential realizer of implicit bias.

The question of how implicit bias is typically instantiated in human beings—or the question of how this or that more specific implicit bias is instantiated in human beings4—is an excellent question to pursue, for multiple reasons. First, our learning the correct answer might have significant practical implications. For example, which answer turns out to be correct might make a difference to whether, or the extent to which, we should hold people morally responsible for their implicit biases, and the behavior to which those biases give rise.5 Similarly, which answer turns out to be correct might have significant implications for which policies we should adopt in attempting to compensate for or minimize the negative effects of implicit bias. Even in the unlikely event that which answer turns out to be correct has no practical implications, the question of how (this or that) implicit bias is typically instantiated in human beings would still be well worth pursuing, for it is, I think, a theoretically interesting question in its own right.

But although they are certainly worthwhile questions in their own right, questions about how various implicit biases are instantiated in human beings, or questions about their characteristic psychological bases, are distinct from the question of what implicit bias is. It would thus be a mistake to identify implicit bias itself, or the more specific implicit biases had by human beings, with their psychological bases in human beings, even on the assumption that there is some unique psychological basis for each implicit bias. More generally, we should distinguish sharply between questions about what realizes a given bias in human beings and questions about what a given bias is.6

Biases, like most other dispositions, are gradable: they admit of degrees.7 As noted in Chapter 1, a bias might be more or less severe.
A basketball referee might be egregiously biased against one of the two teams, or only slightly and subtly biased against it. In the latter case, he might for the most part officiate the game in the manner of an unbiased referee and
be disposed to depart from that standard only in extremely marginal cases in which the actual fact of the matter is difficult to discern for all involved, cases in which he disproportionately favors one of the two teams over the other. Similarly, a coin might be ever so slightly biased in favor of heads, or more significantly biased. Any coin which is biased departs from the standard provided by the perfectly unbiased coin, and the extent to which a given coin is biased depends on the extent of the departure from that ideal.

Of course, it's vastly improbable that any actual coin is perfectly unbiased, given sufficiently demanding standards of precision. Assuming that that's the case, does it follow that there really aren't any unbiased coins after all? I think that it would be a mistake to draw that conclusion, for the same reason that it would be a mistake to conclude there really aren't any flat surfaces on the grounds that every surface will turn out to have some bumps on it if examined under a sufficiently powerful microscope. In both cases, close enough is good enough; and in both cases, what counts as "close enough" is plausibly a matter that's both vague and context-sensitive.8 Similarly—and importantly—we might correctly count some actual people (e.g. some judges) as unbiased if they approximate the salient ideal sufficiently closely, and even if the way that they fall short of the ideal is exactly the kind of departure that would justify a charge of bias if it were more pronounced than it actually is. Here as elsewhere, one can count as a genuine instance even if one falls short of the Platonic form.

In Chapter 1, I endorsed the claim that a group of people might be biased even if each of its constituent members is unbiased. Notice that that claim seems to follow more or less immediately from the current point, namely that an individual might be correctly classified as unbiased with respect to some cluster of issues in virtue of approximating the relevant ideal sufficiently closely. Imagine an organization all of whose members qualify as unbiased in virtue of approximating the relevant ideal sufficiently closely: the extent to which any individual falls short of the ideal is less than, or at least no greater than, that of individuals outside the organization who would correctly be described as unbiased. Suppose, however, that the individuals within the organization all fall short of the relevant ideal in the same way, or in the same direction. Even if each of the individuals counts as unbiased because their departure from the ideal of perfect objectivity is within the relevant threshold, their individual departures might aggregate in such a way that the group as a whole counts as biased, because the extent to which it departs from the salient ideal of perfect objectivity is outside of any reasonable precisification of "unbiased."9
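A toy calculation makes vivid how this can happen (the figures are invented; "threshold" stands in for whatever contextually salient standard fixes what counts as "close enough", and the panel's verdict is assumed, for illustration, to be the sum of its members' scores):

```python
# Invented illustration: five panelists each deduct a small amount from
# candidates of group X. Each individual departure falls within the
# contextual threshold for counting as "unbiased"; because the departures
# all point in the same direction, the panel's combined departure does not.

INDIVIDUAL_THRESHOLD = 1.0   # max tolerable systematic departure per scorer
GROUP_THRESHOLD = 1.0        # the same standard applied to the panel's total

individual_departures = [0.5, 0.4, 0.6, 0.5, 0.5]  # all in the same direction

for i, d in enumerate(individual_departures, 1):
    print(f"Panelist {i}: departure {d} -> unbiased? {d <= INDIVIDUAL_THRESHOLD}")

panel_total = sum(individual_departures)  # same-direction departures add up
print(f"Panel total departure: {panel_total} -> unbiased? {panel_total <= GROUP_THRESHOLD}")
# Each panelist passes; the panel as a whole does not.
```

Had the individual departures pointed in different directions, they would have tended to cancel rather than aggregate, and the group could have come out unbiased.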
2. Bias as a Thick Evaluative Concept

In Chapter 1, I noted that attributions of bias frequently involve negative evaluations. In any normal context, the claim that a judge, a referee, or a journalist is biased would be taken as a criticism as opposed to an evaluatively neutral description. The norm-theoretic account explains this as follows: in paradigmatic cases, to describe a person as "biased" is to attribute to them a disposition to systematically depart from a genuine norm, and the negative evaluation reflects their falling short with respect to that standard.
Indeed, in both ordinary and academic discourse, "bias" and its cognates often function like thick evaluative terms, in the ethical theorist's sense.10 As traditionally understood, thick evaluative terms have two characteristic aspects:

(1) First, it's taken to be characteristic of a thick term like "kind," "courageous," or "selfish" that it has relatively rich descriptive content, of a sort that often allows for attributions of the corresponding property to be empirically confirmed or disconfirmed in relatively straightforward ways. For example, the claim that a person is courageous would be strongly disconfirmed upon observing that they run away whenever danger is present.11 In this respect, thick evaluative terms seem to resemble paradigmatic non-evaluative descriptive terms such as "fragile" and "red" and to contrast with more abstract, "thin" evaluative terms such as "right" or "impermissible." (Typically, one can confirm or disconfirm that something is red by looking at it in good viewing conditions, and that something is fragile by dropping it on a hard surface. On the other hand, the claim that a given action is right, or alternatively, that it's impermissible doesn't seem to similarly admit of straightforward empirical confirmation or disconfirmation.)

Notice that in this respect, "bias" and its cognates seem to resemble paradigmatic thick terms. Thus, the claim that a judge is biased against members of a certain group is a claim that seems to admit of empirical confirmation or disconfirmation. For example, the claim would be disconfirmed (albeit not conclusively) if it turned out that the judge had frequently ruled in favor of members of the group in past cases, including in cases where they might have been expected not to do so. On the norm-theoretic account, the fact that we're sometimes in a position to empirically confirm or disconfirm charges of bias in relatively straightforward ways is due to the fact that we're sometimes in a position to empirically confirm or disconfirm the claim that the person is disposed to systematically depart from a genuine norm.

(2) Secondly, and on the other hand, it's taken to be characteristic of thick evaluative terms that they resemble thin evaluative terms and contrast with paradigmatic descriptive terms precisely in that, like the former but unlike the latter, they at least appear to have evaluative content. And as we've emphasized, "bias" and its cognates are often used to make evaluations. Indeed, not only are attributions of bias frequently used to make evaluations, but they are also frequently used to make or implicate normative claims: claims about what we should think, prefer, or do. For example, in a conversation about whether a defendant should receive a new trial or have his conviction overturned, the claim that the judge or a member of the jury was biased against him seems to speak directly to what's at issue.

How fundamental is the charge of bias, as a negative evaluation? Notice that particular biases are often characterized in explicitly evaluative, non-neutral terms.
For example, hindsight bias is the tendency to overestimate the extent to which an outcome could have been foreseen ahead of time, given the evidence that was then available.12 Cases of anchoring bias involve giving excessive weight to an initial value or estimate, and modifying one’s initial judgment or estimate insufficiently in the light of later information.13 Recency bias is the tendency to overweight the current time or recent past compared to the more distant past.14 The bias blind spot is the tendency to underestimate the possibility that
one is biased compared to one's peers. A judge who is biased against African-American defendants is disposed to rule against such defendants even in cases in which he should rule in their favor, given the facts of the case plus the relevant pieces of law. And so on.

Moreover, notice that when a bias is characterized in explicitly evaluative or normative terms in this way, the type of error that's the characteristic upshot of the bias might very well be committed by someone who is not manifesting that bias, as in cases of random error. Thus, a person might overestimate the prior predictability of a past event because he's manifesting hindsight bias, but someone else might make the very same error of overestimating its predictability, even though she does not suffer from hindsight bias. (Suppose that although the second person is of course fallible when it comes to estimating the prior predictability of past events, she's in general just as likely to go wrong by underestimating an event's predictability as by overestimating it.) Similarly, although one's ultimate estimate of some quantity or value might be too close to one's initial estimate because of anchoring bias, the very same mistake might be made by a person who isn't in the grip of any bias, but as a result of random error. A judge might mistakenly rule against an African-American defendant in a case in which he should have ruled in their favor because he's biased against African-Americans, but the same mistaken ruling might be made by an unbiased judge. Of course, when the unbiased judge mistakenly rules against the African-American defendant, it's no less true that he ought not to have ruled in the way that he did than when the same ruling is handed down by the biased judge.

Although the fact that a judge mistakenly rules against African-Americans in some cases in which he should rule in their favor doesn't entail that he's biased against African-Americans, a judge counts as biased against African-Americans only if he's disposed to make such mistakes, a type of mistake that can be characterized in independent terms, without invoking the notion of bias. Thus, there is a sense in which the charge of bias, when it amounts to a negative evaluation of the agent or thinker, is typically not fundamental. That is, when a charge of bias is in order, there is typically some other failing of which the agent or thinker is guilty, which can be characterized independently of the bias, and which the agent might have been guilty of, at least in principle, even if the bias hadn't been operative (even though, as things actually happened, his being biased is in fact what led him to commit the error, or fall short of the relevant standard).

One upshot of this is that the claim that someone is biased will often presuppose substantive and potentially controversial evaluative or normative claims about how it's appropriate to think or act in given circumstances, claims that aren't themselves about bias. Similarly, one way of disputing the charge of bias is by disputing these more conceptually fundamental evaluative or normative claims.15 In Chapter 3, we noted that attributions of bias are frequently contentious and explored one common source of this: their fundamentally perspectival character. Consistent with what was said there, the current line of thought points to another, complementary source for their contentiousness.
Namely, attributions of bias in the pejorative sense typically presuppose more fundamental substantive evaluative or normative claims that aren’t themselves about bias; and notoriously, substantive evaluative or normative claims are often (although not always) more difficult to adjudicate than purely descriptive, non-evaluative, non-normative claims. Claims of bias will thus inherit whatever contentiousness attaches to the conceptually
more fundamental evaluative or normative claims on which their truth depends.

When an agent is biased in the pejorative sense, they are typically guilty of some other failing or shortcoming that in some respects is more fundamental. However, this need not diminish the significance (moral or otherwise) of the fact that the agent is biased as opposed to merely guilty of the more fundamental failure or shortcoming. Indeed, it's perfectly consistent with what's been argued here that the bias and the failure to which it leads are morally significant, even if the characteristic failure that is its manifestation would not be, if that mistake had occurred as a result of random error.

Compare the case of lying. In paradigmatic cases of lying, the liar misleads his audience by telling them false things. The liar thus fails to comply with the norm of truth-telling, but the same failure might be manifested by someone who isn't a liar, but who tells his audience false things unintentionally. Lying is in this respect like being biased in the pejorative sense: when the charge is in order, there is typically another failing on the part of the agent which might very well have been committed by someone who isn't lying (or not biased). The person who unintentionally misleads his audience stands to the liar as the agent who departs from a norm by committing a random error stands to the biased agent. Still, the liar might be an appropriate object of moral censure or blame, even if the person who unintentionally misleads his audience through false testimony isn't. Similarly, even in a case in which an unbiased agent who departs from a genuine norm would be an inappropriate object of censure or blame, it doesn't follow that the agent who commits the same error out of bias is.
3. Biased Believers, Biased Agents

When status quo bias manifests itself, it affects our preferences among options, and our behavior. It's therefore a bias of agency, one that afflicts us in our capacities as agents. On the other hand, other biases—for example, the bias blind spot—are biases of belief: they primarily afflict us, at least in the first instance, as believers. Still other biases—for example, some racist and sexist biases—might manifest themselves either in ways characteristic of biases of agency, or in ways characteristic of biases of belief, or both at once. For this reason, the categories "biases of agency" and "biases of belief" aren't mutually exclusive, since some biases might be best understood as both, with neither prior to the other.

Are the two categories collectively exhaustive, at least when we're concerned with biases of human beings? No, for human beings also exhibit various affective biases, which relate in the first instance neither to our actions nor to our beliefs but rather to our emotions and other affective states. A person might count as biased against Xs because of how they act, what they think, or how they feel.

The paradigmatic profile of a biased person is that of someone who is both a biased thinker and a biased agent, as well as a biased "feeler." Similarly, the paradigmatically unbiased person is also a "pure" case: someone who is unbiased in her thoughts, her deeds, and her affective states. But mixed, impure cases are also possible. Imagine someone who tends to believe the worst about Xs, but who treats the Xs impeccably as far as her
observable behavior is concerned. (Perhaps she fears that if she did treat Xs badly, this would be detected, and she would be sanctioned. Or perhaps she's alienated from her tendency to believe the worst about Xs, which she regards as a regrettable and irrational artifact of having grown up in a deeply prejudiced environment, and so she bends over backwards in a successful attempt to not allow the negative opinions engendered by that tendency to influence her overt behavior.16) As a believer, she resembles the paradigmatically biased person, but as an agent, she resembles the paradigmatically unbiased person.

On the other hand, someone might treat Xs in ways that fall short of what the correct norms of behavior demand, and worse than he treats non-Xs, without harboring biased beliefs about Xs. (Perhaps he does so out of visceral, non-cognitive dislike; or because his social environment is so hostile to Xs that he fears that he'll be punished or sanctioned if he fails to treat them badly.17) He thus resembles the paradigmatically biased person as an agent while resembling the unbiased person as a believer.

Given that the paradigm of the biased person is both a biased thinker and a biased agent, which of the two mixed cases—if either—counts as a person who is "biased against Xs"? In general, either of the impure cases will count as a person who is biased against the Xs. In general, but not always. Here again context is important. In a context in which behavior matters but belief doesn't, the person who is biased qua agent but unbiased qua believer counts as biased, while the person who is unbiased qua agent although biased qua believer counts as unbiased. On the other hand, in another context it might be belief and not behavior that's of interest and importance, and in that context, we will classify the same two people in the opposite way. Consider, for example, the following two cases:

THE FIRST OLYMPIC JUDGE: An Olympic judge consistently gives his fellow countrymen scores that are higher than they deserve, and higher than the scores that he gives to their competitors from other countries for performances of the same objective quality. He does this not because he's under any illusions about the relative quality of his countrymen's performances—in particular, he doesn't believe them to be stronger than they actually are—but because he fears the repercussions that he'll face upon returning home from the country's authoritarian rulers if he doesn't award his countrymen high scores.

THE SECOND OLYMPIC JUDGE: An Olympic judge is disposed to believe that her fellow countrymen's performances are better than they actually are, and because of this she consistently overestimates the quality of their performances relative to those of their competitors. (Her patriotism leads her to perceive performances by her countrymen as better than they actually are.) However, because she's aware of this bias, she deliberately lowers the scores that she would otherwise give to her countrymen. As a result of this practice, they are at no advantage over their competitors in any competition that she judges. But they are also at no disadvantage: in fact, the second Olympic judge is so skillful at compensating for her biased beliefs when it comes time to write down her scores that she manages to fully compensate for bias at the level of belief without overcompensating for it.
The athletes that she judges thus receive exactly the same scores that they would have received in the counterfactual scenario in which her initial judgments are unbiased and she doesn’t subsequently alter those initial judgments.
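The second judge's practice amounts to subtracting an offset that exactly matches the distortion in her perception. A minimal sketch of the arithmetic (all numbers invented for illustration) shows why full compensation leaves her countrymen neither advantaged nor disadvantaged, while overcorrection would tip into bias in the other direction:

```python
# Invented numbers: the judge's perception inflates compatriots' scores by a
# fixed offset; she corrects by subtracting her estimate of that offset.
PERCEPTUAL_OFFSET = 0.7

def perceived(true_quality: float, compatriot: bool) -> float:
    return true_quality + (PERCEPTUAL_OFFSET if compatriot else 0.0)

def awarded(true_quality: float, compatriot: bool, correction: float) -> float:
    return perceived(true_quality, compatriot) - (correction if compatriot else 0.0)

# Full compensation (correction == offset): compatriots receive exactly what
# an unbiased judge would award.
print(awarded(9.0, compatriot=True, correction=0.7))   # 9.0
# Overcompensation (correction > offset) would disadvantage them instead:
print(awarded(9.0, compatriot=True, correction=1.0))   # 8.7
print(awarded(9.0, compatriot=False, correction=0.7))  # 9.0
```

The same arithmetic prefigures the overcompensation phenomenon discussed in §5 below: set the correction too high and the attempted cure becomes a bias of its own.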
Consider a context in which we’re asking who among the judges is biased, where our purpose in asking that question is to gather information in order to predict which competitors are likely to win a medal. In that context, the first judge counts as biased while the second one doesn’t. Indeed, in that context, the fact that the first judge has unbiased beliefs about his countrymen’s performances seems not to matter at all: far from being a marginal case, he counts as biased every bit as much as those judges who are biased in both their beliefs and
their actions, given the way that he scores the competition. But we can also imagine a different context in which the second judge but not the first counts as biased. Suppose, for example, that we're psychologists investigating how often being from the same country causes judges to perceive their countrymen's performances as stronger than those performances actually are. When we ask the question "Who among the judges is biased?" in this context, the second judge, but not the first, should be included in a complete answer to our question.

There are then at least some special contexts in which belief counts for so little compared to behavior that we're prepared to give the biased believer a pass for his beliefs and count him as "unbiased" on the basis of his behavior. And similarly, there are some special contexts in which behavior counts for so little compared to belief that being an unbiased believer is enough to count as "unbiased," regardless of how the believer acts. But in most contexts in which we're concerned with bias at all, we will care about both the subject's beliefs and their behavior, at least to some extent (even if not always in equal measure). Perhaps at least in part for this reason, it seems that in most contexts we would without hesitation apply the label "biased" to both someone who is biased in her beliefs (regardless of how things are with her behavior) and also to someone who is biased in her behavior (regardless of how things are with her beliefs)—although here as elsewhere, the fact that we would readily attach the same label to both shouldn't cause us to lose sight of relevant differences in the underlying phenomena.

Thus, there are at least some contexts in which one person might count as biased against the Xs in virtue of being a biased agent (even if she is impeccably unbiased as a believer), while another person might count as biased against the Xs in virtue of being a biased believer (even if she is impeccably unbiased as an agent). This fact has the following consequence:

✓ RADICAL HETEROGENEITY: In principle, two people might both count as biased against Xs (alternatively: biased in favor of Xs), even though they share none of the same X-related mental states, perform none of the same X-related actions, and share none of the same X-related behavioral or cognitive dispositions.

Plausibly, the case for heterogeneity claims of this general sort is further strengthened, and the extent of the heterogeneity at issue is even more pronounced, when we take into account possibilities involving biased affect as well as biased belief and action. For it seems as though one could construct cases, parallel to those given above, in which a person counts as biased against Xs in virtue of her affective states or affective dispositions, even though for one reason or another she perfectly resembles the completely unbiased person with respect to her beliefs and actions. Consider, for example, someone who inwardly becomes extremely upset or nervous whenever she's in close proximity to people of a certain race, even though this isn't reflected in either her beliefs or behavior, or even in her dispositions to belief or behavior. I think that in many contexts and for many purposes, we would readily count such a person as biased against members of that race, notwithstanding her perfect resemblance to the thoroughly unbiased person at the levels of both belief and action.18
4. Biased Agents, Unreliable Agents

The biased referee or judge is disposed to depart from a norm or standard of correctness: in a range of possible cases, they will judge in a way that differs from the way in which they should have. This is a feature that they share with unbiased but incompetent referees and judges who are scrupulously impartial but nevertheless still frequently arrive at the wrong judgments. We have then three agents:

(i) the unbiased and reliable agent, who competently adheres to the relevant norm;

(ii) the unbiased and unreliable agent, who regularly departs from the norm because of their incompetence, but who escapes the charge of bias because of the unsystematic character of their departures; and

(iii) the biased and unreliable agent, who regularly departs from the norm in a systematic way.
I note in passing that (i)–(iii) don't exhaust the possibilities, for there is also the reliable but biased agent:

RELIABLE THOUGH BIASED: A person is unusually good at complying with a norm that it's difficult to consistently comply with. (Perhaps she successfully complies with it 95 percent of the time, while no one else successfully complies with it even 90 percent of the time.) She's thus significantly more reliable than others who count as reliable given the contextually relevant standards for reliability, and therefore counts as reliable herself. However, whenever she does fail to comply with the norm, she invariably misses by overshooting as opposed to undershooting (or vice versa), or in some other way that's patterned, predictable,19 and systematic. In contrast, when her less reliable peers depart from the norm, they depart randomly as opposed to systematically. She thus counts as biased, notwithstanding her impressive reliability.
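A quick simulation (with invented compliance rates and error magnitudes) illustrates how the two dimensions come apart in just the way the case describes: the first agent is the more reliable of the two, yet her rare misses all point in one direction, while the second agent's more frequent misses are directionless:

```python
import random

def simulate(compliance_rate: float, systematic: bool, trials: int = 10_000):
    """Return (observed compliance rate, mean signed error on misses)."""
    errors = []
    for _ in range(trials):
        if random.random() >= compliance_rate:               # a miss
            if systematic:
                errors.append(+1.0)                          # always overshoots
            else:
                errors.append(random.choice([-1.0, +1.0]))   # directionless
    observed = 1 - len(errors) / trials
    mean_error = sum(errors) / len(errors) if errors else 0.0
    return observed, mean_error

print(simulate(0.95, systematic=True))   # ~(0.95, +1.0): more reliable, yet biased
print(simulate(0.88, systematic=False))  # ~(0.88, ~0.0): less reliable, unbiased
```

Reliability tracks how often the norm is met; bias tracks the pattern in the misses. The two measures vary independently.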
But let's set aside the case of the biased and reliable agent for now,20 in order to focus on the contrasts between agents (i)–(iii). Suppose that we're offered a choice among the three. Whom do we choose? When the question is posed at that level of abstraction, it has no generally true answer: we might prefer any one of the three to the other two, depending on the context and on our interests.

Often, our clear first choice will be for the agent who is both unbiased and reliable. When, years after George Washington's death, Thomas Jefferson (1814) praised him for what Jefferson took to be his lack of bias, we can be sure that it wasn't a mere lack of bias that so impressed Jefferson, and that his praise would have been much less effusive if Washington's lack of bias had been a matter of unsystematic unreliability. And in the usual case, we too will share Jefferson's preference for the unbiased and reliable agent over the alternatives. Of course, the usual case isn't the only case: in cases like MORAL REFEREE and IMMORAL REFEREE considered in Chapter 3, we would and should strongly prefer the biased referee to the unbiased and reliable referee, given that innocent lives are at stake depending on the outcome of the game.

Even when an unbiased and reliable agent would be best of all, it makes sense to ask: if an
agent does regularly depart from the relevant norm or standard, is it better for her to do so systematically, in a way that's characteristic of a biased agent, or unsystematically, in a way that's characteristic of the unbiased agent? In a context in which we'd strongly prefer an unbiased and reliable agent but that option is for one reason or another unavailable, whom do we prefer in a pairwise comparison, the biased agent or the unbiased though incompetent agent? Once again, there is no general answer to this question: either might be preferable to the other, depending on the circumstances. In a context in which concerns about fairness are paramount—for example, in the context of a rule-governed competition, or one in which we're concerned with the allocation of some scarce good—we might have strong reasons to prefer the unbiased though unreliable agent to one who is biased. But in a context in which we're particularly concerned with the predictability of the agent, we might have good reasons to prefer the biased agent to the unbiased one, whose errors are random and therefore unpredictable.

Let's first examine a paradigmatic context in which the agent who is both unbiased and unreliable is preferable to the agent who is biased and unreliable. Consider again the fundamental distinction between random error and systematic error. In statistics and elsewhere, random error is treated as error that's due to chance alone. Here is a representative characterization from the American Psychological Association's Dictionary of Psychology:

Random errors occur arbitrarily when unknown or uncontrolled factors affect the variable being measured or the process of measurement. Such errors are generally assumed to form a normal distribution around a true score, and thus, to cancel out over the long run.21
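The "cancel out over the long run" clause can be checked directly with a toy simulation (the error rate and the scoring of errors as ±1 advantage units are invented for illustration): the unbiased referee's errors wash out, while the biased referee's errors accumulate steadily in one team's favor.

```python
import random

def long_run_net_effect(systematic: bool, calls: int = 10_000) -> float:
    """Net advantage conferred on Team A by a referee's errors.
    +1 = an error favoring Team A, -1 = an error favoring Team B."""
    net = 0
    for _ in range(calls):
        if random.random() < 0.1:                  # 10% of calls are errors
            if systematic:
                net += 1                           # biased: errors always favor A
            else:
                net += random.choice([-1, 1])      # unbiased: direction is chance
    return net / calls

print("Unbiased referee net effect:", long_run_net_effect(systematic=False))  # ~0.0
print("Biased referee net effect:  ", long_run_net_effect(systematic=True))   # ~0.1
```

Both referees err equally often, so they are equally unreliable; only the biased referee's errors fail to cancel.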
Consider the implications of the second sentence of the entry for the example of the referee. To the extent that an incompetent referee is genuinely unbiased, his frequent errors will, over the long run, tend to cancel each other out: although he sometimes errs in a way that advantages Team A over Team B, on other occasions he errs in a way that advantages Team B over Team A. More generally, in the context of a basketball game, we’ll typically have reason to prefer an unbiased although incompetent referee to a referee who’s biased in favor of one of the two teams: although the former’s incompetence might very well detract from the quality of the game and the experience of the participants and spectators in various ways, the very lack of systematicity of those errors means that the competition isn’t unfair to either of the two teams, in anything like the way it would be if it were officiated by a biased referee. Indeed, given the importance that we attach to fairness in the context of a competitive contest, we might have good reason to prefer the unreliable but unbiased referee to the biased one, even if the former’s incompetence is so great that the biased referee is significantly more reliable.

On the other hand, there are also cases in which the biased agent is preferable, for there are circumstances in which we positively value the predictability of his errors, in contrast to the unpredictability of the random errors committed by the unbiased agent. Indeed, competitive sports provides paradigmatic examples of this phenomenon as well. For example, a basketball referee might either be biased in favor of overly physical play (plays
that should have drawn foul calls do not) or alternatively, biased against physical play (plays that shouldn’t have drawn foul calls do). Either bias will advantage certain players and teams and disadvantage others, depending on their preferred playing styles. Nevertheless, inasmuch as it’s a genuine bias, it will have a certain consistency over time, and this allows for the formation of rational expectations on the part of both players and coaches, who can adjust their playing styles and strategies accordingly. Thus, it’s often said that what players most care about is that a game is called consistently; and what really sparks outrage is when a game is decided by a foul call in the final moments on a play that’s relevantly similar to previous plays that were not called fouls.
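To make the contrast between random and systematic error vivid, here is a minimal simulation sketch (my illustration rather than anything from the text; the call counts and error rates are invented). Zero-mean random errors tend to cancel over a long run of close calls, while even a small systematic tilt accumulates into a substantial net advantage for one team:

    import random

    random.seed(0)
    N_CALLS = 500  # number of erroneous close calls over many games (invented)

    def net_advantage_to_a(tilt):
        """Sum the per-call errors: +1 favors Team A, -1 favors Team B.
        'tilt' shifts the probability that a given error favors Team A."""
        return sum(1 if random.random() < 0.5 + tilt else -1
                   for _ in range(N_CALLS))

    print("unbiased referee:", net_advantage_to_a(0.0))  # hovers near 0: errors cancel
    print("biased referee:  ", net_advantage_to_a(0.1))  # drifts toward +2 * tilt * N_CALLS

On a typical run the unbiased referee’s errors nearly cancel, while the biased referee hands Team A a net advantage on the order of a hundred calls: the same error rate, but a very different distribution of its effects.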
5. Overcompensation

It’s a significant fact about bias that one common way in which a person can end up biased is by trying not to be. Consider, for example, the phenomenon of overcompensation for some perceived bias or possible bias. In a typical case of overcompensation, a person is aware that they are biased in favor of X; or else they are concerned about the possibility that they might be biased in favor of X; or at least, they are concerned to avoid the appearance of being biased in favor of X. Because of this, they bend over backwards in order not to exhibit bias in favor of X. But they overshoot and end up treating X less favorably than they would if they were unbiased.

In addition to more serious contexts, the phenomenon is a familiar one in the context of little league sports, in which parents often coach teams on which their own children play. Although some coaches are straightforwardly biased in favor of their own kids, a phenomenon that’s perhaps equally common is that of a coach who is so concerned not to (appear to) favor his own child that he bends over backwards in such a way that he ends up biased against his child (when it comes to allocating the most desirable positions, or determining who gets to start the game on the field as opposed to on the bench, and so on).

Consider also debates about media bias in the United States. It’s long been known that, with respect to their personal political views, journalists working for mainstream publications in the United States are significantly left-of-center compared to the general population of the country. A longstanding and frequent charge among conservative critics of the media is that this leads, perhaps in conjunction with other factors, to coverage of the news that’s biased in a left-of-center direction. Of course, members of the media are well aware of this critique, and they are often concerned to defend against it, or at least, not to give their critics ammunition by providing apparent examples of left-wing bias. Some on the left contend that this situation leads members of the media to lean too far in the other direction, and to introduce a kind of “false evenhandedness” in their coverage of the news, in which conservative views and political actors are given a relatively free pass compared to what they would otherwise enjoy. In effect, the claim is that fear of being (or accused of being) biased in a left-of-center direction leads to a right-of-center bias.22

Setting aside the primarily empirical question of which political biases are characteristic of the American mainstream media, this general structural possibility—that of being biased
as a result of overcompensating for some contrary bias—is clearly an important one. In terms of the norm-theoretic account of bias, the general phenomenon of interest can be understood as follows: an agent acquires a disposition to depart from a norm in a certain direction out of a concern that they have departed, or will depart, from that norm in the opposite direction.

Consider a concept familiar to sports fans, that of the make-up call. A referee makes an incorrect call; he is immediately criticized by the players and coaches of the aggrieved team and (particularly if the call was at the expense of the home team) booed lustily and at length by the fans. The next time down the court or field, the referee makes an at least somewhat questionable call in favor of the aggrieved team. He is immediately suspected of having made “a make-up call”: one that’s influenced, whether consciously or unconsciously, by a desire to make up for or balance out the previous call with respect to its potential impact on the outcome of the game. While a perfectly unbiased and objective referee would judge each play solely by what actually happens on that play, without regard to their calls on previous plays, a referee who issues a make-up call allows his knowledge of past calls, and his desire to avoid (the appearance of) bias in favor of one of the two teams, to bias his judgment against that team on the next play.

It’s an interesting empirical question how often genuine make-up calls actually occur, as opposed to how often they are thought to occur by fans, players, and coaches. Notice that, even if referees were perfectly objective in calling each play, and gave no weight at all to their previous calls, there would in the natural course of things be many sequences in which a borderline call that favors one of the two teams is followed in relatively short order by a borderline call favoring the opposing team. Given this, it wouldn’t be surprising if there were a tendency to overestimate how often genuine make-up calls occur, and to suspect, incorrectly, that even innocent cases that instantiate the relevant sequence involve make-up calls.23 However, and consistent with that possibility, there is significant empirical evidence that the make-up call is a genuine phenomenon.24 In any case, regardless of its reality, the make-up call is certainly believed to be a significant phenomenon by fans, players, and coaches, to the point of actually influencing game strategy. (It’s not unheard of for basketball coaches, in the immediate aftermath of a controversial call that went against their team, to instruct their players to take the ball aggressively to the basket in order to draw contact and force the referee to call a foul, on the theory that they’re due for a make-up call, and that the referee will be reluctant to decide back-to-back controversial calls against the same team.)

The general phenomenon of philosophical and psychological interest that’s exemplified by the make-up call occurs frequently outside the context of competitive sports. For example, it occurs whenever a parent or other authority figure suspects that they’ve treated misbehavior too leniently in the past, and as a result ends up treating later misbehavior too severely, or vice versa. It’s also exemplified whenever a teacher compromises her usual standards for grades in an attempt to compensate for perceived deviations from those standards in the opposite direction in the past.
Nevertheless, because the case of the referee and the make-up call provides a particularly clean and clear illustration of the phenomenon of interest, let us continue to work with it.
Consider an extreme case:

EGREGIOUS MAKE-UP CALL: Immediately after making a crucial call that favors Team A over Team B, the referee realizes that his call was incorrect, and he’s criticized by Team B’s players, coaches, and fans. Concerned that his call will unduly influence the outcome of the game, the next time down the court he makes a call that favors Team B, a call that he wouldn’t have made otherwise, and which is objectively incorrect given what occurred on that play. (The psychological influence of his concern might be either conscious or unconscious.) In fact, not only is the second call incorrect, but it’s clearly incorrect, and recognized as such by observers. Given the egregiousness of the second call and its proximity to the first, at least some observers are in a position to recognize it as a make-up call.
Question: In these circumstances, is the fact that the referee issued a make-up call evidence that he’s biased or evidence that he’s unbiased? Offhand, it might seem that it’s evidence of both. After all, it’s characteristic of the unbiased referee to “call each play as they see it,” without regard to any of their past decisions, and this the referee manifestly fails to do. On the other hand, the fact that he offers a make-up call provides some evidence against the possibility that he’s biased in favor of Team A, something of which he might very well have been suspected, and even reasonably suspected, if in fact his first missed call in favor of Team A was itself sufficiently egregious.

Here paradox seems to threaten, since one and the same piece of evidence cannot confirm both the claim that the referee is biased and the claim that he’s unbiased.25 As is often the case elsewhere, the appearance of paradox dissipates once finer distinctions are drawn.

Consider first the question of whether the referee’s second call, the make-up call, is biased. Here the answer is clear: Yes, it’s a biased call. Indeed, the call counts as biased even if we alter the case so that it’s objectively correct, so long as we hold fixed the process by which it was arrived at. Regardless of its correctness, the call counts as biased in virtue of being produced by a biased process, a process that was decisively influenced (whether consciously or unconsciously) by the referee’s desire to make a call that was favorable to Team B.

Consider next the question of whether the referee himself is biased. In asking this question, one thing that we might be asking is whether the referee is biased in favor of one of the two teams. And this question admits of further precisification. In asking it, one thing that we might be asking is whether the referee is biased in favor of Team B at the time he issued the make-up call, or during some interval of time which culminates in his making the make-up call (e.g., the interval of time which is bounded at one end by the moment he realized that he made a mistake in favor of Team A and in response developed a disposition to favor Team B, and on the other end by the make-up call itself). Here again, the answer is clearly Yes. During that interval of time, he’s disposed to depart from the relevant norm or standard of correctness in the direction of Team B.

On the other hand, we might be asking whether the referee is biased in favor of Team B during the game as a whole. And the answer to this question isn’t settled by the fact that he’s biased in favor of Team B during some part or parts of the game. (Compare the discussion of parts and wholes in Chapter 1.) Indeed, in a given case, it’s possible that an important aspect of a referee’s being unbiased between the teams for the game as a whole consists in the way that he awards make-up calls. Suppose, for example, that whenever he takes himself to have
made an honest mistake in favor of Team A, he develops a disposition to award the next close call to Team B, and that whenever he takes himself to have made an honest mistake in favor of Team B, he develops a disposition to award the next close call to Team A. The referee is thus biased in favor of Team A during some parts of the game and biased in favor of Team B during other parts. (Contrast a case in which things are asymmetrical: the referee is disposed to award make-up calls to the home team but not to the visiting team, perhaps out of a conscious or unconscious desire to placate the home team’s fans, at least when he suspects that their outrage is justified.)

Imagine a referee who is even-handed in awarding make-up calls to the two teams, in the way described above, and compare him to an unbiased referee who calls each play as she sees it, without regard to earlier events. It’s consistent with the stipulations that both referees are perfectly unbiased between Team A and Team B over the course of the game as a whole. Nevertheless, there is another respect in which the first referee is biased while the second is not. Although neither referee is biased in favor of either Team A or Team B over the course of the game as a whole, the first referee has a bias that the other lacks, namely, a bias in favor of whichever team he takes himself to have incorrectly disadvantaged most recently. This too is a type of bias, and one that he’s disposed to manifest throughout the game as a whole.

There are then many different potential biases in the vicinity in a case like EGREGIOUS MAKE-UP CALL. When a referee issues a make-up call, the fact that he does so might be evidence that he’s biased in some of these ways but not in others. But in no case will it be evidence that confirms both the hypothesis that he’s biased in such-and-such a way at a certain time, and the hypothesis that he’s unbiased in that very same way, and at that very same time.

The same points hold more generally. By acting in a certain way on a particular occasion, an agent might simultaneously provide us with evidence that he’s biased in some ways but unbiased in others. On the norm-theoretic account, this is because his action might simultaneously provide evidence that he’s disposed to systematically depart from a salient norm in such-and-such ways, while also providing us with evidence that he lacks other such dispositions. Moreover, it’s not simply that one and the same event can provide us with evidence that he’s disposed to systematically depart from one norm but not from another norm, a phenomenon that we explored in some detail in Chapter 3. Rather, the norm relative to which he counts as biased in one respect might be the same norm relative to which he counts as unbiased in another respect, inasmuch as he might be disposed to systematically depart from that norm in some ways but not in other ways that are also salient in the context. (As in the case of the referee who is unbiased between Team A and Team B, but who displays a consistent bias in favor of whichever-team-was-victimized-by-his-most-recent-honest-mistake, a bias that in practice sometimes favors Team A and sometimes favors Team B in more or less equal measure.) In such a case, the fact that the agent counts as biased in some ways but unbiased in others might be grounded in the same norm, and the agent’s dispositions with respect to it.
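The structure of this referee’s dispositions can be rendered schematically. In the following sketch (my toy model, with all probabilities invented), the referee’s tilt on a close call depends on which team, if any, he believes his most recent honest mistake disadvantaged; over a long game his calls favor neither team overall, even though each “owing” state is a local, systematic bias:

    import random

    random.seed(1)

    def close_call(owed):
        """Decide a close call. 'owed' is the team (if any) that the referee
        believes his most recent honest mistake disadvantaged."""
        p_team_a = 0.5 if owed is None else (0.75 if owed == "A" else 0.25)
        return "A" if random.random() < p_team_a else "B"

    owed = None
    calls_favoring = {"A": 0, "B": 0}
    for _ in range(100_000):
        team = close_call(owed)
        calls_favoring[team] += 1
        # occasionally the referee judges his own call mistaken and becomes
        # disposed to tilt the next close call toward the team it went against
        owed = ("B" if team == "A" else "A") if random.random() < 0.2 else None

    print(calls_favoring)  # roughly 50/50 over the whole game

Over the game as a whole he is unbiased between Team A and Team B, yet at many individual moments he carries a genuine, systematic tilt toward the most recently wronged team.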
An agent might be genuinely and actively concerned to be objective and unbiased, but his efforts to achieve that goal might misfire in a counterproductive way. On the other hand, an agent might successfully pursue the goal of objectivity by intentionally neutralizing or
compensating for biases that otherwise would have operated (while not overcompensating for them). Indeed, some norms are specifically concerned with objectivity itself, for example by offering instructions about how to pursue and achieve it. The next chapter explores this special class of norms.
1 I’ll use the words “tendency” and “disposition” interchangeably in what follows. In the American Psychological Association’s online Dictionary of Psychology, the specific biases listed there are generally defined as tendencies, as when confirmation bias is defined as “the tendency to gather evidence that confirms preexisting expectations…” https://dictionary.apa.org/confirmation-bias. I intend my usage to align with the one employed there. Compare Johnson (2020:31), according to whom “bias is fundamentally a dispositional phenomenon.” Schwitzgebel (2010), Machery (2016), and Welpinghus (2020) endorse broadly dispositionalist accounts of implicit bias. On racial bias as a disposition, see Appiah (1990). 2 He need not, of course, make every call, or even every close call, in favor of Team A. After all, doing either of those things might alert observers to the fact that things aren’t on the up-and-up, something that would greatly displease the mob. The disposition to favor Team A might fail to manifest itself in other ways as well—for example, at a given moment, the referee’s natural instinct to “call them as he sees them” might take over and cause him to award a close and crucial call to Team B. All of this, I take it, is consistent with the referee’s being disposed to favor Team A over Team B, and therefore with his being biased in their favor. 3 For discussion and further references, see Nebel (2015) and Bostrom and Ord (2006, especially n. 9). Interestingly, although the orthodox view about each of these allegedly more fundamental tendencies is that it’s normatively inappropriate, in each case orthodoxy has been contested: Buchak (2017) defends the rationality of risk aversion; Kelly (2004) defends the rationality of giving weight to sunk costs; Cohen (2012) defends the rationality of an essentially conservative preference in favor of valuable things that already exist over equally or more valuable things that don’t yet exist; and something in the neighborhood of “omission bias” would seem to be entailed by a natural interpretation of the putative norm, “First, do no harm.” In part for this reason, it’s also at least somewhat controversial whether “status quo bias” is generally normatively inappropriate, or a bias in the pejorative sense. Nebel (2015) defends the view that status quo bias is sometimes normatively appropriate. 4 For warnings and protests against a “one-size-fits-all-model” of implicit bias, see Holroyd and Sweetman (2016) and Welpinghus (2020:1618–20). (The phrase “one-size-fits-all-model” is from Welpinghus.) 5 For discussion of the connections between implicit bias and moral responsibility, see, among others, Brownstein (2015), Faucher (2016), Glasgow (2016), Holroyd (2012, 2017), Holroyd and Kelly (2016), Levy (2012, 2016), Madva (2018), Mason (2018), Vargas (2017), Washington and Kelly (2016), Rosen (2022), and Zheng (2016). 6 To be clear, I don’t mean to suggest that any of the authors listed above are confused on the point, although given the view offered here the distinction in question bears explicit emphasis. The phenomenon of multiple realizability was central to Putnam’s (1967) influential early critique of the then dominant mind–brain identity theory at the outset of the rise of functionalism in the philosophy of mind. Just as the phenomenon of multiple realizability was taken to favor broadly functionalist or dispositionalist answers to the mind–body problem, so too one might see it as favoring broadly functionalist or dispositionalist accounts of bias. 
For functionalism about bias, see especially Johnson (2020). For dispositionalism about implicit bias, see especially Machery (2016). On the general theme of multiple realizability, see the survey pieces by Bickle (2020), Funkhouser (2007), and Jaworski (2020) and the further references provided there. 7 On the gradability of dispositions, see especially Manley and Wasserman (2007). As they point out, a good deal that’s said about dispositions in the traditional philosophical literature fits poorly with the fact that they’re gradable. 8 On these points, see especially David Lewis’ (1983) classic discussion of “flat”. Lewis is responding to Unger (1975), who develops and defends a view on which there would be no flat surfaces or unbiased coins. Interestingly, a group of Stanford mathematicians has recently offered a putative proof to the effect that even coins that are perfectly unbiased
between heads and tails will, when vigorously flipped, exhibit a slight bias in favor of coming up the same way they started, given the laws of physics. (According to the theorem, the chance of coming up the same way is about .51. See Diaconis, Holmes, and Montgomery (2007).) 9 While I believe that this last possibility is enough to secure the point at issue in Chapter 1, namely that there can be biased wholes all of whose parts are unbiased, it also makes salient another question: could there be biased wholes all of whose parts are perfectly unbiased, that is, parts that don’t depart at all from the relevant ideal or Platonic form? Again, I think the answer is Yes. Here is a case to think about. Suppose, as many believe and some have argued (e.g. Kelly 2013), that not every possible body of evidence determines a uniquely reasonable doxastic response: there are “permissive cases” in which multiple incompatible doxastic responses to a given body of evidence might each be perfectly reasonable. Next, consider two groups, each of whose members are perfectly unbiased (however exactly that notion is understood) and also perfectly reasonable, in the sense that everything that each member believes is a perfectly reasonable thing for them to believe given the evidence: no changes in what anyone believes would amount to an improvement with respect to their rationality. However, in the first group, the beliefs of the members in “permissive cases” happen to divide more or less evenly among the permissible options; in this respect, their opinions are relatively diverse. In contrast, in the second group, the opinions of the members just so happen to cluster in permissive cases. On any plausible way of aggregating individual opinions into a group opinion, this can lead to differences between the two groups with respect to their group opinion. And these differences in group opinion can make a difference to which potential policies and courses of action are considered live options for the group; in particular, some possibilities that are live options for the first group might not be live options for the second, because of the greater unanimity of the latter. In this kind of case, it seems at least somewhat natural to describe the second group as biased against those options or choices; after all, another group with the same evidence (and utilities, etc.) might have rationally chosen those options, as is witnessed by the first group, yet those options won’t be considered by the second group. Nevertheless, every member of both groups seems impeccable when judged as an individual. 10 On thick concepts, see especially the useful overviews by Väyrynen (2019) and Roberts (2013), and the further references provided there. 11 When we describe a person as courageous, we attribute to them a certain disposition as opposed to a paradigmatically observable property. Nevertheless, as the example makes clear, this is no obstacle to our empirically confirming or disconfirming the claim that they are courageous in relatively straightforward ways, at least in some cases. The same holds for many paradigmatic attributions of bias to people, which I’ve suggested should also be understood as attributions of dispositions. 12 Compare the definition of hindsight bias offered by the American Psychological Association’s dictionary of psychology: https://dictionary.apa.org/hindsight-bias. 13 Compare again the APA’s definition: https://dictionary.apa.org/anchoring-bias/.
14 Compare the definitions offered by Grimes (2012:355), Mladina and Grant (2016:1), and Swedroe (2019), among others. 15 In line with what was argued in Chapter 3, §2, such disputes can take different forms. For example, it might be a dispute about whether an agent’s behavior actually fell short of some undisputed norm; or it might be a dispute about whether the putative norm to which the agent failed to conform is actually a genuine norm. 16 On the general phenomenon of alienated belief, see especially Hunter (2011). On the possibility of racism in the absence of racist behavior, see, e.g. Arthur (2007:17) and Glasgow (2009:66–7). 17 In the context of arguing against cognitivist accounts of racism, Jorge Garcia has, in a series of papers, appealed to the putative possibility of a racist who simply hates black people in the absence of holding any biased beliefs about them. See Garcia (1997a:13), (1997b:49), (1999:4, 10), (2001b:135–6). 18 The heterogeneity of implicit bias is an important theme in the work of Jules Holroyd (Holroyd 2016; Holroyd and Sweetman 2016), who argues that recognizing this heterogeneity is necessary for understanding the different strategies that might be needed to respond to implicit bias. The sense of heterogeneity at issue there, however, seems different from the one at issue here. Holroyd and Sweetman insist that different implicit biases might involve different sorts of associations, or different structures. I think that this is correct, and important. The thesis of RADICAL HETEROGENEITY goes beyond that, however, in holding that what we would ordinarily count as the same bias might be had by people who share none of the features listed above. In addition to the kinds of mixed or impure cases at issue here, in which there is a disconnect between an individual’s beliefs and their behavior and/or their affective states, there are also impure cases within each of the three categories. (As when an individual’s beliefs or actions systematically deviate from a norm in one type of context but not in another.) Impure
cases of the latter sort have received extensive attention in the implicit bias literature. See, e.g., Schwitzgebel (2010) and many of the references cited above. 19 Given her high rate of compliance, it might never be predictable in advance, for any particular occasion on which the agent deviates, that she would deviate from the norm as opposed to comply with it. Rather, what’s predictable is the conditional claim that if this is one of those rare occasions in which she fails to comply, then she’ll fail by overshooting as opposed to undershooting. 20 In addition to the agent who is reliable despite being biased, another apparent possibility is that of an agent who is reliable because he’s biased, as in a case in which an agent’s bias fortuitously dovetails with his environment in a way that makes for reliability. For a discussion of this apparent possibility and what a proponent of the norm-theoretic account might say about it, see Chapter 9. 21 https://dictionary.apa.org/random-error. But for some important limitations on this general idea, see Kahneman, Sibony, and Sunstein (2021). 22 Although I’m unsure where this style of critique first originated, a particularly influential purveyor of it over the years has been the Nobel Laureate Paul Krugman, in his regular New York Times op-ed columns. For example, the following passage, from a column published on the eve of the 2000 presidential election, is representative: [T]he mainstream media are fanatically determined to seem evenhanded. One of the great jokes of American politics is the insistence by conservatives that the media have a liberal bias. The truth is that reporters have failed to call Mr. Bush to account on even the most outrageous misstatements, presumably for fear they might be accused of partisanship. If a presidential candidate were to declare that the earth is flat, you would be sure to see a news analysis under the headline: “Shape of the Planet: Both Sides Have a Point” (2000/2020:378). Although to my knowledge this style of critique first became common during the Bush presidency, it intensified significantly during the Trump years. Moreover, there is some evidence that it actually affected the norms of journalism, whether for better or for worse. For example, partially as a response to this sort of critique, during the Trump years the New York Times and other newspapers routinely began to run headlines of the form “Trump, Without Evidence, Claims X.” In the past, headlines of the same form were apparently considered out of bounds, judging by the fact that they never appeared, even when they would have been accurate. While it’s certainly true that Trump regularly made claims in the absence of evidence, it’s also true that earlier American presidents had been known, on occasion (!), to make such claims, including in contexts in which such claims were extremely important, e.g. claims directly relevant to questions about whether the United States should go to war or undertake some new military campaign. Is the relevant difference that Trump made claims without evidence more frequently than past American presidents? But why would such a difference bear on the general acceptability of such headlines, as opposed to warranting a policy of e.g. explicitly noting the absence of evidence, in any case of reporting on a significant claim made by an American president, regardless of the identity of the president at that time? 
23 This is especially true given the well-documented fact that people tend to be highly unreliable in their judgments about what genuinely random sequences look like and therefore too quick in claiming to find patterns in such sequences, patterns that are actually spurious. For an overview of the phenomenon and further references, see Gilovich (1991:Ch. 2). 24 See especially Gift (2015). Interestingly, the main professional basketball league, the National Basketball Association (NBA), seems to have changed its attitude toward make-up calls over time. According to its own self-commissioned study (Pedowitz 2008), prior to 2003, the NBA tolerated and did not actively discourage make-up calls by its referees, while after that, it did actively discourage such calls. This coincided with the rise of the idea that refereeing should be less of “an art” and more of “a science.” (See “‘Old’ vs. ‘New’ Refereeing Philosophies,” 42–4 of the aforementioned study.) On moral aspects of the make-up call, see Hamilton (2011). 25 Notice that in the general case, it’s perfectly possible for a single piece of evidence to confirm both the hypothesis that “S is biased in such-and-such a way” and the rival, competing hypothesis that “S is unbiased in such-and-such a way,” for the following reason. As noted at the outset of Chapter 1, “biased” and “unbiased” are contraries, not contradictories: something might be neither biased nor unbiased. Thus although the hypotheses that “S is biased in such-and-such a way” and “S is unbiased in such-and-such a way” are incompatible with one another, they don’t exhaust logical space. And as is now generally understood, a single piece of evidence can simultaneously confirm rival and incompatible hypotheses, so long as there are still other possibilities that haven’t already been eliminated by previous evidence. (Example: even if we know that some murder was committed by a single person acting alone, that is, that there is a unique murderer, learning that the murderer is left-handed might confirm both the hypothesis that Mr. X committed the murder and the rival hypothesis that Mrs. Y committed the murder, if in fact both Mr. X and Mrs. Y are known to be left-handed, and there are other, not
previously eliminated suspects who are right-handed.) Thus, as a matter of structural possibility, there can be cases in which the same piece of evidence confirms both the claim that S is biased and also the claim that S is unbiased, because it tells against the claim that S is neither biased nor unbiased. (Suppose, for example, that it was previously very much an open question whether S is even the kind of thing that might be biased, and then one gets evidence for this, e.g., information to the effect that S is often evaluated with respect to its bias or lack thereof.) However, given the background assumption that the referee in EGREGIOUS MAKE-UP CALL is either biased or unbiased (these exhaust the possibilities) this is clearly not what’s going on, so some other solution or dissolution to the puzzle is needed.
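For concreteness, here is a minimal worked version of the murderer example from note 25, with invented numbers: three equally likely suspects, two of whom are left-handed. The evidence raises the probability of both rival hypotheses at once, precisely because it eliminates a third possibility:

    # Priors over three mutually exclusive suspects (numbers invented).
    prior = {"Mr. X": 1/3, "Mrs. Y": 1/3, "third suspect": 1/3}
    # Likelihood of the evidence "the murderer is left-handed" on each hypothesis:
    # X and Y are left-handed; the third suspect is right-handed.
    likelihood = {"Mr. X": 1.0, "Mrs. Y": 1.0, "third suspect": 0.0}

    total = sum(prior[s] * likelihood[s] for s in prior)
    posterior = {s: prior[s] * likelihood[s] / total for s in prior}
    print(posterior)  # Mr. X: 1/3 -> 1/2, Mrs. Y: 1/3 -> 1/2, third suspect: 1/3 -> 0

Both incompatible hypotheses are confirmed by the same piece of evidence, exactly as the note describes; the trick is the not-yet-eliminated third possibility.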
6 Norms of Objectivity

1. Some Varieties

Where does bias come from? How does it get into the world, assuming it’s not some kind of fundamental constituent of reality that was there all along? According to the norm-theoretic account of bias, the phenomenon of bias arises out of systematic departures from norms that are not themselves about bias. In the usual case, the norm that the biased agent violates won’t itself be concerned with bias, or objectivity, or any closely related notion. For example, suppose that the norm governing rational action is that of maximizing expected value. In that case, an agent might exhibit status quo bias by violating that norm in a specific way, namely by choosing options that preserve the status quo over alternative options that have higher expected value but which fail to preserve it. However, the norm of maximizing expected value says nothing about bias, and it might be violated by perfectly unbiased agents as well. Indeed, an agent might count as biased in virtue of the way in which she departs from the ideal of maximizing expected value, even though her departures from that ideal are much less severe than those of another agent who is perfectly unbiased. Similarly, if I consistently overestimate the chances that my favorite team will win its game each week, then I manifest a bias, but the norm that I violate, that of proportioning my beliefs to the available evidence, isn’t itself concerned with bias, or with objectivity, or with any other closely related notion.

In the first instance, the phenomenon of bias arises from the manner in which we fail to comply with norms that aren’t about bias. However, some norms are specifically concerned with bias, in one way or another. Let’s call these norms of objectivity. Here are some plausible1 examples:

• “Research papers submitted for publication in academic journals should be evaluated by referees who don’t know the identity of their authors.”
• “When interviewing multiple job candidates for the same position, make sure that the questions asked of the candidates are standardized.”
• “Judges should recuse themselves from cases in which they have a personal stake or a conflict of interest.”
• “In putting together a committee that will determine or recommend policies for the group as a whole, make sure that the committee membership is inclusive and representative of the group as a whole.”
• “In considering whether a junior professor should receive tenure, don’t rely on a letter from their dissertation advisor for an outside evaluation of the quality of their work.”
• “When coaching a youth sports team, don’t give your own child more playing time than you give to the other children.”
• “Judge people by the content of their character, not by the color of their skin.”
• “In accepting or rejecting a grant proposal/a paper submitted for publication/issuing a verdict for or against the plaintiff, state the reasons for your decision.”
• “Don’t call on male students more often than female students.”
• “In reporting the news, political reporters shouldn’t let their personal political opinions influence their coverage.”
• “A scientist’s/historian’s values shouldn’t influence the conclusions that they reach in their research.”
• “In discussing controversial issues in class, teachers should make sure that students are aware of the strongest arguments on both sides of the issue.”
• “Don’t make a judgment about who (if anyone) is at fault for the breakup of a marriage before getting both sides of the story.”
• “Don’t believe something just because you want it to be true.”
• “Make your decisions in an unbiased manner.”
• “Don’t be biased.”
As the list illustrates, norms of objectivity, like norms more generally, vary greatly with respect to their level of generality, or in the range of circumstances in which they’re applicable. Some apply only in quite specific situations—for example, in the context of a job interview, or the evaluation of a tenure case, or a classroom discussion. Others apply much more widely, as in the case of a norm calling for unbiased decision-making. Relatedly, some norms of objectivity attach to particular social roles—for example, to teachers, referees, coaches of little league teams, or to professional historians. Others apply to people more generally, in our capacities as agents and believers. Some norms of objectivity might apply not to people but to institutions, at least in the first instance. Some norms of objectivity that apply quite generally might take on a particular importance in certain contexts, or for individuals in certain social roles. Consider a putative norm of testimony to the effect that a testifier who is offering an account of an actual historical event should offer an unbiased account. We can imagine two different contexts in which this putative norm is violated. The first context is one in which a professional historian writes a monograph that purports to be the definitive account of the event with which it’s concerned. The second context is one in which a proud grandparent relates the latest exploits of his grandchild to the other grandparents at the senior center, where it’s tacitly understood that such stories tend to be exaggerated (and the other grandparents will similarly exaggerate their grandchildren’s exploits when it’s their turn, and so on). The historian violates a significant professional norm and is appropriately subject to serious criticism, perhaps even
censure, for doing so. Indeed, to the extent that the failing is characteristic of his work, he’s a bad historian. Nothing similar is true of the grandparent. Nevertheless, the fact that the grandparent deviates from the relevant standard in the way that he does makes it appropriate to describe him as biased in favor of his grandchild, and the account of his grandchild’s exploits that he offers is a biased account, not an unbiased one.2

Next, some taxonomy. Among the more general category of norms of objectivity, we can distinguish at least the following three more specific categories: preventative norms, or norms of preemption; ameliorative norms, or norms of remediation; and constitutive norms of objectivity. Let’s consider each in turn.

The purpose of a norm of preemption is to prevent some salient possible bias from operating. Given our background knowledge or theory, we believe that we’re in a situation in which bias is likely to be a problem in the absence of an active intervention on our part, and so we intervene in order to stop the bias from operating, in the way that it would or might in the natural course of things. A norm that plays a role in governing such interventions, or that’s generated by the reasons that we have to intervene efficiently in such cases, is a norm of preemption.

Among norms of preemption, an important species consists of norms of blinding. When scientists purposefully design and execute blind experiments, or college teachers grade their students’ essays in deliberate ignorance of which student wrote which paper, they are acting on norms of this kind. Another important species falling under this genus is norms of recusal. When a judge deliberately steps aside and turns a case over to a colleague because she’s related to one of the parties before the court, or a person abstains from participating in a job search because a relative or former romantic partner has applied for the job, they are following norms of this sort.

Some norms of public reason might also function as norms of preemption.3 Consider norms that require people to state their reasons for a decision, policy, or judgment. If it’s perfectly acceptable for the referee of a paper submitted for publication to recommend acceptance or rejection without offering any reasons for their recommendation, that has the effect of making it maximally easy for a referee who is biased against the paper to recommend rejection, and for a referee who is biased in its favor to recommend acceptance. However, if the referee is required to justify or provide respectable reasons for their decision, and the referee knows this in advance, then that might provide at least some check on such bias. Of course, merely requiring a referee to state respectable reasons for their decision is far from a guarantee against the relevant kind of bias: a referee who is strongly biased against a paper might devote great effort to hunting for such reasons, while withholding information about the paper’s virtues. Some norms of public reason might even introduce new biases into the system or decision-making process. For example, in a given case, the reasons that speak against a proposed policy might be more difficult to articulate than the reasons that speak in its favor. In such circumstances, the set of publicly articulated reasons might be a biased subset of the totality of genuine reasons that bear on the decision one way or the other, and so a decision made on the basis of the publicly available reasons might be a biased decision.
(Similarly: those who would be harmed by a given policy might be less adept when it comes to articulating reasons than those who would be helped by it. Or perhaps the strongest
reasons against a given policy, although genuine, would be too embarrassing to state publicly, and so on.) In addition to norms of blinding, recusal, and public reason, some norms of preemption are designed to alter the incentive structures of agents at a high risk of bias. Consider the norm of “Divide and Choose.” Two agents are confronted with the need to divide some good (e.g. a desirable pie) in a mutually agreeable way. According to the norm of Divide and Choose, one agent should divide the pie into two portions, and the other agent should decide which portion she will take, leaving the unchosen portion for the cutter. This gives the cutter an incentive to divide the pie as fairly as possible, for he knows that if he divides it in a way that makes one portion clearly more desirable than the other, then he will end up with the less desirable portion. By contrast, if it’s known that the same person will divide the pie and then choose, there is an obvious risk that self-interested bias will affect the way the pie is cut.4 A good norm of preemption might prevent many different biases at once. For example, the use of blinding in clinical trials in medicine is justified by its ability to eliminate or mitigate distorting effects due to the expectations of the participants, observer effects on the participants, confirmation bias, and other biases as well.5 Norms of preemption are typically informed by empirical information or putative information about the world, including facts about human psychology, the circumstances in which various biases are likely to be a problem, and so on. Sometimes, the empirical knowledge is obvious and easy to come by. The fact that people are prone to bias in favor of their kin, or when their own interests are at stake, has been known for millennia. Some suggest that the norm of Divide and Choose appears in the Book of Genesis.6 By contrast, much of the more detailed empirical knowledge on which modern techniques of experimental blinding rest is of more recent vintage. The first deliberately blind scientific experiment was conducted by Benjamin Franklin and his colleagues at the end of the 18th century in a successful attempt to debunk then popular claims about “animal magnetism” as a genuine force of nature.7 The first double-blind experiment was not performed until early in the 20th century, while the efficacy and importance of triple-blinding in certain contexts was not recognized until still later (Robertson and Kesselheim 2016). Consider the practice of triple-blind review, as practiced by some academic journals. Here, as in cases of double-blind review, the author of the submitted paper is ignorant of the identity of the referees for the paper, who are in turn ignorant of the identity of the author. In addition, however, there is a third layer of blindness: the editor who receives the paper and assigns it to a referee does so in ignorance of the author’s identity. This last bit of ignorance is intended to eliminate the possibility of the editor’s favoring (or disfavoring) a particular author by assigning their paper to a referee who she suspects will be relatively sympathetic (or unsympathetic). However, this ignorance on the part of the editor comes with a cost: it creates a situation in which it’s entirely possible for an editor to unknowingly invite an author to serve as the referee for their own paper (!). 
Because of this, journals that practice triple-blind review typically note, in the context of extending requests to potential referees, that if the paper that they’ve been asked to referee is their own, then they should decline the request without comment or explanation. Here, a norm of recusal (“don’t referee your own paper”) is
invoked in order to preempt a possibility of bias that arises because of the use of another norm of objectivity (the use of triple-blinding). Finally, the request to decline without comment or explanation is also motivated by concerns about avoiding bias: if a potential referee declines a request because he’s the author of the paper, then explaining his actual reason to the editor would immediately “unblind” the editor to his identity and therefore remove the possibility of triple-blind review.8 When norms of blinding are adhered to with an eye towards improving the reliability of our judgments about what’s true, or about the objective merits of some object of evaluation, doing so is an attempt to know more by way of knowing less. Consider a paradigm of the phenomenon: the practice of using a literal, physical screen for musical auditions, so that evaluators can hear the piece of music that they are evaluating but not see the person who is producing it. Presumably, the original use of screens in this context was motivated by concerns about bias. Notice how, even if an evaluator is certain that he’s biased in a particular way (e.g. in favor of male candidates) and sincerely wants to offer unbiased assessments, the method of blinding might be essential to achieving that goal, and superior to every other available method. For example, consider the respects in which blinding is superior to the possibility of explicitly compensating for a pro-male bias by, say, adding a certain number of points to the scores of female candidates and/or subtracting a certain number of points from male candidates. Here there is an obvious risk of either undershooting or overshooting, and thereby either failing to eliminate the original bias or else introducing a new and opposite bias.9 Even if one knows with certainty that one has a bias in favor of male candidates, and so in a large enough group will select too many males and not enough females, one might not know whether one’s bias is operative in any particular case; and even if one did know that, one might not know the strength of the bias. In general, one might know that one is biased in a certain direction, while lacking any way of calibrating one’s judgment. What an evaluator who sincerely wants to make an unbiased assessment of a given performance wants to know is: what would my evaluation of this performance be if I was ignorant of the fact that the performer is female/black/good-looking/of such-and-such an age (etc.)? However, for an evaluator who does know the relevant facts, this piece of counterfactual knowledge won’t generally be available, inasmuch as it concerns how he would judge in another possible world in which he’s ignorant of certain facts that tend to influence his judgment when he is aware of them. By deliberately and actively safeguarding one’s ignorance in the actual world, one automatically achieves a measure of objectivity and a piece of self-knowledge that would otherwise depend upon a precarious and highly speculative inference. In addition to norms of preemption, a second important category of norms that are concerned with bias are ameliorative norms, or norms of remediation. As in the case of norms of preemption, ameliorative norms of objectivity become relevant when we take ourselves to be in a situation in which bias is or is likely to be a problem. However, rather than trying to prevent the bias from operating, we attempt to counteract it, by intervening so that the bias is offset or its effects are mitigated. 
We actively intervene so as to make the world more like it would have been if the original bias had never operated (at least in some salient respect). A norm that plays a role in governing such interventions, or which is
generated by the reasons that we have to intervene efficiently in such cases, is a norm of remediation.

We’ve already noted one paradigm of this phenomenon in earlier chapters: using one bias to offset or to help compensate for other biases. Consider also the use of a “devil’s advocate” in order to help offset the effects of confirmation bias. By explicitly assigning a capable person the task of making the best case that can be made against a generally favored possibility or option, one hopes to find and adequately take account of relevant considerations that might otherwise have been given short shrift or overlooked entirely. In this way, one hopes to arrive at a better overall set of reasons on which to base the ultimate decision. When the process succeeds, the set of reasons that results is superior to the one that would otherwise have been available, not only in the sense that it’s larger but also, and more importantly, in the sense that it’s less biased, or more representative of all of the reasons that are relevant to the choice, since it’s less skewed by confirmation bias and other biases that pull in the same direction.

Why are ameliorative norms necessary? If the effects of the original bias are bad enough to warrant compensation or remediation, why not just prevent it from operating in the first place, via norms of preemption? In some cases, preventing the original bias from operating might be impossible; or we might not know how to do it; or our doing it might involve unacceptably high costs; or we might simply be too late. As the last possibility suggests, often the bias will already have operated, and the best that we can do now is to respond to it, perhaps inadequately, guided by ameliorative norms. Consider the financial compensation given to Japanese-Americans who suffered from anti-Japanese bias during the Second World War, including internment. Rationales for affirmative action programs that appeal, not to the perceived benefits of diversity, but to the need to compensate for past and present bias against members of certain groups also fall here.10

Similarly, some philosophers suggest that, when doing philosophy, we should give priority to certain kinds of judgments or intuitions as opposed to others because the latter are more likely to be artifacts of bias. For example, it’s sometimes suggested that, in doing moral philosophy, we should privilege or give more weight to our “abstract theoretical intuitions” about morality as opposed to our particular judgments or intuitions about concrete examples, because the latter are often distorted by bias.11 When motivated in this way, a putative norm like “Privilege your abstract theoretical intuitions about morality over your concrete moral intuitions” is an ameliorative norm of remediation in the relevant sense: it’s an attempt to compensate for a bias that’s already been manifested, in order to minimize its downstream effects.

It’s characteristic of many ameliorative norms that they call for a course of action that would itself be subject to the charge of bias, were it not for the fact that the course of action is in response to past instances of bias. Whatever case might be made for a policy of privileging our abstract moral intuitions over our concrete intuitions as things stand, such a policy would clearly not be a good one if we were fully rational and unbiased all along, including when it comes to arriving at our concrete moral intuitions.
In that case, it would generally be agreed that a policy of discounting our concrete moral intuitions in this way
would be a biased policy, and methodologically unsound for that reason. Similarly, bestowing financial benefits on a particular group of American citizens that aren’t available to others would be the manifestation of an objectionable bias in their favor were it not for the fact that those citizens were victims of anti-Japanese bias in the past.12 Are the categories of norms of preemption and norms of remediation mutually exclusive? Or are some norms both? Consider certain norms of representation and inclusion. In putting together a university faculty committee that will determine or recommend policies that will apply to the faculty as a whole, it would be undesirable to have a committee that’s made up exclusively of men, or people of European descent, or professors from the school of engineering, as opposed to a committee that’s broadly representative of the faculty as a whole. Consider then the norm, “Choose a committee that’s inclusive and representative.” One rationale for such a norm is preventative or preemptive. For example, a committee that’s made up entirely of white men might be more likely to overlook or fail to address issues and concerns that arise exclusively or more often for female members of the faculty, or for faculty members of color. However, the reasons that speak in favor of such a norm might also suggest that it’s at the same time an ameliorative norm, or one that’s justified by a history of past bias. Because of this history, the credibility and authority of the committee might depend on its being representative and inclusive. In a faraway possible world in which there had never been any racial or gender bias, a committee consisting exclusively of white men— imagine that this outcome is produced by a random process13—might be less objectionable than it would be in the world that we live in. Thus, the norm might be best understood as both a norm of preemption and a norm of remediation, and the categories are not mutually exclusive.
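Returning to the audition-screen example from earlier in this section: the advantage of blinding over explicit compensation can be put in simulation form. In this sketch (my illustration; the bias size and corrections are invented), the evaluator’s true pro-male tilt is unknown to him, so any fixed correction risks undershooting or overshooting, whereas blinding removes the tilt without requiring calibration:

    import random

    random.seed(2)
    TRUE_BIAS = 3.0  # points silently added to male candidates' scores (invented)

    def score(merit, male, blinded, correction=0.0):
        s = merit + random.gauss(0, 1)   # noisy judgment of true merit
        if not blinded and male:
            s += TRUE_BIAS - correction  # the bias operates only when unblinded
        return s

    def average_male_score(**kwargs):
        return sum(score(70.0, True, **kwargs) for _ in range(50_000)) / 50_000

    print("blinded:               ", round(average_male_score(blinded=True), 1))                  # ~70.0
    print("correction of 1 point: ", round(average_male_score(blinded=False, correction=1.0), 1)) # ~72.0, undershoots
    print("correction of 5 points:", round(average_male_score(blinded=False, correction=5.0), 1)) # ~68.0, overshoots

Only the blinded evaluation recovers the true merit; the explicit corrections leave a residual pro-male bias or install a new anti-male one, which is just the undershooting/overshooting risk described above.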
2. Constitutive Norms of Objectivity

In addition to norms of preemption and norms of remediation, there are also what I will call constitutive norms of objectivity. Here are some plausible examples:

• “Don’t let your financial interests influence your judgment about what you’re morally required to do.”
• “Judge people by the content of their character, not by the color of their skin.”
• “Don’t favor a graduate student over others just because they agree with the central claims of your research program and are committed to advancing that program.”
• “In making hiring decisions, don’t give an advantage to people because they belong to your ethnic group.”
• “Arrive at your decisions in unbiased ways.”
• “Don’t be biased.”

Constitutive norms of objectivity, like norms of other kinds, differ widely in their level of
generality and applicability, ranging from the most general anti-bias norm: “Don’t be biased” to much more specific norms. Regardless of its level of generality, the following is the distinguishing mark of a constitutive norm of objectivity: unlike all other norms, a constitutive norm of objectivity is such that any departure from it ipso facto amounts to a case of bias. In this respect, constitutive norms are the unique exceptions to one of the central ideas of the norm-theoretic account, namely that some ways of departing from genuine norms don’t amount to biases. In particular, even if one regularly deviates from a genuine norm, this doesn’t amount to a bias if the deviations are random as opposed to systematic. However, in the case of a constitutive norm of objectivity, there is no further, relevant question to be asked about the randomness or non-randomness of a given deviation: any deviation at all means that a charge of bias is in order. When it comes to constitutive norms, non-compliance amounts to bias. The point is utterly trivial in the case of the most general and abstract constitutive norms of objectivity, such as “Don’t be biased.” Clearly, in any situation in which this norm is applicable, non-compliance amounts to bias. But the point also holds for more specific constitutive norms. For example, someone who departs from the norm “Don’t let your views about what morality requires of you be influenced by your personal financial interests” is ipso facto guilty of self-interested bias. Similarly, if it’s a genuine moral norm that we should judge people by their character as opposed to the color of their skin, then a person who judges another in terms of the latter rather than the former is ipso facto guilty of racial bias; nothing more is required. The same holds for any other constitutive norm, regardless of its level of generality or abstractness. As I use the term, a “constitutive norm of objectivity” is a norm that has the following feature: any failure of compliance amounts to a case of bias. It’s obvious that this feature isn’t shared by ordinary norms that aren’t specifically concerned with bias or objectivity, e.g., the norm of maximizing expected value, or the norm of proportioning one’s beliefs to the evidence. It’s less obvious, but also true, that the same holds for norms of objectivity that aren’t constitutive norms. Consider, for example, norms of recusal. Suppose that a judge fails to recuse himself in a case in which he should have. Does it follow that the judge himself (and/or some aspect of his subsequent participation in the case) is biased? Suppose that the judge was genuinely and non-culpably ignorant of the fact that he should have recused himself.14 This ignorance need not be an artifact of any current or past bias on the judge’s part. Suppose further that, after this initial failure, the judge proceeds to conduct himself with impeccable objectivity from then on. In that case, there is no bias to be found in the scenario, notwithstanding the judge’s failure to comply with a genuine norm of objectivity. Of course, in any given case a failure to comply with a norm of recusal might very well be the result of bias. (Imagine a Supreme Court justice who should recuse himself but decides not to because he realizes that he might very well be the deciding vote on a case that will set an important precedent for an issue in which he’s deeply invested.) 
And a failure to comply with a norm of recusal might be responsible for a biased decision or judgment, as when an agent who should have recused himself, but didn’t, succumbs to the relevant temptation in exactly the circumstances that the norm is intended to prevent. A failure to comply with a norm of preemption or a norm of remediation might be caused by bias, or cause bias, or both.
However, unlike in cases involving constitutive norms, whether we have a genuine case of bias always depends on how the details of the case are filled in. When it comes to both norms of preemption and norms of remediation, “honest” failures of compliance are always possible, and that’s why non-compliance doesn’t entail bias. Not so when we’re dealing with constitutive norms of objectivity.15

Among norms of objectivity, constitutive norms have a certain priority over both norms of preemption and remediation. Norms of preemption and remediation are typically concerned with techniques, techniques that get their point and purpose from the reasons that we have to adhere to constitutive norms of objectivity. The reasons that we have to follow norms of preemption and remediation are thus derivative from the reasons that we have to follow constitutive norms. For example, the fact that scientists have reasons to conduct double-blind experiments derives from the reasons that they have to engage in objective, unbiased inquiry.

Consider the most general constitutive norm of objectivity, the anti-bias norm: “Don’t be biased.” Understood in terms of the norm-theoretic account of bias, that norm in effect proscribes departing from genuine norms in systematic ways. So understood, the anti-bias norm is a kind of meta-norm that rules out certain ways of failing to comply with other norms. Successfully complying with those more fundamental norms is thus sufficient for successfully complying with the meta-norm. Given that successful compliance with the more fundamental norms doesn’t require endorsing the anti-bias norm, it follows that one can be unbiased even if one doesn’t endorse the anti-bias norm. Indeed, one can be unbiased even if one sincerely disavows any such norm. The postmodernist who, for reasons of theory or practice, condemns any putative norm calling for objectivity or the absence of bias as naïve or pernicious nonsense, or a tool of the patriarchy, might nevertheless still be unbiased given their actual practice.16 The converse also holds. For example, someone who systematically departs from relevant moral norms by treating blacks worse than whites, or women worse than men, counts as biased, and ipso facto violates the anti-bias meta-norm, even if they sincerely endorse that meta-norm, as in some cases of implicit bias. Setting aside certain arguable and non-central cases, it’s a general feature of norms that someone who sincerely endorses and earnestly tries to comply with a norm might nevertheless fail to comply with it. It’s thus unsurprising that sincerely endorsing and earnestly trying to comply with the anti-bias norm is insufficient for complying with it. More interesting is the frequency with which agents fail to comply with the anti-bias norm because they sincerely endorse and earnestly try to comply with it, as in some cases of overcompensation. In such cases, an agent’s attempts to comply with the higher-order norm might cause her to violate some first-order norm in systematic ways, thereby violating the higher-order norm.

Of the various ways of being in compliance with the anti-bias norm, only some of them involve responding to that norm, in any sense. As already noted, an agent might be in compliance with the anti-bias norm in virtue of complying with other, more fundamental norms, even if they don’t accept it; indeed, such an agent need not even have the concept of bias.
The same is true of the agent who is in compliance with the anti-bias norm because they depart from the more fundamental norms in sufficiently random ways. In some cases,
however, one’s only means of being in compliance with the anti-bias norm might involve responding to it. Consider, for example, an agent who knows that if she’s in circumstances C she’ll be disposed to systematically depart from norm N. She thus intentionally avoids circumstances C, and so does not violate N, and thereby remains in compliance with the anti-bias norm. (As when one recuses oneself from a series of decisions that will either favor or disfavor one’s own children at the expense or benefit of other children.) Unlike in the other cases, here the fact that the agent remains in compliance with the anti-bias norm depends on her following or responding to it.
3. Following the Argument Wherever It Leads

Among the norms that are frequently invoked when concerns about bias are salient, one has often been taken to have special relevance for philosophy: the venerable Socratic injunction to “follow the argument wherever it leads.” Consider, for example, Bertrand Russell’s (in)famous assessment of St. Thomas Aquinas:

There is little of the true philosophic spirit in Aquinas. He does not, like the Platonic Socrates, set out to follow wherever the argument may lead. He is not engaged in an inquiry, the result of which it is impossible to know in advance. Before he begins to philosophize, he already knows the truth; it is declared in the Catholic faith. If he can find apparently rational arguments for some parts of the faith, so much the better: if he cannot, he need only fall back on revelation. The finding of arguments for a conclusion given in advance is not philosophy, but special pleading. I cannot, therefore, feel that he deserves to be put on a level with the best philosophers either of Greece or of modern times (1945:463).
Whether this is a fair assessment of Aquinas is controversial.17 Setting that issue to one side, this section explores the intellectual ideal of which Aquinas allegedly falls short. What is it to follow the argument wherever it leads? However exactly it should be understood, it’s clear that the relevant ideal has exerted a powerful attraction in the history of Western philosophy. As one might expect given Russell’s invocation of “the Platonic Socrates,” the theme is a recurrent one in the Platonic corpus. Thus, in the Euthyphro, Socrates declares that “the lover of inquiry must follow his beloved wherever it may lead him” (14c). In the Republic, he instructs his interlocutors that “…wherever the argument, like a wind, tends, there we must go” (394d; Cf. Phaedo 107b). And in the Crito, he offers the following by way of self-description: “I am the kind of man who listens only to the argument that on reflection seems best to me” (46b). The norm is also invoked by many later thinkers. For example, John Stuart Mill, in the midst of his defense of the freedom of conscience in On Liberty, declares that “No one can be a great thinker who does not recognize that as a thinker it is his first duty to follow his intellect to whatever conclusions it may lead” (1859/1978:32). To this day, great philosophers who embraced strange conclusions are praised in eulogies for their willingness to follow the argument wherever it leads,18 and the ideal is mentioned in faculty handbooks in the context of discussions of academic freedom.19 For purposes of trying to understand this ideal, I’ll adopt the working hypothesis that
follow the argument wherever it leads is a genuine norm of inquiry.20 In typical cases, the speech act of telling someone, “Follow the argument wherever it leads!” seems to call for or demand a kind of objectivity in inquiry. Indeed, it’s tempting to classify the relevant norm not only as a norm of objectivity but also as a constitutive norm of objectivity. However, unlike the constitutive norms of objectivity considered in the last section, it seems as though even a perfectly unbiased thinker might fall short with respect to this norm. At least, that possibility follows given extremely minimal assumptions about what the norm requires. In particular, given the minimal assumption that successfully following the argument where it leads involves reaching a certain destination, it seems as though it must be possible to fail to reach that destination as a result of random error, as opposed to systematic error or bias. On the face of it, it seems that someone might fail to follow the argument where it leads in virtue of making an honest intellectual mistake, just as someone might simply botch an arithmetical calculation and arrive at the wrong number for an answer. It also seems that a person could fail to follow the argument where it leads because he’s too tired to follow it, or too drunk, or because he gets distracted by one of his kids. However, it’s a striking fact that the norm is never invoked in response to concerns about random error (or even error more generally) as opposed to concerns about systematic error, of the kind that’s characteristic of biased inquiry. Typically, the norm is invoked when the concern is that the relevant audience might be unwilling to draw the conclusions which emerge in the course of a properly conducted inquiry: either because they’re in the grip of some religious or ideological dogma, or because they’re overly reluctant to depart from common sense or from views that are generally accepted among their peers or popular in their social circle, or for some other such reason. Thus, even if it’s possible for an unbiased thinker to fail to follow the argument where it leads, in practice the speech act of telling someone: “Follow the argument wherever it leads!” serves as a kind of admonition against biased inquiry.

In order to unpack this norm, let’s begin by noting some elementary points about the more general notion of following. Perhaps the paradigm of following involves literally following someone to a certain location in physical space, as when you know how to get to our shared destination but I don’t, so that we agree in advance that I’ll follow you there. Notice that following is itself a modal notion. If I’m genuinely following you, then it’s not merely that I zig after you zig and zag after you zag, in such a way that we eventually end up in the same places. Rather, I must zig because you do. Moreover, even if your zigging causes me to zig, that doesn’t mean that I’m following you, for not every kind of causal influence will result in a case of following in the relevant sense. (If your zigging serves as a signal to your henchmen to knock me unconscious and then carry my motionless body over the same ground that you’ve just traversed, that doesn’t amount to my following you, in the relevant sense.) The modal aspect of following is reflected in the fact that genuine following will support a certain range of counterfactual conditionals. If I’m genuinely following you, then it’s not simply that I zig after (and because of the fact that) you do.
Rather, it’s also true that: if you had zagged rather than zigged, then I would have done so as well. Moreover, notice that, even if I genuinely follow you to a certain location (and would also have followed you to a certain range of other locations), that obviously doesn’t mean that I would have followed you wherever you went. For example, even if I genuinely follow you to our common destination,
it might be that if you had veered off down a path that struck me as sufficiently dangerous, I would have stopped short and not followed you down it. That has no tendency to show that I didn’t genuinely follow you, given that no such thing actually happened. Although elementary, these observations about the more general notion of following will be worth bearing in mind in thinking about the norm of following the argument wherever it leads. What would be true of a person who fully lived up to the ideal? Imagine that an inquiry leads to a certain conclusion, and an inquirer believes that conclusion. Does that mean that they have followed the argument to where it leads? No, for they might not believe that conclusion because that’s where the argument has led. Is it enough then if they believe the conclusion on the basis of the argument? No, for consider the following case:

BIASED BELIEVER: A person holds a view on the basis of a rationally compelling argument. However, they are dogmatically committed to the view in question, in the following respect: if they were unaware of this or any other rationally compelling argument in its favor, they would still hold it, for in that case, certain currently dormant psychological mechanisms would be activated that would ensure that they hold the view despite the lack of supporting considerations.
BIASED BELIEVER is a case of asymmetric overdetermination or causal preemption, in which the rationally compelling argument on which the believer bases their belief preempts the mechanisms that ensure that they would hold the same belief even in the absence of that argument. Given the plausible assumption that holding a view on the basis of a rationally compelling argument for it is sufficient for one’s belief to be rational, the biased believer’s belief is rational, as things stand.21 (Although it would not be, in certain counterfactual scenarios.) However, even if that assumption is granted, it obviously doesn’t follow that the biased believer has lived up to the ideal of following the argument wherever it leads. My suggestion, then, is that even if the kind of dogmatic commitment that’s characteristic of the biased believer is compatible with having a belief that’s rational, it isn’t compatible with genuinely following the argument wherever it leads. In order to fully satisfy the ideal, one must not hold any views that have been discredited in the course of the inquiry; in addition, it must be true of those views that one does hold, that one is disposed to abandon them in the event that the inquiry turns against them. Someone who lacks dogmatic commitments to any of the views under discussion more perfectly exemplifies the ideal than someone who has such commitments, even if, given the state of the inquiry as things stand, the views of both are reasonable. In this respect, following the argument wherever it leads is a more demanding standard than believing reasonably or having beliefs that are proportioned to the evidence: the former involves a deeper modal aspect than the latter.

In addition to dogmatic commitments, it’s also characteristic of the biased thinker to be dogmatically averse to believing certain things. Perhaps there are some propositions that I’m dogmatically averse to believing, in the sense that I wouldn’t believe them even if my evidence strongly suggested that they are true: my not believing them is quite robust. Someone who perfectly fulfills the ideal of following the argument wherever it leads will lack not only dogmatic commitments but also dogmatic aversions. I mention this explicitly because some philosophers have held that there is an important normative asymmetry between (1) believing something when the available reasons strongly
suggest that it’s false, and (2) refraining from believing something when the available reasons strongly suggest that it’s true. For example, both Bernard Williams (1973:151) and Robert Nozick (1993:87–8) claimed that, while there is something objectionable about holding a pleasant belief in the face of strong evidence, it’s at any rate much less objectionable to simply refrain from holding an unpleasant belief that is well-supported by the evidence.22 Indeed, according to the norms of belief revision proposed by Nozick, while one shouldn’t hold a belief that’s unlikely to be true given one’s evidence (even if doing so would have high expected utility), it’s perfectly permissible to refrain from holding a belief that is well-supported by one’s evidence if the expected utility of refraining exceeds the expected utility of believing (85–6). Whatever might be said on behalf of such a system of norms, it seems clear that someone who revised their beliefs in this way, discriminating among equally well-supported views on the basis of considerations of utility, wouldn’t be following the argument wherever it leads. (In terms of the Socratic metaphor: such a person’s beliefs would not be carried along by the argument, in the way that the leaves are carried along by the wind.) Thus, in order to fulfill the ideal, one must not only refrain from believing discredited views, but one must also hold any view that’s adequately supported at a given stage in the inquiry. Moreover, just as there is a modal aspect to the former, so too there is a modal aspect to the latter. That is, in order to fully satisfy the ideal, it’s not enough that one holds all of those views that are well-supported by the argument at any given stage, on the basis of the considerations that support them; in addition, one must be prepared to take up any view that one doesn’t currently hold, should the argument turn in its favor. If one is biased against holding some view that’s under discussion, one doesn’t get off the hook simply because the view isn’t well-supported as things actually stand.23

On the present account, then, following the argument wherever it leads involves a kind of modalized reasonableness: one tracks the state of the argument through time, both in what one believes and in what one refrains from believing. Suppose that p and q are among the propositions that are at issue in the inquiry in which one is engaged. If one believes p, then it will typically be true both that:

(1) One’s belief that p is reasonable, and
(2) If it were not reasonable for one to believe that p, then one wouldn’t believe that p.

Similarly, where q is a proposition that one does not believe, it will typically be true that:

(3) It’s reasonable for one to refrain from believing q, and
(4) If it were not reasonable for one to refrain from believing q, then one would believe q.

One might attempt to incorporate these conditions into a reductive analysis of following the argument wherever it leads.24 However, I think that the prospects for such an analysis are bleak, as it would immediately confront the kinds of problems that often bedevil conditional
analyses. For example, suppose that in the actual world you’re perfectly unbiased with respect to the question of whether p is true: you believe that p because it’s supported by compelling evidence, but your commitment to it isn’t at all dogmatic, nor are you at all dogmatically averse to believing not-p, and so on. Still, it might be that the closest possible worlds in which your evidence fails to support p are worlds in which the process responsible for this difference also affects your lack of bias and rationality in ways that lead you to unreasonably believe p. In that case, the relevant counterfactual (2) would be false. But this doesn’t mean that you in any way fall short of the ideal in the actual world, given that here you both reasonably believe p and are genuinely disposed to abandon that belief in the event that the evidence becomes unfavorable. For parallel reasons, (4) isn’t a promising candidate to play the role of a necessary condition in an analysis. However, although neither (2) nor (4) is a necessary condition for following the argument wherever it leads, in typical cases an inquirer who adheres to the ideal will satisfy them with respect to the views under discussion. A more promising approach eschews counterfactuals and appeals directly to dispositions to change one’s mind, replacing conditions (2) and (4) above, as follows:

(2*) One is disposed to abandon one’s belief that p in response to its becoming unreasonable for one to believe that p.
(4*) One is disposed to acquire the belief that q in response to its becoming unreasonable for one to refrain from believing q.
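In symbols, and merely as an informal gloss (the notation is my shorthand here, not anything the account officially requires): write B(p) for “one believes p”, R(p) for “it is reasonable for one to believe p”, R⁻(q) for “it is reasonable for one to refrain from believing q”, and use the standard counterfactual conditional. Then the counterfactual conditions come to:

\[
(2)\quad \neg R(p) \ \Box\!\!\rightarrow\ \neg B(p)
\qquad\qquad
(4)\quad \neg R^{-}(q) \ \Box\!\!\rightarrow\ B(q)
\]

while (2*) and (4*) trade the counterfactuals for dispositions: a disposition to move from B(p) to ¬B(p) upon R(p)’s failing, and from ¬B(q) to B(q) upon R⁻(q)’s failing. The worry about conditional analyses just rehearsed is then an instance of the familiar point that a disposition can be present even when the corresponding counterfactual is false.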
A final condition on following the argument wherever it leads is suggested by Michael Veber (2021). Veber notes that, even if a person’s belief about p is reasonable as things stand, and they are also disposed to change their mind about p in response to relevant changes in their evidence, this is consistent with their being dogmatically averse to considering further evidence that bears on p. (In Veber’s sense, a person is dogmatically averse to considering further evidence about some issue if they are averse for a particular kind of reason: because doing so might result in a change of mind.) However, a person who is dogmatically averse to considering further evidence isn’t someone who fulfills the ideal of following the argument wherever it leads. Incorporating Veber’s insight, we thus arrive at the following modest proposal:

√FOLLOWING THE ARGUMENT WHEREVER IT LEADS. One who is engaged in an inquiry is following the argument wherever it leads if and only if:
(A) For any proposition at issue in the inquiry which one believes:
(1) One’s belief is reasonable, and
(2*) One is disposed to abandon the belief in response to its becoming unreasonable to hold it, and
(B) For any proposition at issue in the inquiry which one does not believe:
(3) One’s refraining from belief is reasonable, and
(4*) One is disposed to acquire the belief in response to its becoming unreasonable to
continue refraining, and
(C) One is not dogmatically averse to considering evidence that bears on any proposition that’s at issue in the inquiry.

Before concluding, it’s worth comparing the modalized reasonableness account to a plausible rival view. According to that rival view, following the argument wherever it leads should simply be identified with an absence of motivated irrationality in inquiry.25 At the outset of the discussion, I noted that the importance of adhering to the norm is typically emphasized when the concern is that the target audience might be unwilling to draw the conclusions that emerge in the course of inquiry. Why not simply take this fact at face value? One follows the argument wherever it leads just in case where one ends up is not influenced by one’s desires or other conative states concerning the questions at issue in the inquiry. Although I believe that it would be a mistake to simply identify following the argument where it leads with the absence of motivated irrationality, there are clearly important connections between the two. First, even if (as I’ll argue) an absence of motivated irrationality is insufficient for following the argument wherever it leads, it’s extremely plausible to think that it’s necessary. (Certainly, a proponent of the modalized reasonableness account will think that when motivated irrationality influences the conclusions that one reaches, one has failed to follow the argument where it leads.) Moreover, it’s also extremely plausible to hold that an absence of motivated irrationality is both necessary and sufficient for something in the near neighborhood of the relevant ideal, namely being willing to follow the argument wherever it leads. Just as philosophers are praised for following the argument wherever it leads and criticized for failing to do so, so too they are sometimes praised for their willingness to follow the argument wherever it leads and criticized for their unwillingness to do so. I suggest that a willingness to follow the argument wherever it leads is simply a lack of motivated irrationality. In general, however, being willing to Φ is distinct from actually Φ-ing. Thus, inasmuch as it’s plausible to identify a willingness to follow the argument wherever it leads with an absence of motivated irrationality, it’s plausible that following the argument wherever it leads is not simply the absence of motivated irrationality or having arrived at views that are uninfluenced by one’s desires or other conative states.

Moreover, there are strong independent reasons for not identifying following the argument wherever it leads with an absence of motivated irrationality. First, notice that, on the assumption that an absence of dogmatism is necessary for following the argument wherever it leads, the absence of motivated irrationality is insufficient, for dogmatism doesn’t require motivated irrationality. Although paradigmatic cases of dogmatism might very well involve such irrationality, it’s perfectly possible to be dogmatic about some issue (in the sense that one’s view about that issue is resilient in unfavorable epistemic circumstances) where this isn’t due to the operation of motivated irrationality. For example, in principle, one might simply overestimate one’s evidence and become certain that some claim is true even when it’s uncertain, where this mistake isn’t due to the operation of a desire or other conative state; thereafter, one might simply reason from the belief that one treats as certain to the conclusion
that particular pieces of evidence that suggest that it’s false must be misleading. That’s a coherent story in which one becomes maximally dogmatic about one’s current view, but in which there is no motivated irrationality. (In Bayesian terms: one mistakenly invests maximal credence in some contingent proposition, for some reason other than motivated irrationality, and after that conducts one’s epistemic life in the manner of a perfect Bayesian reasoner, dismissing any alleged counterevidence; the short derivation at the end of this section makes the point precise.) Finally, following the argument wherever it leads shouldn’t be identified with an absence of motivated irrationality for the sorts of reasons canvassed at the beginning of this section. A dispassionate inquirer might make an honest mistake about the normative import of the arguments under consideration and arrive at a view that’s not rationally tenable; perhaps if this were pointed out to her, she would see the error and immediately change her mind as a result. Still, even though motivated irrationality plays no role in her believing as she does, we shouldn’t credit her with having successfully followed the argument where it leads as things stand, in the way that someone who has responded to the arguments correctly has. Again, this possibility seems to result more or less immediately from the nature of following: typically, genuinely following (as opposed to trying to follow, or being willing to follow) involves being on track to reach some target destination (at least, as indicated by the person or thing that one is following). But one can get off track for any number of reasons, and the possibility of honest mistakes or random errors always exists. Similarly, a person might fail to follow the argument where it leads for any number of reasons, although it’s unsurprising that cases in which the failure is due to motivated irrationality or other forms of bias are especially salient, and especially likely to evoke censure, compared to cases involving random errors reflecting simple fallibility.

When a thinker who follows the argument wherever it leads inquires as to whether p is true, she is unbiased among the possible outcomes of the inquiry: if the inquiry shows that p is true, she will believe p, but if the inquiry shows that p is false, then she will believe not-p instead. In both her beliefs and her dispositions to believe, she respects the symmetry between the possible outcomes of the inquiry. The deep connections between bias and symmetry are among the topics explored in the next chapter.
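To make the Bayesian remark above precise (this is a standard textbook observation, added only as a gloss on that parenthetical): by Bayes’ theorem and the law of total probability,

\[
\Pr(H \mid E) \;=\; \frac{\Pr(E \mid H)\,\Pr(H)}{\Pr(E)},
\qquad
\Pr(E) \;=\; \Pr(E \mid H)\,\Pr(H) + \Pr(E \mid \neg H)\,\Pr(\neg H).
\]

If Pr(H) = 1, then Pr(¬H) = 0, so Pr(E) = Pr(E | H) and hence Pr(H | E) = 1 for any evidence E with Pr(E) > 0. Conditionalization can never dislodge a credence of 1: a reasoner who mistakenly becomes certain of some contingent proposition and thereafter updates impeccably will retain that certainty come what may, exhibiting maximal dogmatism without any motivated irrationality.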
1 I put the examples below in quotation marks in order to signal that I don’t mean to endorse them as genuine norms exactly as they are formulated here. At a minimum, further tinkering and refinement would be needed in order to get the details right in many cases. In some cases, there might be more substantive challenges to the principle offered here, challenges to the effect that there is no genuine norm which the principle even approximates. For example, some radical historians reject the traditional idea that objectivity is a genuine norm or ideal for historical inquiry. (For an excellent account of the history of this debate, see Novick 1988.) Similarly, some would reject the idea that there is some genuine norm involving “color-blindness.” In any case, I trust that the formulations offered here will at least serve to get the topic that I want to discuss on the table. 2 There are two possibilities here. First, it might be that the grandparent violates a genuine norm of testimony, although violations of that norm count for little in the context; perhaps it’s even overridden by other norms. (Especially if there is a general practice in place of exaggerating the exploits of one’s grandchildren, perhaps it’s better if the grandparent conforms
to this general practice, as opposed to offering a scrupulously even-handed and unbiased recounting of his grandchild’s performance, with discussion of all of the weak moments as well as the strong ones, and so on.) Alternatively, it might be that the relevant standard, which functions as a genuine norm in the scholarly context of the professional historian, does not do so in the context of the senior center, although we nevertheless count the grandparent and his testimony as biased because of the way in which they systematically depart from it. On our willingness to attribute bias to agents who systematically depart from salient standards that are not genuine norms, see Chapter 7, §5, “Pejorative vs. Non-Pejorative Attributions of Bias.” 3 Thanks to Gideon Rosen for raising this possibility. 4 Interventions designed to prevent or minimize the operation of bias by way of altering the environment in which a decision or choice is made are sometimes known as “nudges.” See especially Thaler and Sunstein (2021), which updates their classic (2009) account. The distinction drawn here between norms of preemption and norms of remediation corresponds roughly to the distinction between “ex ante” and “ex post” debiasing strategies; for an overview, see Kahneman, Sibony, and Sunstein (2021):Ch.19. 5 See Robertson and Kesselheim (2016), especially Part III. 6 When Abraham and Lot reach the land of Canaan, Abraham suggests that they divide it among them; he then divides, and lets Lot choose. For the claim that this passage should be understood as an application of the relevant norm, see Brams and Taylor (1999:53). 7 Among the colleagues was the father of modern chemistry, Antoine Lavoisier, who a few years later was executed by guillotine during the French Revolution, and the physician Joseph-Ignace Guillotin, after whom the guillotine was named. For an account of this fascinating historical episode, see Dingfelder (2010) and the further references provided there. 8 Unsurprisingly, which practices it makes sense to adopt can depend on subtle empirical questions about how various potential biases operate and interact. Consider norms of recusal. I remain somewhat surprised by how often I receive requests from journal editors to referee papers that critique my own published work. (Based on my own experience, I assume that this is a common practice, at least in philosophy.) On the face of it, the practice of inviting potential referees to evaluate whether work that criticizes their own is worthy of publication seems like about the most obvious conflict of interest imaginable and something that blatantly violates compelling norms of preemption. (Notice that there are actually two kinds of norms worth considering here, one that applies to editors or those with the power to choose referees—“Don’t invite X to referee a paper that criticizes X’s work”—and also a potential norm of recusal that applies to the potential referee qua criticized author—“Decline to referee papers that criticize your own work.”) Isn’t it just obvious that people shouldn’t be asked to referee papers criticizing their own work, because they will tend to be biased against recommending such papers for publication? However, regardless of what’s ultimately said about the defensibility of the relevant practice, things are at least not so simple.
For in addition to the obvious thought that referees will have self-interested reasons to recommend against publication in such circumstances, it’s also plausible that the referee will have self-interested reasons that favor acceptance (and therefore, for there to be a countervailing bias that tends to offset or compensate for the original bias). After all, it’s also very much in an academic’s professional interests to have discussions of their work appear in print: for a scholar, as for many others, critical attention is better than no attention at all. Even if the saying, “There’s no such thing as bad publicity” is hyperbolic, there is an important truth in the neighborhood. (Google Scholar and other citation metrics don’t distinguish between “papers that cite you because they think that you’re getting it right” and “papers that cite you because they think that you’re getting it wrong.”) This seems especially true for a subject like philosophy, where even—or perhaps especially—the most impressive and celebrated of philosophers, both historical and contemporary, are also much criticized. Perhaps all else being equal, an author’s first choice would be to have a paper in print that supports their work as opposed to a more critical paper. Given human nature, we wouldn’t be surprised if the typical scholar showed at least some significant bias that favors the supportive paper over the critical one. However, that is not the choice situation that confronts an author who agrees to referee a paper that’s critical of their own work. Rather, the question concerns whether this paper is worthy of appearing in print or not. And here it might be thought that referees will not generally be biased against such a paper, given the totality of their incentives. Notice that if something in the neighborhood of this general line of thought is playing some role in an editor’s thinking, the same rationale wouldn’t go any way towards vindicating a practice of inviting authors to referee papers that offer positive or favorable discussions of their published work. For in such cases, all of the referee’s self-interested incentives and potential biases point in the same direction: in favor of acceptance. Perhaps the most compelling reason to invite an author to referee a paper critical of their work is the possibility that doing so will serve as a check on misinterpretation or misrepresentation: generally speaking, authors will be better positioned than third parties to know what their claims and arguments actually are. Given that this consideration doesn’t
itself do anything to diminish concerns about possible bias, it perhaps suggests a policy of inviting the criticized author to serve as an additional referee, alongside some more obviously disinterested other or others. Finally, a confession: despite serious doubts about whether the practice is defensible, I have over the years refereed a significant number of papers that critically discuss my own work. Some of these I’ve recommended for acceptance, some for rejection (and some for “revise and resubmit”). Is the typical author of such a paper any better or worse off, with respect to the probability of its ultimate acceptance by the relevant journal, in virtue of having me serve as (one of) their referee(s), than if the paper had been assigned to some more obviously neutral and disinterested third party? I would not hazard a guess. And even if I knew with certainty the answer in my own case, I wouldn’t generalize that answer to anyone else. 9 In the event of overshooting, this would be a case in which bias results from overcompensation. See the discussion in Chapter 5, §5. 10 For a good summary of different sorts of defenses of affirmative action, some of which are ameliorative and some of which are not, see Anderson (2010):Ch. 7. 11 For proposals of this sort, see especially Singer (1974) and Huemer (2008). I discuss this suggestion in Chapter 9, §1. 12 The fact that an ameliorative norm calls for a response that would itself be subject to a charge of bias in the absence of some compelling rationale is reflected in the kinds of criticisms offered by those who reject the norm or principle. For example, philosophers who reject the proposal that we set aside our moral intuitions about concrete cases in favor of our intuitions about abstract general principles will think that the proposed methodology is a biased one; critics of affirmative action will often charge that affirmative action programs involve “reverse discrimination,” and so on. 13 Of course, given the onerousness (or worse) of serving on university committees, we shouldn’t assume that those who are selected by the process are “the winners” as opposed to “the losers.” As is sometimes recognized, precisely because many university administrators and department heads are now concerned to follow norms such as “Make sure committees are diverse and inclusive,” women and minority faculty members are sometimes asked to do more than their fair share of “service” work, work that often competes with (among other things) research and teaching. Given that the latter are typically more highly valued in the context of promotion decisions, salary increases, and so on, we see here how following a norm that’s designed to minimize bias can also have the creation of new biases among its unintended consequences. 14 It’s significant here that questions about the circumstances in which judges should recuse themselves are often quite controversial among well-informed commentators, not only in practice but even at the theoretical level. On this point, see, e.g., Saphire (1997). 15 Of course, just as some norms of recusal and how they apply in concrete circumstances are unobvious and controversial, others are utterly obvious and not seriously disputed by disinterested parties. When a norm is sufficiently obvious, but a potential decisionmaker to whom it applies nevertheless refuses to recuse himself, we might reasonably treat that very fact as compelling evidence of bias. 
(Imagine that a member of a search committee refuses to recuse himself even after an immediate family member applies for the position.) Here the inference is warranted because the only plausible explanation of why the person declines to step aside involves attributing bias to him. Even so, the mere refusal to recuse oneself does not itself constitute bias. (Imagine that the person ultimately participates and decides against their own family member, where their doing so doesn’t involve a bias against that family member but is simply in response to a perfectly objective assessment of the relevant facts.) 16 Compare: a philosophical skeptic might disavow any claim to knowledge and embrace theoretical principles that entail that we know nothing. Nevertheless, he might know quite a bit—more or less as much as we do—because enough of his ordinary beliefs satisfy the conditions for knowing. 17 For a representative exchange on the question, see Nelson (2001) and Oppy (2001). 18 See, e.g., Tim Crane, “David Lewis”, in The Independent, October 23, 2001. 19 “A university is characterized by the spirit of free inquiry, its ideal being that of Socrates—to follow the argument where it leads”. Rice University Faculty Handbook https://fachandbook.rice.edu/faculty-rights-privileges-andresponsibilities. 20 Of course, to assume that it’s a genuine norm is not to assume that it always has overriding importance, any more than to assume that truth-telling is a genuine moral norm commits one to thinking that there are no circumstances in which it’s permissible to lie. (The content of the working hypothesis adopted here is not equivalent to the claim that we should follow the argument wherever it leads, though the heavens fall.) 21 Compare the discussion of “biased knowing” in Chapter 8, §1 below.
22 More recently, a number of prominent philosophers have embraced “moderate pragmatism,” according to which there is an asymmetry between belief and refraining from belief when it comes to the capacity of purely practical considerations to provide justification. See, e.g., Fantl and McGrath (2002:81–3) and Schroeder (2012:266–8). The term “moderate pragmatism” is due to Worsnip (2021) who criticizes the view on the grounds that it is not theoretically well-motivated and potentially unstable. 23 Compare Blanshard: What is creditable…is not the mere belief in this or that, but the having arrived at it by a process which, had the evidence been different, would have carried one with equal readiness to a contrary belief (1974:413). 24 Such an account would parallel Nozick’s (1981) “tracking” account of knowledge, according to which (roughly) one knows that p just in case one’s true belief that p is counterfactually sensitive to whether p obtains. On the envisaged account, one is following the argument wherever it leads with respect to p just in case one’s believing p is counterfactually sensitive, not to the truth of p, but rather to the reasonableness of believing p given the state of the argument at a particular point in time. 25 This suggestion was made to me, years ago, by both Frank Jackson and Timothy Schroeder.
7 Symmetry and Bias Attributions

1. Two Challenges

According to the norm-theoretic account of bias, paradigmatic cases of bias involve systematic departures from norms. I’ve suggested, and attempted to show, that this way of thinking about bias provides a fruitful and illuminating framework for theorizing about the phenomenon. Still, even if there is something to that suggestion, the question remains: just how seriously should we take the proposed framework? One way of taking it very seriously would be to endorse the norm-theoretic account as a reductive analysis of the notion of bias. Of course, the track record of reductive analyses in philosophy is not encouraging—to put it mildly. Even apart from such general inductive considerations, there are more specific reasons for pessimism in the present case. As we will see, there are some cases involving systematic departures from genuine norms that we wouldn’t ordinarily classify as cases of bias. Conversely, there are also contexts in which we’re happy to describe an agent as biased even though we don’t believe that there is any genuine norm that they are disposed to violate, systematically or otherwise. In short, the fact that a case involves a systematic departure from a norm seems to be neither necessary nor sufficient for its being a case of bias. That is, there seem to be relatively clear counterexamples to any attempt to straightforwardly identify biases with systematic departures from norms (or with dispositions to systematically depart from norms), in both directions. More threateningly, even if one has little appetite for reductive analyses (or for the project of attempting to provide non-circular necessary and sufficient conditions for cases of bias, etc.), one might worry that these anomalous cases suggest that the entire approach embodied by the norm-theoretic account is simply wrongheaded. After all, the history of science is replete with theories that initially seemed to promise genuine insight into the phenomena with which they were concerned, and to accommodate much of the relevant data, but that were then revealed to be on the wrong track entirely. Perhaps the existence of certain anomalies suggests that the same is true of the norm-theoretic approach. What we should make of these cases—what lessons they teach us about bias, and how we should take account of them in our theorizing—are among the questions explored in this chapter.

In addition to questions about its extensional adequacy, there is a more subtle challenge that confronts the norm-theoretic account, a challenge that arises from the phenomenon of non-pejorative uses of “bias.” As I’ve emphasized, much of our discourse employing “bias”
and its cognates, in both everyday life and in the sciences, seems to presuppose, imply, or implicate that being biased is at least in some respect a negative or bad thing. (Even when the badness in question is not moral badness.) On the other hand, as noted in Chapter 1, it’s also true that the word “bias” and its cognates are often employed in ways that do not presuppose, imply, or implicate any such thing. In both everyday life and in the sciences, speakers often attribute bias where no negative judgment is intended. For example, when vision scientists investigate how our visual system manages to deliver perceptual knowledge, they sometimes refer to the “biases” of the system. Similarly, cognitive scientists attempting to understand how we reason inductively will routinely speak of our “inductive biases.” In neither context is there any suggestion that the biases in question are a bad thing, or that it would be better if we didn’t have them. On the contrary, it’s assumed that such biases play an indispensable role in exemplary reasoning and paradigmatic episodes of knowledge acquisition, and that without them, learning from experience would be impossible. Recall also Burge’s remark on the correct approach to the study of language:

[T]here is a methodological bias in favor of taking natural discourse literally, other things being equal. For example, unless there are clear reasons for construing discourse as ambiguous, elliptical, or involving special idioms, we should not so construe it (Burge 1979:116).
Here, far from being used pejoratively, the “bias” to which Burge refers is one that he straightforwardly endorses and recommends to others as a part of optimal practice. What can a proponent of the norm-theoretic account say about this aspect of our practices of attributing bias? As emphasized in Chapter 3, §1, the norm-theoretic account is an account of bias, in the pejorative sense of “bias.” As such, it is not an account of our practices of attributing bias in the pejorative sense, as opposed to the phenomenon itself, as it arises in the world, independently of our practices. Still less is the norm-theoretic account of bias an account of our bias-attributing practices more generally, including both pejorative and non-pejorative attributions. However, although the norm-theoretic account is not itself an account of our practices of attributing bias, a theorist who accepts it seems well-positioned to make sense of pejorative attributions of bias and (more generally) the evaluative, non-neutral uses of the word “bias” and its cognates. After all, if a bias involves a systematic departure from a genuine norm,1 then it’s obvious why the description of someone or something as “biased” involves a negative aspect or element. Namely, the negative evaluation carried by an attribution of bias derives from the fact that the person or thing departs or would depart from a salient norm; and even in those cases when we judge that this is all things considered for the best (as in the case of the moral but biased referee considered in Chapter 3, whose bias is directly responsible for saving innocent human lives), we can still recognize that person or thing as falling short of a genuine standard, one that we have reason to respect in other contexts even if not in this one. In contrast, the norm-theoretic account might seem to have simply nothing to offer when it comes to understanding uses of “bias” where no negative evaluation at all is in play or even hinted at, as when cognitive scientists discuss the “inductive biases” that are characteristic of exemplary human reasoning. Is there an objection or reason for concern here? One might think not. After all, why should we expect an account of bias in the pejorative sense to shed any light on non-
pejorative uses of the term “bias”? On the face of it, it would seem that non-pejorative uses of “bias” are simply a different topic, and therefore irrelevant to the credibility of an account of bias in the pejorative sense. However, I think that that line of thought lets a proponent of the norm-theoretic account off the hook too easily, for the following reason. Notice that, although some uses of “bias” suggest something negative while other uses do not, this phenomenon doesn’t seem like a case of pure semantic ambiguity akin to what we find with the word “bank.” That is, the fact that we sometimes use the word “biased” to convey a negative evaluation of the object of our attribution, but in other cases we don’t, seems importantly different from the fact that we sometimes use the word “bank” to talk about financial institutions and sometimes use the same word to talk about riverbanks. It’s an accident that we use the word “bank” to talk about both financial institutions and riverbanks. But it’s no accident that we use the same word to talk about both biased judges and biases of our perceptual systems, even if in the former case the use of the term will generally convey negative information about its object while in the latter case it won’t. In the light of that consideration, I suggested in Chapter 1 that a sufficiently comprehensive theory of bias and our practices of attributing it should fulfill the following desideratum. Given the plausible assumption that the evaluative uses of “bias” and the neutral, non-evaluative uses don’t simply reflect the kind of pure ambiguity that we find with “bank”, a comprehensive theory should provide us with some insight into this phenomenon. That is, it should tell us something about what the evaluative and the non-evaluative uses have in common, as well as what distinguishes them, beyond the obvious. As noted above, the norm-theoretic account does not purport to be a comprehensive theory of bias and bias attributions in this sense. Nevertheless, one might worry that a theorist who accepts it will be poorly positioned to make sense of the full range of data in this area, inasmuch as the general picture suggested by the norm-theoretic account seems to make the existence of non-pejorative uses of “bias” completely mysterious, or at best a matter of pure semantic ambiguity.

We have then two sets of challenges to the norm-theoretic account:
• Challenges to the effect that this way of thinking about bias is fatally flawed and unilluminating because it leaves us with a picture that isn’t even extensionally adequate: some systematic departures from genuine norms do not involve biases, and some biases involve no systematic departures from genuine norms.
• The challenge of whether someone who accepts the norm-theoretic account can tell a plausible story about the distinction between pejorative and non-pejorative uses of “bias,” a story that does justice both to what such uses have in common and how they differ, without positing the kind of ambiguity that we find in the case of the word “bank.”
This chapter addresses both sets of challenges. Indeed, it aspires to provide a unified treatment of the two. Along the way, it also addresses a number of what I take to be
independently interesting issues, including the pervasive role of symmetry considerations in our thinking about bias and the relationship between a theory of bias and a theory of our bias-attributing practices. In this respect, it is as much an exercise in exploration as it is a defense. In a nutshell, and to a first approximation, the story that I’ll develop and argue for runs as follows. A central aspect of our practices of attributing bias is this: we often attribute bias to an agent because they systematically depart (or are disposed to systematically depart) from a contextually salient standard for thought or behavior. In some cases, the standard in question is a genuine norm. In such cases, the agent counts as biased in the pejorative sense, in keeping with the picture put forward in the previous chapters. However, even if we don’t endorse the contextually salient standard as a genuine norm—indeed, even if we deny that it’s a genuine norm—we might still be happy to describe the agent as “biased,” provided that their departure from the salient standard can be naturally understood or conceptualized as a departure from symmetry. In that case, however, the description of the agent as “biased” lacks any evaluative or normative punch. Cases in which an agent counts as biased in the pejorative sense can thus be viewed as a special case of an even more general phenomenon, the special case in which the standard from which the agent departs is a genuine norm, as opposed to a symmetry standard that might or might not be a genuine norm. I begin by discussing some cases in which an agent systematically departs from a genuine norm, but where it doesn’t seem especially natural to describe them as “biased.”
2. Norms without Bias?

Does every systematic departure from a genuine norm amount to or involve a bias? It seems not. Consider the following examples:
(1) SYSTEMATIC MISPRONUNCIATION: When they were younger, two of my three children consistently mispronounced certain words or sounds when those words or sounds occurred embedded within certain linguistic constructions. Thus, my children both departed and were disposed to depart from the norms of correct English pronunciation, in systematic ways. However, it doesn’t seem natural—at least to my ear—to attribute bias to my children for that reason.
(2) LONG DIVISION: A person who consistently makes the same kind of mistake when doing long division systematically departs from the norms of arithmetic. But intuitively, they are not biased on that account.
The intuitive judgments that the protagonists in SYSTEMATIC MISPRONUNCIATION and LONG DIVISION are free from bias are disputable. After all, in both cases, it’s at least somewhat natural to describe the relevant agents as biased towards making certain mistakes as opposed to others, out of all of the possible ways of going wrong.2 Moreover, perhaps our
reluctance to describe these agents as biased is an artifact of our reluctance to apply the often negatively valenced term “biased” to people when their systematic mistakes are innocent ones, as in the case of a child mispronouncing words, or a person who is struggling to do math correctly.3 However, consider a less sympathetic character:
(3) UNBIASED SERIAL KILLER: In order not to arouse suspicion, a serial killer impeccably follows the moral norm Do not harm others, except when an opportunity to undetectably poison someone arises. He would never punch, kick, or mug anyone on the street; nor would he destroy or steal someone else’s property, commit identity theft, or engage in “white collar crime.” Nor would he commit murder by any other method, or in any other circumstances. When he does kill, he chooses his victims without regard to their race, sex, sexual orientation, religion, political ideology, and so on.
Compare the serial killer with an agent who manifests status quo bias. A person who impeccably follows the norm of maximizing expected value, except when deviating from it would preserve the status quo, counts as biased. The serial killer, no less than the agent with status quo bias, departs from a genuine norm in a way that’s patterned, predictable, and systematic. Nevertheless, intuitively, the person in the grip of status quo bias is a clear case of a biased agent, while the serial killer is not. More generally, it seems as though we are inclined to classify some systematic departures from norms as cases of bias, but not others. What should we make of that?

Let’s distinguish two different responses to such examples. According to Response #1, such examples are straightforward counterexamples to the claim that a tendency to systematically depart from a genuine norm is a bias. Of course, someone who adopts this response might retain the idea that there is some theoretically important connection between the two notions—for example, they might think that the former is necessary, even if insufficient, for the latter. Such a theorist might very well search for additional necessary conditions for genuine cases of bias, conditions that aren’t satisfied by the cases above, and which serve to distinguish such cases from genuine cases of bias. Alternatively, they might give up on the idea that there is any close connection between the two notions. Or they might retain the idea while denying that the difference between the two can be captured in independent terms.4

While I feel the pull of Response #1, I also want to encourage openness to another way of responding to examples like those above. According to Response #2, we should endorse the idea that tendencies to systematically depart from genuine norms are biases, notwithstanding the intuitive pressure that examples like (1)–(3) put on that idea. Rather than regarding such examples as counterexamples, we should regard them as showing us something about our practice of attributing bias: namely, that we are more likely to apply the term “bias” to some systematic departures from norms than to others.

All else being equal, the first response is preferable to the second, for the following
reason: it promises a potentially simpler and more straightforward account of the relationship between what bias is and the circumstances in which we attribute it. What, if anything, recommends Response #2 over Response #1? Here are two lines of thought that support it.

First, when we’re concerned with a genuine norm and an agent to whom it applies, the difference between the agent’s complying with that norm and their failing to comply with it is a genuine distinction: it carves reality at the joints. Similarly, the difference between an agent’s systematically departing from a genuine norm and their randomly departing from it is a genuine distinction, as is the tripartite distinction between complying, departing randomly, and departing systematically: they too carve reality at the joints. The fact that an agent departs from a norm systematically, as opposed to randomly or not at all, is often an interesting and important fact about the agent, in everyday life, in the sciences, and for legal purposes. Given this, we would expect our conceptual scheme to have a way of marking this phenomenon, and the concept of bias has at least as good a claim as any other to filling this role. Therefore, we have good reason to accept the idea that tendencies to depart from genuine norms in systematic ways are biases, notwithstanding the fact that we more readily apply the label “bias” to some such tendencies than to others.

Second, and relatedly, it’s plausible that an identification of biases with tendencies to systematically depart from norms or standards corresponds to at least one notion of bias that’s frequently employed in the social sciences and elsewhere. In fact, given how broadly some social scientists and psychologists use the term “bias,” I suspect that it would be extremely difficult, and perhaps impossible, to come up with a less inclusive account (e.g. one that imposes additional necessary conditions) that doesn’t at the same time exclude at least some of the things that some working scientists would classify as biases. If that’s true, then an account that identifies biases with tendencies to systematically depart from norms might provide a plausible explication of at least one concept of bias, in something like Carnap’s (1950) sense of explication. As understood by Carnap, an explication of a concept doesn’t aim to perfectly capture or accommodate all of the subtleties and nuances of ordinary usage; indeed, it typically won’t even be equivalent in its extension to the pre-theoretical notion. Rather, the aim of an explication is to capture the theoretically interesting and important notion in the vicinity, and it makes sense from the standpoint of theorizing to carve things up in the relevant way, even when by carving things up in that way one departs at the margins from ordinary usage.5

I feel the pull of both Response #1 and Response #2. I won’t attempt to further adjudicate between them, beyond what I’ve said here. What seems to me to be an indisputable datum in this area is the following: we much more readily apply the term “bias” to some systematic departures from norms than to others. For this reason, even a theorist who accepts Response #2, or who is prepared to endorse an identification of biases with dispositions to systematically depart from norms as a kind of Carnapian explication of the notion of bias, cannot simply ignore examples like the ones considered in this section.
For insofar as we more readily apply the term “bias” to some cases involving systematic departures from norms than to others, as such examples suggest, a good account of our practices of attributing bias should have something to say about that fact.
Given then that we more readily apply the term “bias” to some systematic departures from norms than to others, what makes the difference in particular cases? My own suspicion is that our relative willingness to apply the term “bias” in particular cases is sensitive to a multitude of diverse factors, some of which will turn out to be more philosophically interesting than others.6 Plausibly, who and what we’re willing to describe as “biased” in specific contexts is a messy affair, and one that’s likely to resist capture in a clean and counterexample-proof set of necessary and sufficient conditions. Instead of trying to offer an exhaustive catalog of the factors and considerations that can make a difference (something that I doubt is possible in any case), I want to look in some detail at one factor that I believe often plays a central role, and which is of significant philosophical interest in its own right. In my view, our willingness to attribute bias in particular cases is highly sensitive to symmetry considerations. The next section develops this proposal and explores the connections between bias and symmetry more generally.
3. Symmetry

As a warm-up for thinking about more complicated cases involving agents, consider first the simple paradigm of the biased coin. When the coin is flipped, there are two possible outcomes. It’s characteristic of the unbiased coin that there is a symmetry between those two outcomes, inasmuch as they are equally probable. In the case of the biased coin, the symmetry is absent. The same holds when we’re concerned with subjective probability as opposed to objective or physical probability. A person who is disposed to invest more credence in the possibility that a fair coin will land heads than in the possibility that it will land tails is biased in favor of heads. A natural thought is that we count this as a bias because it involves being disposed to violate a symmetry constraint on rational believing.

My conjecture is that the relevant story generalizes. Even in more complicated cases, paradigmatic instances of bias typically involve departures from standards that amount to symmetry violations, while being unbiased involves respecting or preserving certain symmetries and invariances.

Consider an admissions process. The simplest way to have an unbiased admissions process is when every applicant has an equal chance of being admitted, as when offers of admission are determined by a fair lottery. But of course, even if some applicants—the better qualified ones—have a greater chance of being admitted than others, the process might still be unbiased. In that case, even though applicants will differ in their chances of being admitted, other symmetries will be preserved. For example, if an admissions process is unbiased with respect to ethnicity, then applicants of different ethnic backgrounds with equal qualifications will have equal chances of gaining admission: the chance of an applicant’s getting in will be invariant with respect to their ethnicity. In the case of the biased admissions process, this symmetry won’t be preserved.
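Put slightly more formally (a minimal gloss, on the assumption that the “chance” of admission can be modelled as a conditional probability): an admissions process is unbiased with respect to ethnicity just in case, for every qualification level $q$ and any two ethnicities $e_1$ and $e_2$,

\[
\Pr(\text{admitted} \mid q, e_1) = \Pr(\text{admitted} \mid q, e_2),
\]

and a process biased with respect to ethnicity is one for which this invariance fails for some $q$.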
Contrast the cases of SYSTEMATIC MISPRONUNCIATION, LONG DIVISION, and the UNBIASED SERIAL KILLER considered in §2, in which many find it counterintuitive to attribute bias to the agents in question, notwithstanding their systematic departures from genuine norms. When my son mispronounces certain “r” words, although he’s systematically departing from a genuine norm, his departures aren’t naturally conceptualized as symmetry violations. The same holds for someone who repeatedly makes the same mistake when doing long division, or a serial killer who relies on a particular method of murder. My conjecture is this: when a person is disposed to systematically depart from a standard or norm, the more their departures are naturally understood as deviations from symmetry, the more natural it will be to count the disposition a bias, all else being equal.7

In order to test this conjecture, let’s briefly survey how many familiar biases can be seen as tendencies to systematically depart from norms in ways that amount to symmetry violations. Consider first some biases that we have already had occasion to discuss:

• Status quo bias, the disposition or tendency to prefer the status quo to equally good or better states of affairs. Faced with a choice between the status quo and another, equally good outcome, the agent with status quo bias will prefer the status quo; their preferences thus fail to respect the symmetry between equally valuable outcomes.

• The bias blind spot, the tendency to think that one is less biased than other people. A person who is no less biased than other people but who consistently judges that other people’s political opinions are distorted by bias while assuming that his own are free from such distorting influences in effect posits an asymmetry between himself and others that doesn’t correspond to anything in the phenomena themselves.

• Confirmation bias, the tendency to search for, interpret, and recall information in a way that confirms or favors hypotheses that one already believes. A person who exhibits confirmation bias will characteristically fail to respect two related symmetries. First, they fail to respect the symmetry that obtains between a body of evidence that confirms a believed hypothesis and another body of evidence, of equal relevance and probative force, that would disconfirm it. Second, a hypothesis that they currently believe will be treated differently from an equally plausible alternative hypothesis that they don’t currently believe; in this respect, their cognitive behavior doesn’t reflect the symmetry between the two hypotheses.

How should we understand the notion of symmetry in play here? Informally, we can often think of a departure from symmetry as a failure to treat like cases alike, for some contextually relevant respect of likeness or resemblance. Somewhat less informally, we can think of any symmetry as partitioning the domain of discourse into equivalence classes. For example, the class of possible actions open to an agent can be sorted into equivalence classes according to their expected value, with each equivalence class containing all and only those possible actions that are alike in this respect. Similarly, the propositions that are possible objects of belief for the agent can be sorted into equivalence classes according to how well supported they are by her evidence, with each equivalence class containing as members all and only those propositions that are supported to the same degree as every other member of the class. It’s characteristic of the believer in the grips of confirmation bias to invidiously discriminate among propositions belonging to the same equivalence class, depending on whether she already believes the proposition or not.
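The equivalence-class picture lends itself to a toy illustration. In the following sketch, the propositions, support scores, and credence “bonus” are invented purely for illustration: propositions are partitioned by how well the evidence supports them, and an evaluator who rates two members of the same class differently, depending on whether she already believes them, thereby violates the within-class symmetry.

from collections import defaultdict

# Invented data: each proposition carries the partitioning key (its level
# of evidential support) and a flag for whether the agent already believes it.
propositions = [
    {"claim": "p1", "support": 0.7, "already_believed": True},
    {"claim": "p2", "support": 0.7, "already_believed": False},
    {"claim": "p3", "support": 0.3, "already_believed": False},
]

def equivalence_classes(props):
    # Partition the domain by the symmetry's key: equal evidential support.
    classes = defaultdict(list)
    for p in props:
        classes[p["support"]].append(p)
    return classes

def biased_credence(p):
    # A confirmation-bias-style evaluator: a boost for propositions that the
    # agent already believes, despite their equal evidential support.
    bonus = 0.2 if p["already_believed"] else 0.0
    return min(1.0, p["support"] + bonus)

# An unbiased evaluator's credences would be constant within each class;
# this evaluator's are not, and that is the symmetry violation.
for level, members in equivalence_classes(propositions).items():
    credences = {p["claim"]: biased_credence(p) for p in members}
    if len(set(credences.values())) > 1:
        print(f"within-class asymmetry at support {level}: {credences}")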
Similarly, it’s characteristic of the agent in the grips of status quo bias to invidiously discriminate among possible actions belonging to the same equivalence class, depending upon whether performing them would serve to preserve the status quo or not.

The asymmetry involved in the bias blind spot is a kind of self-other asymmetry. Self-other asymmetries are characteristic of many other biases that psychologists have documented as common among human beings.8 It’s unsurprising, even trivial, that biases that involve the positing of self-other asymmetries are naturally conceptualized in terms of symmetry violations. But the conceptualization is no less natural when the objects of the biased person’s evaluation are his own efforts (without any implicit comparison to the efforts of anyone else) or the efforts of other people (without any implicit comparison to his own). The former is exemplified by self-serving attributional bias, while the latter is exemplified by recency bias:

• Self-serving attributional bias. At Princeton University, every student is required to complete a year-long senior thesis project in order to graduate. As one might expect, this policy ensures that over the course of their career the typical Princeton professor will direct a large number of senior theses, the quality of which varies enormously. At one end of the spectrum, the very best theses are of such a quality that they could be, and sometimes are, published in leading professional journals in the relevant discipline.9 At the other end of the spectrum are theses that reflect on every page their authors’ need to submit something in order to graduate. After my first several years of supervising theses, I realized that, although I often felt significant pride in the successful theses written under my supervision (and was conscious of the various ways in which I had genuinely helped the student develop their ultimately successful project, etc.), I generally felt little if any responsibility for any of the poor senior theses written under my direction, and tended to view those outcomes as reflecting the indifference and other shortcomings of their undergraduate authors. That is, I tended to see my own agency as a contributing factor to the successful outcomes but not to the less successful outcomes. In offering fundamentally different types of explanations for above-par performances and below-par performances, my explanatory practice failed to reflect the symmetry between the two kinds of events.10

• Recency bias. Charged with compiling a list of the greatest rock musicians of all time, a critic generates a list that’s heavily slanted towards currently active artists. In particular, a significant number of contemporary artists are included while earlier artists with an equal or stronger case for inclusion are left off. In failing to respect the symmetry that exists between the contemporary artists and earlier artists of equal achievement, the critic betrays a recency bias.

Many other theoretically interesting and practically important biases that have been identified and documented by psychologists are naturally conceptualized as systematic departures from norms that involve symmetry violations, including the halo effect,11
hindsight bias,12 the endowment effect,13 end aversion bias,14 and hostile attribution bias.15 Cultural, racial, ethnic, and gender biases are also naturally understood in terms of symmetry violations, as involving systematic failures to treat like cases alike. Consider, for example, the phenomenon of epistemic injustice, as explored by Miranda Fricker (2007). A person who discounts credible testimony because it comes from a woman (but who would have believed equally credible testimony had it come from a man) fails to respect the symmetry between the cases.

In addition to biases that are familiar from everyday life or the psychology laboratory, there are also biases (or putative biases) that have primarily been discussed by philosophers. In these discussions, the theme of an intimate connection between bias and symmetry violations also looms large. Consider the following examples:

• Lucretius’ symmetry argument against the fear of death/our future non-existence. According to Lucretius, our prenatal non-existence is relevantly similar to our posthumous non-existence. Given this symmetry, our attitude towards our future non-existence and our past non-existence should be the same. But no reasonable person is troubled, let alone deeply troubled, by the fact that there are past times when they didn’t exist. However, many people are troubled, and some are deeply troubled, by the fact that there will be future times when they won’t exist. Given the symmetry between past and future, the fact that people are troubled by their future non-existence, but untroubled by their past non-existence, reflects an irrational bias. If we’re rational, we’ll change our attitudes towards our death and future non-existence in order to bring them into alignment with the relative equanimity with which we view our past non-existence.16

• Time biases. My concern for my future self seems to outstrip my concern for my past self in various ways. For example, given the medical necessity of some excruciating operation, I would much prefer that that procedure, and the accompanying agony, be in the past rather than in the future. The fact that our preferences fail to respect the apparent symmetry between our past selves and our future selves, and in fact rather strongly favor the latter over the former, has led some philosophers to write of our “bias towards the future.” Other “time biases” similarly involve violating some (alleged) symmetry standard.17

• Symmetry arguments and the epistemology of peer disagreement. Consider the question of how you should respond in cases of peer disagreement, or to the discovery that an “epistemic peer” holds an opinion that conflicts with your own. Any way of specifying what makes two people epistemic peers with respect to a topic will generate a set of equivalence classes of all and only those who stand in the relation of peerhood to one another.18 Some hold that, given the symmetries that hold between peers, it would be unreasonable to do anything other than to suspend judgment in cases of peer disagreement.19 According to this line of thought, a person who habitually sticks with their original opinion, or who adopts a new opinion that’s closer to their original opinion than to the original opinion of their peer, exhibits an irrational bias in favor of
their own opinions.

Thus, the theme of a connection between bias and symmetry violations is prominent in philosophy as well as outside of it. Finally, I note that, although here we’ve focused on the biases of people, many biases that are not primarily properties of people or groups of people are also naturally conceptualized in terms of symmetry violations.20

Are there cases where thinking in terms of symmetry leads us astray? As noted in Chapter 1, a bias that favors some person or thing over salient alternatives might manifest itself in a judgment or preference, even though the content of that judgment or preference doesn’t itself favor that person or thing over the alternatives. Recall the case of the biased parent:

BIASED PARENT: A parent is biased when it comes to evaluating his own child’s musical performances. When his child performs at a recital, the parent’s bias leads him to conclude that the performance was just as good as that of the best student, whose performance was in fact objectively superior to all of the others and was recognized as uniquely best by the other parents.
Here the parent’s bias in favor of his own child manifests itself not in a judgment of superiority but in a judgment of equality or parity. Is this an example where the symmetry criterion yields the wrong results?21 No, for even in this case the parent is disposed to depart from symmetry. If a third child, unrelated to the parent, had performed exactly as his child had performed, the parent would not have judged that performance equal to the performance of the best student. It’s not as though the parent simply has eccentric standards about what makes for a quality performance, standards that his child happened to satisfy to a high degree. Indeed, if that had been the explanation of the parent’s original judgment, then the attribution of bias in favor of his own child wouldn’t be warranted, although it would still be correct to describe the parent as mistaken, and mistaken in a way that favored his child.22 Rather, an essential part of his being biased in favor of his own child is that his child was the one who performed in this way, as opposed to someone else; had someone else performed that way, his reaction to the performance would have been more in line with the reactions of the other parents. Thus, even in judging that his child’s performance is equal to the best performance, he’s disposed to depart from symmetry, for his judgment of the quality of the performance isn’t invariant with respect to the identity of the person who delivered it; he would judge an identical (and therefore, equally weak) performance differently if someone else were responsible for it. Inasmuch as the biased parent wouldn’t treat an equally good performance by another child as on a par, the case of BIASED PARENT also exemplifies the connection between bias and considerations of symmetry.

Finally, the importance of symmetry considerations in our thinking about bias is also seen in various heuristics or tests that have been suggested for helping us to detect or overcome our biases. Thus, somewhere in his voluminous popular writings, Bertrand Russell offers us the following advice: in reading news stories about international events and the actions of different countries on the world stage, we should acquire the habit of mentally replacing the names of countries towards which we have generally favorable associations with the names of countries towards which we have generally unfavorable associations (and vice versa), in
order to gauge whether our reactions to their actions and policies would remain the same. In the Preface to his exegetical study Marxism: Philosophy and Economics (1985), the social theorist Thomas Sowell notes that he has been writing about Marx for over twenty-five years, during which time his own political and economic views (and as a result, his level of sympathy for Marx) have changed radically. He reports his satisfaction at finding that, notwithstanding these radical changes in his own ideological commitments, his basic interpretation of what Marx said has changed very little over the same period (6–7). In context, the clear suggestion is that, if an author’s interpretation of some text or texts isn’t invariant with respect to changes in the author’s own political and ideological commitments, that would provide grounds for skepticism about the author’s interpretation.
4. Bias without Norms?

According to the norm-theoretic account of bias, paradigmatic instances of bias involve systematic departures from norms. However, as noted in §2, we are more likely to attribute bias to agents for some such departures than for others. In §3, I suggested that we are especially likely to attribute bias to an agent when we conceptualize their systematic departure as a symmetry violation, or a failure to treat like cases alike, for some contextually salient respect of likeness or similarity. On the other hand, just as there are some cases of systematic departures from norms that we wouldn’t ordinarily describe as biases, there are, conversely, some cases in which we’re happy to describe an agent as biased, even though there is no genuine norm from which they depart. As we’ll see, the notion of symmetry is helpful in understanding these cases as well.

As a first step towards putting the kinds of cases that I want to consider on the table, consider the more familiar case of Buridan’s ass:

BURIDAN’S ASS: Buridan’s ass finds itself perfectly equidistant from two equally appealing bales of hay, either of which would satisfy its growing hunger. Things are perfectly symmetrical between the bale of hay to its right and the bale to its left. Lacking any reason to prefer either bale to the other, the ass is paralyzed by indecision and ultimately dies of starvation.
It’s generally agreed that it’s rationally permissible for the ass to choose either bale of hay; the one thing that’s not rationally permissible is to choose neither. Notice that by refraining from choosing either option over the other, the ass’s behavior reflects the symmetry that obtains between the two options and the ass’s recognition of that symmetry. Indeed, it’s at least somewhat tempting to describe the ass as too unbiased for its own good. However, bias isn’t necessary in order to solve the practical problem of rationally arbitrary choice. Consider, for example, the following two cases:

ASS #2: Finding itself in the same situation as Buridan’s ass, another ass saves itself by choosing randomly. Although it chooses the bale of hay on its right, this isn’t a modally robust fact; it might just as easily have chosen the one on its left. Moreover, if it found itself in a series of such situations, it would continue to choose randomly.

ASS #3: When confronted with a series of such situations, a third ass consistently alternates what it chooses. On
any given occasion, it chooses the bale to its left if and only if it chose the bale to its right on the immediately preceding occasion, and it otherwise chooses the bale to its right.
The second and third asses lie at opposite ends of a certain spectrum. The former always chooses randomly, while the latter never does; as a result, the latter’s choices are completely predictable, while the former’s never are. Despite these differences, they are alike in the following respect: neither of them manifests a consistent bias for bales on the right or for bales on the left. Like Buridan’s ass, their behavior and patterns of choice reflect the underlying symmetry between the two options. Consider finally a fourth ass:

ASS #4: Whenever it finds itself in a Buridan’s ass type situation, Ass #4 is strongly disposed to choose the bale to its right over the bale to its left. As a result, it repeatedly chooses the bale to its right in a long series of such choice situations.
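The contrast among the surviving asses can be made vivid with a minimal simulation; the function names and trial count below are arbitrary, chosen purely for illustration.

import random

def random_chooser(history):
    # Ass #2: picks a side at random on each occasion.
    return random.choice(["left", "right"])

def alternating_chooser(history):
    # Ass #3: chooses left iff it chose right on the preceding occasion.
    if history and history[-1] == "right":
        return "left"
    return "right"

def right_disposed_chooser(history):
    # Ass #4: always takes the bale on its right.
    return "right"

def share_of_right(chooser, trials=10000):
    # Run a long series of choice situations and report the share of
    # occasions on which the bale to the right was chosen.
    history = []
    for _ in range(trials):
        history.append(chooser(history))
    return history.count("right") / trials

for name, chooser in [("Ass #2", random_chooser),
                      ("Ass #3", alternating_chooser),
                      ("Ass #4", right_disposed_chooser)]:
    print(name, share_of_right(chooser))

Over a long series of trials, the random and alternating strategies each give bales on the right a share of choices at or near 0.5, mirroring the symmetry of the options, while the fourth strategy gives them a share of 1.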
Unlike the first three asses, this ass’s behavior does fail to reflect the symmetry that obtains between its options. At least in part for this reason, it’s also natural to describe the ass as having a bias in favor of bales to its right. Given that Ass #4 can be described as biased, the norm-theoretic account suggests that we should find some norm with which the ass fails to comply. But is there some genuine norm which it violates? By hypothesis, the bales to its left and the bales to its right are equally worthy of choice. Choosing randomly or alternating between the two—or for that matter, not choosing at all—seems to reflect this symmetry in a way that it is not reflected in the choices of the agent who consistently favors bales on the right. Given the equal choiceworthiness of the options, does it follow that the agent who is biased in favor of one option over the other shouldn’t choose as it does, or that it has at least some reason to alternate or choose randomly, or that things—or the agent itself—would in some respect be better if it did so? More generally, should rationally arbitrary choices be made in a way that reflects their arbitrariness, as opposed to in a biased way that doesn’t? I don’t think so.

Of course, things might be different if the agent’s tendency to choose bales of hay on the right were based on some kind of mistake or confusion on its part, for example to the effect that bales on the right really are more choiceworthy in some respect. But as I’m imagining it, that’s not true of the case: the ass is completely clearheaded about such things; it simply has a brute bias in favor of bales to its right.23

If that’s correct, then we have a case in which we can correctly attribute bias to an agent in virtue of the fact that their dispositions and choices fail to reflect a salient symmetry, but where they don’t systematically depart from any genuine norm. Relatedly, notice that here the description of the agent as “biased” seems to lack any evaluative force or punch. Rather, the description of the agent as biased in favor of bales to its right seems to function as a straightforwardly factual, non-normative, and non-evaluative description of the agent’s dispositions and patterns of choice.24

The example thus seems to bring together the two challenges to the norm-theoretic account that I distinguished at the outset of this chapter. First, inasmuch as it’s an example in which an agent counts as biased even though there is no genuine norm from which they
depart, it seems to show that the connection between biases and norms is less tight than the norm-theoretic account suggests. Second, it’s an example of our willingness to describe something as biased without engaging in negative evaluation, an aspect of our practice that seems to escape the norm-theoretic approach entirely. In the next section, I want to offer a unified treatment of these two issues.
5. Pejorative vs. Non-Pejorative Attributions of Bias

Here is a picture of our bias-attributing practices that I think is essentially correct, albeit incomplete. Generally speaking, we are happy to attribute bias to an agent when they systematically depart from a contextually salient standard, particularly when their departure is naturally conceptualized as a symmetry violation. In some such cases, the contextually salient standard is a genuine norm; in other cases, it isn’t. When the description of an agent as “biased” is intended, as it often is, not as a mere description, but as one that also conveys a negative evaluation, then the attributor of bias in effect commits himself to the claim that the standard from which the agent departs is a genuine norm: if there is no genuine norm from which the agent departs, then the claim or suggestion that they are biased in some objectionable way is false.

When an agent is described as “biased,” but there is no genuine norm from which they depart, there are two possibilities. First, it might be that the attributor mistakenly believes that a genuine norm has been violated even though no such thing has occurred. Second, it might be that the attributor does not believe that any genuine norm has been violated, in which case the attribution lacks any normative or evaluative punch; it functions as a neutral description. Consider first a case in which the attributor of bias mistakenly believes that some principle is a genuine norm even though it isn’t:

APPLIED ETHICIST: Although there is no genuine moral norm which entails that we are obligated to be equally concerned about the suffering of innocent human beings and the suffering of non-human animals, a particularly radical applied ethicist mistakenly believes that there is.25 He therefore attributes bias to the other members of his family, who tend to be more concerned about the suffering of innocent human beings than about the suffering of non-human animals.
Although the general phenomenon exemplified by this case is common and practically important, as far as I can see it raises no new theoretically interesting issues or questions: it’s simply what we would expect to find, given that people can have false beliefs about which putative norms are genuine, just as they can have false beliefs about virtually anything else.

A more philosophically interesting aspect of our bias-attributing practices is this: even when we don’t endorse a contextually salient symmetry standard as a genuine norm—and indeed, even if we explicitly deny that it’s a genuine norm—we are still often happy to count a tendency to systematically depart from it as a bias. This, I think, is what we find in the Buridan’s ass case described above, in which we unhesitatingly attribute bias to the agent who consistently resolves rationally arbitrary choices by choosing goods on its right over
equally desirable goods to its left, even if we deny that this pattern of choice violates any genuine norm. But it is what we find elsewhere as well—for example, when we turn from principles of rational choice to moral principles. Suppose, for example, the story of the applied ethicist continues as follows:

THE APPLIED ETHICIST’S SISTER: The applied ethicist’s sister grows weary of being accused of bias, where this is clearly intended as a moral criticism. However, rather than denying the charge, she responds by owning it: “Yes, I’m biased in favor of human beings. I’m much more concerned about their suffering than about the suffering of non-human animals.” Perhaps she adds, for good measure: “But we should be biased in favor of our fellow human beings!” or, somewhat less strongly, “But that’s a perfectly reasonable bias to have!”26
The applied ethicist’s sister admits that she’s biased in favor of human beings, but this need not be a confession on her part: it’s not as though she agrees that she’s committed a moral transgression by favoring human beings in the way that she does. Indeed, she might very well think that if she failed to favor human beings in this way, that failure would amount to a significant moral transgression. Nevertheless, she might be perfectly happy to describe the moral stance or practice that she endorses in terms of “biases,” precisely because it systematically departs from the kind of hyper-egalitarianism that she repudiates.

In owning the charge of bias, is the sister simply engaged in loose speech, or making a kind of rhetorical move that’s designed to divest the charge of its usual sting? Before drawing any such conclusions, let’s look at another example, this one non-hypothetical. As noted in Chapter 3, in the “heuristics and biases” tradition that derives from the seminal work of Tversky and Kahneman (1974),27 the ideal of probabilistic coherence is assumed to provide genuine norms of rational belief. For example, someone who gives greater probability to the proposition that Linda is a bank teller and a feminist than to the proposition that Linda is a bank teller believes irrationally (Tversky and Kahneman 1983). Similarly, the prescriptions of orthodox decision theory are assumed to provide genuine norms on preference and action. In this tradition of research, this framework of norms is typically taken for granted as part of the assumed background, and human cognitive biases are identified as systematic departures from those norms.
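The coherence norm at issue in the Linda case is the conjunction rule of the probability calculus: for any propositions $A$ and $B$,

\[
\Pr(A \wedge B) \le \Pr(A),
\]

so a credence function that ranks bank teller and feminist above bank teller is incoherent no matter what the evidence about Linda may be.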
Consider next the challenge to the Tversky and Kahneman paradigm from Gerd Gigerenzer and his collaborators.28 Gigerenzer explicitly denies that either the probability calculus or decision theory provides genuine norms for human agents or believers. Nevertheless, he is happy to speak of systematic deviations from the relevant principles as “biases.” It’s just that they are good biases.29

Notice that in the actual case of Kahneman and Tversky and Gigerenzer, as well as in the hypothetical case of the applied ethicist and his sister, there is a substantive, non-terminological disagreement about the status of the relevant principles. In each case, although one side to the dispute regards the relevant principles as genuine norms, the other side regards them as spurious. This might suggest that the willingness of the other side to describe departures from what they regard as spurious principles as biases should be understood as a willingness to engage in a kind of pretense for conversational purposes. Given that it’s understood by both sides that there are substantive disagreements that are not merely terminological, perhaps there are conversational pressures not to allow things to get sidetracked by a merely terminological dispute about whether to call something “a bias,” and this leads the one side to acquiesce and speak in ways that they wouldn’t in other contexts. (This seems especially plausible in the case of Gigerenzer, who wishes to substantively and critically engage with a research tradition in which the idea that the probability calculus and decision theory provide genuine norms for human belief and behavior is deeply entrenched and often taken for granted, as part of the assumed background of normal science.)

Although I suspect that there is some truth in this neighborhood, the following is also true: there are some contexts in which we’re happy to speak of “biases” even once it’s generally recognized or conceded that the salient principle is not a genuine norm. Again, this is exemplified by the Buridan’s ass case discussed above. An agent who makes rationally arbitrary choices by consistently choosing goods to its right over qualitatively identical goods to its left departs from a salient symmetry standard that is satisfied by an agent that makes such choices randomly. The first agent thus systematically fails to treat like cases alike, and inasmuch as this is true, we are happy to describe them as biased. But our willingness to do this, and the propriety of our doing it, does not seem to depend upon our being in dialogue with (or even on the existence of) someone who believes that there is a genuine norm corresponding to the salient symmetry standard from which the agent departs.

Compare our willingness to predicate bias of inanimate objects such as dice and coins. Surely it’s not a coincidence that we use the same word to describe both a coin that’s disposed to land heads rather than tails and a judge who is disposed to rule in favor of the prosecution even before hearing the facts of the case. Of course, coins aren’t subject to genuine norms in anything like the way we are subject to genuine norms in our capacities as believers and agents, or as occupiers of certain social roles such as that of a judge. Nevertheless, there is the following robust similarity between the cases. The ideally rational agent or believer, or the ideal judge (who invariably rules as they should, given the facts of the case plus the relevant pieces of law), provides a contextually salient standard against which the behavior of actual believers, agents, and judges can be compared. A believer, agent, or judge who is disposed to depart from that standard in patterned and predictable ways counts as biased. Similarly, when we describe the coin as biased, we attribute to it a disposition to depart, in a patterned and predictable way, from a certain contextually salient standard: namely, the standard provided by the unbiased coin, which is disposed to land heads exactly 50 percent of the time. As this and other examples suggest, the relevant standards need not be or reflect deep features of the True Normative Order, or anything of the sort—although in some cases they will. A world in which most coins are disposed to land heads exactly half the time need not be better (even in that respect and all else being equal) than a world in which most coins favor heads over tails. And to the extent that we think that a world containing mostly fair coins is better (say, because of our practice of using fair coins as randomizing devices in various contexts), we can easily imagine having a different set of purposes that would rationalize the opposite preference.
At the beginning of this chapter, I noted a desideratum that any good account of our bias-attributing practices should fulfill. In particular, a good account of our bias-attributing practices should help make sense of and give us insight into the following facts:

(i) Many attributions of bias are pejorative—they seem to presuppose, imply, or implicate that being biased is in some respect a negative thing, but

(ii) Many attributions of bias are not pejorative, and

(iii) The difference between the evaluative uses of “bias” referred to in (i) and the non-evaluative uses referred to in (ii) isn’t a matter of pure semantic ambiguity akin to the kind of ambiguity exemplified by the word “bank.”
Relative to this desideratum, there are two opposite ways an account of our bias-attributing practices that purports to explain both the evaluative and the non-evaluative uses can go wrong. First, even if the account provides a story that would account for the evaluative uses, and also a story that would account for the non-evaluative uses, the stories in question might not mesh in a way that does justice to the idea that the different uses have something important in common. The fact that we use the word “bias” in both cases isn’t some quirk of our linguistic practice or some purely historical artifact; rather, it reflects something significant about what it takes to count as biased in different circumstances. On the other hand, an account of our bias-attributing practices might make the opposite mistake and fail to do justice to the importance of the difference between bias attributions that serve as evaluations (and which often have normative significance for what we should do or think) and bias attributions that don’t.

I believe that the account offered here fulfills the desideratum in a relatively satisfying and plausible way: whatever objections might be raised against it, and whatever else might be said in its favor, it’s at least the kind of story that we should be looking for, with respect to the balance that it strikes in accounting for the similarities and dissimilarities of the two kinds of bias attributions. According to the account on offer, there is a strong structural similarity—indeed, a structural identity—that’s in play when we’re concerned with evaluative uses and when we’re concerned with non-evaluative uses. In paradigmatic cases, both the evaluative uses and the non-evaluative uses involve judgments to the effect that someone or something is systematically departing, or disposed to systematically depart, from some contextually salient principle or standard, in a way that’s naturally conceptualized as a deviation from symmetry. In this respect, the account does justice to what the evaluative and the non-evaluative uses have in common. On the other hand, there is, of course, often all the difference in the world between systematically departing from a genuine norm and systematically departing from a principle or standard that isn’t a genuine norm. In this way, the account also does justice to the importance of the distinction between the evaluative and the non-evaluative uses of “bias” and the fact that the former often have normative significance in a way that the latter generally do not.

Appreciating the significant similarities and dissimilarities between something’s being biased in the pejorative sense and its being biased in an innocuous sense is also crucial for understanding the relationship between bias and knowledge. In particular, it’s crucial for understanding the way in which bias can seem both to threaten and to enable the acquisition of knowledge. The relationship between bias and knowledge is the central theme of the third and final part of this book.
1 Again, it’s important in this context that the relevant sense of “norm” is not the statistical or social expectation sense. See the Introduction and Chapter 3, §1 for discussion.

2 Consider my children’s tendency to systematically depart from the norms of correct English pronunciation. Suppose that, instead of assessing their actual speech, we ask them a series of questions about correct pronunciation—not about the underlying norms, but about whether such and such a way of pronouncing this word in this context is correct, and so on. Suppose that, in answering these questions, they make all and only the mistakes one would expect them to make, given their actual linguistic practice. Here, I think, it would be natural to count them as biased about such questions. Certainly, it would be natural to count them as biased if what they are doing is answering the questions by consulting their own practice (“How would I pronounce this?”), for in that case, it would be natural to say: when it comes to answering the questions correctly, they are biased by their own (mistaken) practice. Of course, one might attempt to draw the line there, and say that although they are biased by their own practice when it comes to answering the questions correctly, their systematically deviant practice isn’t itself biased. However, when (i) systematically mistaken practice and (ii) systematically mistaken views about correct practice align perfectly, it seems unattractive to place them on opposite sides of the biased/not-biased distinction.

3 Thanks to an anonymous referee for raising this possibility.

4 With respect to the last possibility, compare the view about the relationship between knowledge and true belief endorsed by Williamson (2000). According to that view, although every case of knowledge is a case of true belief, there are no additional, independently specifiable necessary conditions for knowledge which are jointly sufficient for it when taken in conjunction with truth and belief.

5 Johnson (2020) offers her account of bias as an explication.

6 As an example of a factor that plausibly plays a significant role here but that I think would not be especially interesting from a philosophical point of view, consider the suggestion made by the anonymous referee mentioned above, according to which we’re relatively reluctant to apply the term “biased” to struggling agents whose systematic departures from genuine norms involve innocent mistakes.

7 But can’t we construe the systematic departures from norms exhibited by the protagonists in SYSTEMATIC MISPRONUNCIATION, LONG DIVISION, and UNBIASED SERIAL KILLER as symmetry violations? Various proposals for doing this have been made to me. I certainly don’t insist that there is no way of doing so. Rather, what the current proposal suggests is the following: the more one conceptualizes or thinks of these cases in terms of symmetry violations, the more natural it will be to describe their protagonists as “biased,” all else being equal.
8 Other common biases that centrally involve self-other asymmetries include the optimistic bias, or the tendency to claim that one is less likely than one’s peers to suffer harm (Weinstein 1987); the above-average effect, the tendency to hold favorable views of one’s own abilities compared to those of others (Williams and Gilovich 2008); its counterpart, the below-average effect (Kruger 1999); the holier-than-thou effect, the tendency to think that one is more likely to engage in unselfish and kind behavior than one’s peers (Epley and Dunning 2000); and our tendency to arrive at biased estimates of individual contributions to group projects, as when two authors who contribute equally to a co-authored paper each judge that their own contribution is greater than the other’s (Ross and Sicoly 1979).

9 See, e.g., Nebel (2015), by Jake Nebel, Class of 2013.

10 More generally, self-serving attributional bias is “the tendency to interpret events in a way that assigns credit for success to oneself but denies one’s responsibility for failure, which is blamed on external factors” (https://dictionary.apa.org/self-serving-bias). For discussion and further references, see Gilovich et al. (2016):169–71.

11 In the context of an academic job search, a gushing letter of recommendation written by a scholar who has made seminal contributions in her own research is deemed highly credible by the search committee. An equally credible letter, written by another scholar whose own research contributions aren’t as admired, is judged to be “over-the-top.” Because their judgments fail to reflect the symmetry between the equally credible letters, the committee and its judgments count as biased. For a discussion of some of the philosophically interesting issues surrounding the halo effect, see especially Moller (2013). For an overview of the psychological phenomenon, see Forgas and Laham (2017).

12 As previously noted, hindsight bias is the tendency to overestimate the prior predictability of an event when one knows that the event ultimately occurred. A juror concludes that a firm acted negligently in failing to anticipate an accident
on the basis of evidence available to it beforehand, although he wouldn’t take that evidence to be probative if he didn’t know that the accident occurred. The juror thus exhibits hindsight bias, because his judgment isn’t invariant across two relevantly similar cases: a case in which the firm had such-and-such evidence available to it beforehand and he knows the eventual outcome, and a case in which the firm has the same body of evidence but he’s ignorant of whether the event occurred. For a discussion of some of the philosophically interesting issues surrounding hindsight bias, see Hedden (2019). For an overview of the psychological phenomenon, see Roese and Vohs (2012).

13 A person manifests the endowment effect if they overvalue an item because they own it. The agent estimates the value of an item that they own as n, although they would estimate its value if they didn’t own it (or the value of a very similar item that’s equally valuable) as significantly less than n. Inasmuch as their valuations fail to reflect the symmetry that exists between equally valuable items that they own and don’t own, they are biased. On the endowment effect, see especially Thaler (1980), Kahneman, Knetsch, and Thaler (1991), and Ericson and Fuster (2014).

14 A supervisor is charged with evaluating workers on a scale of 1–5, with 1 representing “poor” and 5 representing “outstanding.” Although the group to be evaluated includes a significant number of both poor and outstanding workers, the supervisor awards relatively few 1s or 5s, and most of her evaluations cluster closely around the midpoint value of 3. Note that here the bias is in favor of certain numbers or values rather than the individuals that the numbers are used to evaluate. The supervisor who manifests end-aversion bias fails to treat like cases alike because she picks the extreme values less frequently than the middling values, even though there is no basis for this asymmetry within the population being evaluated itself. See https://dictionary.apa.org/end-aversion-bias.

15 Faced with multiple, equally plausible interpretations of another person’s behavior, an agent with hostile attribution bias is disposed to adopt an interpretation that imputes hostility to the other person. They thus fail to respect the symmetry that exists between the interpretation that imputes hostile intent and those that do not. See https://dictionary.apa.org/hostile-attribution-bias and Nasby, Hayden, and DePaulo (1980).

16 On Lucretius’ symmetry argument, see especially Rosenbaum (1989), who notes that one finds versions of the argument in Cicero, Montaigne, Hume, and Schopenhauer, among others. Lucretius’ presentation is in his De Rerum Natura.

17 The contemporary philosophical debate about “time biases” is largely due to Parfit (1984):Ch. 8. Sullivan (2018) is a recent book-length treatment.

18 As this suggests, different notions of peerhood have been used in the literature. Contrast, e.g., Kelly (2005a:175) with Elga (2007:487).
19 Consider, e.g., the way in which Richard Feldman explicitly appeals to symmetry in arguing for that conclusion in the following passage:

Consider those cases in which the reasonable thing to think is that another person, every bit as sensible, serious, and careful as oneself, has reviewed the same information as oneself and has come to a contrary conclusion to one’s own…An honest description of the situation acknowledges its symmetry…In those cases, I think, the skeptical conclusion is the reasonable one: it is not the case that both points of view are reasonable, and it is not the case that one’s own point of view is somehow privileged. Rather, suspension of judgment is called for (2006:235).

20 Plausible examples include publication bias (Devito and Goldacre 2019), volunteer bias (Boughner 2012), and the Matthew effect (Merton 1968).

21 Thanks to Andy Egan for raising this issue and to Michael Smith for helpful discussion.

22 In that counterfactual scenario, it would also be correct to describe the parent as biased in favor of a certain type of performance, although specification of the relevant type wouldn’t in any way make mention of his child.

23 How much do we resemble the fourth ass? In fact, when confronted with a choice between qualitatively identical objects, human beings in controlled experiments generally exhibit a strong bias towards choosing the rightmost object in the array. However, when pressed about why they chose as they did, they tend to produce spurious reasons as to why the rightmost object is in fact objectively superior. See Nisbett and Wilson (1977:243–4).

24 Icard (2021) offers an argument against randomization as a rational requirement in such situations. Interestingly, if it’s true that there is no genuine norm with which Ass #2 (who consistently randomizes) complies but which Ass #4 violates, then this might amount to a significant difference between rationally arbitrary choice and morally arbitrary choice. For in the moral domain, it’s arguable—and in fact, has been argued by many—that one is required to randomize, in a case in which (e.g.) one can save either of two people (but not both) who are in desperate need of rescue, and all else is equal. For
endorsements of the view that agents are morally required to randomize in such situations, and that an agent who fails to do this violates a moral obligation, see especially Broome (1984, 1990) and Kornhauser and Sager (1988); as well as Childress (1970), Daniels (2012), Diamond (1967), Elster (1989), Goodwin (2005), Kamm (1993), Saunders (2008), Sher (1980), Stone (2011), and Wasserman (1996). For a dissenting view, see especially Henning (2015). In contrast to the moral case, no one to my knowledge has argued in print that the agent in a Buridan’s ass type situation is rationally required (or for that matter, required in any sense) to choose randomly between the two bales of hay (i.e. that Ass #2 is superior to Ass #4 in this respect). Some theorists who endorse a requirement to randomize in the moral case suggest that it follows from a more general requirement of distributive fairness, one that attaches in this case to the people in dire need of rescue. If correct, that would also explain why there is no parallel requirement to randomize in the prudential case—it isn’t as though the bale of hay on the right and the bale of hay on the left have equal claims to be eaten (or not to be eaten?) by the ass that has them in its sights, and that it treats one of them unfairly by choosing non-randomly in favor of the other.

25 For purposes of the example, I assume that the applied ethicist is wrong to believe this. Given that any realistic example that might be used instead would be potentially controversial in the same way, readers who think that the applied ethicist is right are invited to alter the example accordingly.

26 Tyler Burge once remarked to me that “I would describe myself as biased against Fox News, but I think that that’s a reasonable bias.” For what it’s worth, an exact-phrase Google search turns up approximately 403,000 hits for “reasonable bias” and an additional 18,500 results for “rational bias” (7/17/2022).

27 The volumes edited by Kahneman, Slovic, and Tversky (1982) and Gilovich, Griffin, and Kahneman (2002) are classic collections within this tradition.

28 See, e.g., Gigerenzer, Todd, and the ABC Research Group (1999), Gigerenzer (2002), and Gigerenzer (2008).

29 See, e.g., the co-authored “Homo Heuristicus: Why Biased Minds Make Better Inferences.” A Google search for the exact phrase “good bias” turns up 237,000 results (7/17/2022). The account on offer here also helps us to make sense of Gigerenzer’s otherwise paradoxical remark that “biases are not biases” (1991:86). On the current account, this is a perfectly coherent remark that should be understood along the following lines: a tendency that counts as a bias (or a “bias”) because it’s a tendency to systematically deviate from some contextually salient standard need not be a bias in the sense of a tendency to systematically depart from some genuine norm. On “good biases” vs. “bad biases,” see also Antony (1993, 2021).
PART III
BIAS AND KNOWLEDGE
8 Bias and Knowledge

1. Biased Knowing

In any ordinary context, the claim that a person is biased about some question, or that their view about that question is biased, would naturally be understood as a criticism. On the other hand, to credit a person with genuinely knowing that something is true, as opposed to merely believing that it is, seems to involve a positive or favorable assessment. What is the relationship between bias and knowing? A tempting and natural thought is that being biased—at least, when the bias in question is sufficiently strong—excludes knowing. A person who genuinely knows is free from biases that determine what they believe. Consider the following case:

BIASED JUDGE: In the context of a criminal trial, the judge is biased against the defendant. Indeed, his bias against the defendant is so strong that he would believe that the defendant is guilty regardless of what the evidence suggests. When weak evidence of the defendant’s guilt is presented in court, the judge concludes that the defendant is guilty, and his drawing that conclusion is a manifestation of his bias.
The fact that the judge arrived at his belief in this way seems to guarantee that it isn’t knowledge: even if the defendant actually is guilty and so the judge’s belief is true, this is not something that he knows. While I agree that this is the right thing to say about the case as described, I also believe that caution is in order when it comes to drawing general lessons. Let’s examine things more closely.

A salient feature of the case is that the judge’s belief that the defendant is guilty is, although true, insensitive to the truth: even if the defendant had been innocent, the judge would still have believed that he’s guilty. More generally, one common way that cognitive biases manifest themselves is by making our beliefs insensitive to the truth, in the technical sense of “insensitive” employed by epistemologists. In this sense, your belief that p is insensitive just in case: if p had been false, you would still have believed p. It’s tempting, then, to conclude that bias excludes knowledge in virtue of making our beliefs insensitive to the truth.1

Although this would be a straightforward and theoretically satisfying story about the relationship between bias and knowledge, it’s too simple. For as has been effectively argued, there are compelling reasons to think that sensitivity isn’t a necessary condition for knowing:
even if one of your beliefs is insensitive, and you would hold it even if it were false, it might still amount to genuine knowledge as things actually stand. For example, in ordinary circumstances, someone who has just dropped a trash bag down a trash chute, and thus truly believes that the bag is now at the bottom of the chute, will also know that it is, even if, had the bag become snagged halfway down by some fluke, they would in that event still have believed (in that case falsely) that the bag is now at the bottom of the chute (Sosa 1999a). Similarly, someone who knows that they left a pitcher of ice cubes in the backyard hours ago on a sweltering summer day, and on that basis truly believes that the ice cubes have melted by now, will generally know that the ice cubes have melted, even if they would still believe this in the closest possible world in which the ice cubes haven’t melted (Vogel 1987).2 In the standard counterexamples in the literature, the subject knows despite believing insensitively, but the insensitivity of their belief is not due to the operation of some bias.3 Does this matter? Perhaps believing insensitively is compatible with knowing so long as the belief’s insensitivity isn’t due to the subject’s being biased; but if the insensitivity is due to bias, then this guarantees that the subject’s belief isn’t knowledge. That is, perhaps the following principle is true:

(?) If one’s belief that p is insensitive because one is biased in favor of believing p, then one doesn’t know that p.
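For readers who want the counterfactual machinery spelled out, the sensitivity condition and the principle just stated can be regimented roughly as follows, writing □→ for the subjunctive conditional, Bp for “one believes that p,” and Kp for “one knows that p” (a schematic gloss, not notation that appears in the text):

\[
\textit{Insensitivity:}\quad \neg p \;\square\!\!\rightarrow\; Bp
\qquad\qquad
\textit{(?):}\quad \big[(\neg p \;\square\!\!\rightarrow\; Bp)\ \text{because of bias toward } p\big] \;\Rightarrow\; \neg Kp
\]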
However, there are counterexamples to this principle. Consider, for example, cases of asymmetric overdetermination in which an unbiased process delivers knowledge, and in doing so preempts a biased process that would have otherwise ensured that the same belief is held. This possibility is exemplified by the following case:

BIASED KNOWER: A parent watches her young child playing normally. The parent can plainly see—and thus, knows—that her child is alive and well, just as anyone else who is viewing the same scene can know the same proposition. However, the parent’s belief that the child is alive and well is insensitive: if the child wasn’t alive and well, the parent would still believe this, because the parent is so deeply invested in its being true that the child is well, and her desires would ensure that she believes accordingly. If credible evidence began to emerge that the child wasn’t alive and well, this would trigger psychological mechanisms that would lead the parent to dismiss that evidence or explain it away so as to allow for the retention of the desired belief. Indeed, these psychological mechanisms would be efficacious in ensuring that the relevant belief continues to be held even if the evidence against that belief became very strong.
As I’m imagining the case then, it involves an element of extreme bias: the parent’s belief is very psychologically robust, so that she would continue to hold it even if the evidence decisively turned against it. However, even given this stipulation, it seems that, intuitively, the parent can know that the child is alive and well on the basis of straightforward and unproblematic observation, as things actually stand.4 One moral of the case is this:

√BIASED BELIEVERS CAN KNOW: Even if a bias is sufficiently strong to make a given belief inevitable, it doesn’t follow that that belief isn’t knowledge. Biased believers can sometimes know, even when they believe in accordance with their biases, and even if those biases guarantee that they would believe as they do even if the truth were otherwise. In this respect, being biased is not inconsistent with knowing.
In the case of the biased parent, the parent knows on the basis of visual perception, a process which preempts the biasing mechanisms that would otherwise be operative. Does the point that biased believers can sometimes know depend on the epistemic primacy of perception? No, for sometimes biased believers have knowledge that isn’t perceptual knowledge. Consider the following example. A relatively common belief among homeowners in Princeton, New Jersey, is that their property taxes are too high. Let’s assume for the sake of argument that there is some relatively objective fact of the matter about what an appropriate level of taxation is—something that’s presumably determined by some complicated set of facts about property values and current economic conditions, considerations of justice and educational needs, and so on—and therefore, some relatively objective fact of the matter about how much a given homeowner should be taxed. We will also assume, on grounds of general plausibility, that inasmuch as there is some objective fact about how much a given homeowner should be taxed, there is some more or less vague range of monetary values here, as opposed to some unique and precise amount. Call this range of values the just range. Perhaps the typical Princeton homeowner, who believes that their property taxes are too high, is biased about this issue in the following sense: for some subrange of monetary values that falls within the just range, they are disposed to judge that amounts within that subrange are too high; for any amount within the subrange, if this was the amount that they were actually being taxed, then they would believe, falsely, that they were being taxed too much. It’s obvious that someone who is biased in this sense might have a true belief that they are being taxed too much, as happens when the amount that they are actually being taxed exceeds the just range. It’s less obvious, but also true, that a person who is biased in this sense might nevertheless know that they are being overtaxed. For there are surely values that exceed the just range by enough that even the biased homeowner would be in a position to know that they are being taxed too much, if that were the amount that they were actually being taxed. It’s not, after all, as though a homeowner who is disposed to believe that they are being overtaxed because of a self-interested bias would still fail to know this in the event that Princeton raised taxes to $1,000,000 per home, for example.5
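The shape of the homeowner example can be made explicit with a little notation (a schematic gloss with invented symbols, not the author’s own formalism). Writing the just range as an interval J = [ℓ, u] of tax amounts, the bias amounts to a disposition, for every amount t in some subrange (c, u] of J, to judge t “too high”:

\[
\text{for all } t \in (c, u] \subseteq J = [\ell, u]:\quad \text{the homeowner would believe, falsely, that } t \text{ is too high.}
\]

For an actual tax only slightly greater than u, the homeowner’s true belief that they are overtaxed sits close to error, since at a nearby amount within the just range the bias would still produce the (then false) belief; for amounts far in excess of u (the $1,000,000 case), the belief is true with ample margin, and plausibly amounts to knowledge.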
2. Can Biased Beliefs Be Knowledge?

In general, being biased about a topic is not inconsistent with knowing, even if one believes in accord with one’s bias, and even if one’s bias would prevent one from believing the truth if the truth were otherwise. In this respect, being a biased believer is compatible with knowing. Of course, as emphasized in Chapter 1, we predicate bias not only of believers but also of their beliefs. Thus, in the case of the biased judge, it’s not only the judge who has the property of being biased, but also his belief that the defendant is guilty. Even if being a biased believer is compatible with knowing, it doesn’t follow that biased beliefs can qualify as knowledge. In the case of the parent who is disposed to believe her child is alive and well no matter what the evidence suggests, it’s natural to describe her as biased about the relevant question. On the other hand, when we credit her with knowing that her child is alive and well
on the basis of unproblematic visual perception, it would not be natural to describe her perceptually based token belief to the effect that her child is alive and well as a biased belief, given that her bias plays no role in the psychological process that produces and sustains that belief. The relationship between a believer’s being biased and their beliefs being biased is thus not as straightforward or as transparent as one might have thought. In particular, it’s not enough, in order for a token belief about some issue to be biased, that it’s a belief held by a person who is biased about that issue. It’s not even enough that it’s a belief that aligns with the content of the person’s bias (in the terminology of Chapter 2: that it’s “bias congruent”) and that the person’s bias is sufficient to ensure that they would hold that belief even if it were false. For even in that case, the psychological process that delivers the belief (e.g. unproblematic visual perception in the case of the parent) might be an extremely reliable and unbiased way of arriving at that belief in the circumstances. And when a true belief is delivered by an extremely reliable and unbiased mechanism, it will be difficult to resist the verdict that that belief counts as knowledge (even if the highly reliable and unbiased mechanism is in effect preempting another psychological mechanism that is biased, and which would deliver the same belief if the first process did not).

In the case of BIASED JUDGE, the judge’s believing that the defendant is guilty in response to weak evidence is a manifestation of his bias. In the case of BIASED KNOWER, the biased parent’s belief that her child is alive and well is not a manifestation of her bias. In the case of the judge, his bias against the defendant is psychologically efficacious in producing the belief; it’s not merely a preempted, would-be cause. Of course, given the original stipulation in BIASED JUDGE that only weak evidence is presented in court, even unbiased observers are not in a position to know. Consider a variant case in which the evidence presented in court strongly incriminates the defendant. In that case, when the unbiased courtroom observers believe that the defendant is guilty on the basis of the compelling evidence that incriminates him, they know that he’s guilty, while the judge, whose true belief is a manifestation of his bias as opposed to a rational response to the incriminating evidence, doesn’t.

Different accounts of knowledge will explain the judge’s failure to know in different ways. For example, on some “justificationist” accounts of knowledge, a true belief counts as knowledge only if it is doxastically justified. On such accounts, the judge’s belief fails to qualify as knowledge because it fails to be justified in this sense.6 Alternatively, proponents of safety-theoretic accounts of knowledge might attempt to explain the judge’s failure to know in terms of his belief’s being insufficiently safe: even though the belief is true, given that it’s determined not by the evidence but by the judge’s bias, he too easily could have been wrong or had a false belief in a relevantly similar case in which the defendant is actually innocent.7 But the intuitive idea that the judge fails to know given that his belief is a manifestation of his bias seems robust across quite different theoretical frameworks for thinking about knowledge. Generalizing in the obvious way, this suggests the following principle:

(?) BIASED BELIEFS AREN’T KNOWLEDGE: If a belief is a manifestation of a bias, then it’s not knowledge even if it’s true.
Should we endorse this principle? In the next section, I want to consider at some length what I take to be the most philosophically interesting reason for doubting that anything like it could be true.
3. Are Biases Essential to Knowing?

“We know that human knowledge requires biases.”
—Louise Antony, “Quine as Feminist: The Radical Import of Naturalized Epistemology” (215)
Perhaps the strongest reason to doubt this or any similar principle is the following: it’s plausible that biases—or at least, things that resemble paradigmatic biases in central respects—are deeply implicated in paradigmatic modes of knowledge acquisition. Indeed, one might very well take this to be a central lesson of some of the most interesting and influential science and philosophy since World War II.8 Consider the following areas in which something that looks suspiciously like a bias in the pejorative sense has been plausibly claimed to play an essential role in cognition:

• Sense perception. According to constructivist models of perception, currently widely accepted among vision scientists, the capacity of the human perceptual system to represent the world as being one way rather than another depends on its making what amount to implicit or hidden assumptions about the environment in which it’s operating.9 In general, proximal stimulations of our sense organs underdetermine their possible distal causes. For example, a convex object viewed under normal lighting conditions produces retinal stimulations that are inherently ambiguous between the following two inconsistent possibilities: (1) the object is convex and is being illuminated by light from above, or (2) the object is concave and is being illuminated by light from below. In response to the inherently ambiguous retinal stimulation, our visual system represents the object as convex; in effect, it resolves the ambiguity by assuming that the object is being illuminated by light from above as opposed to light from below. Because the objects that we perceive typically are illuminated by light from above as opposed to light from below, this way of resolving the ambiguity is generally effective and produces visual representations that correctly depict their objects. Significantly, however, the “implicit assumption” to the effect that the light comes from overhead isn’t sensitive to the actual facts of the matter: when we’re placed in a context in which objects are illuminated from below, a perceptual illusion is generated, in a predictable way: our perceptual system misrepresents a concave object as convex. In effect, the human visual system is biased in favor of the assumption that it’s operating in an environment in which the light comes from overhead as opposed to from below, regardless of the actual direction of the light source. Moreover, analogous biases (or “biases”) are apparently pervasive in ordinary cases of sense perception.
• Induction (1): Bias in favor of “natural” hypotheses. Observing a large number of emeralds in diverse environments, and finding that each emerald is green, we infer that all emeralds are green or, more modestly, that the next emerald that we observe will also be green. But our observational evidence is not only consistent with (and entailed by) “the green hypothesis”; it’s also consistent with (and entailed by) the rival “grue hypothesis,” according to which emeralds that were first observed in the past are green but emeralds that are first observed in the future will be blue. However, in contrast to our willingness to accept the green hypothesis, we would never so much as consider accepting the grue hypothesis; in this respect, it seems that we’re biased in favor of the former and against the latter. More generally, we seem to be biased in favor of “natural” hypotheses and against “gruesome” ones (Goodman 1955).

• Induction (2): Prior probabilities. According to Bayesian accounts of confirmation, rational learning from experience is a matter of updating one’s prior probabilities in response to new evidence. The impact and significance of a given piece of evidence thus depends on one’s prior probability distribution. To the extent that one is rational, one’s current priors were generated by conditionalizing on evidence that one acquired in the past, evidence whose receipt prompted one to update one’s past priors, that is, the probability distribution that one had before acquiring that evidence. (“Yesterday’s posterior probabilities are today’s prior probabilities.”) But if rational learning from experience always requires a prior probability distribution, then it seems that not every prior probability distribution could have been arrived at by conditionalizing on past observations; at some point, there must have been a prior probability distribution that was not arrived at via previous observations. (“The Ur-distribution.”) The standard Bayesian framework thus suggests that, in advance of any observations of the world, we were not cognitive blank slates who were scrupulously neutral among all of the empirical possibilities, or ways that the world might be. Rather, certain perfectly consistent empirical hypotheses started off ahead of others, in advance of any empirical input from the world itself, and even though there are possible worlds in which the disfavored hypotheses are true and the favored hypotheses are false. It seems that on the Bayesian picture, inductive confirmation presupposes that we’re biased in favor of some genuine empirical possibilities and against others.10 As noted in previous chapters, discussions of induction, both inside and outside of philosophy, often refer to our “inductive biases”—without any suggestion that what is being discussed are negative or even suboptimal aspects of our cognitive lives.

• Knowledge of language. As Noam Chomsky famously emphasized, young children acquire the ability to discriminate between grammatical and ungrammatical sentences of their native language with remarkable speed, given that the utterances that they actually hear spoken (1) greatly underdetermine the choice between countless possible languages to which those utterances might belong, and (2) are in any case often ungrammatical, since even linguistically competent adults frequently speak
in ways that are technically ungrammatical in the context of ordinary communication. Given this “poverty and corruption of the stimulus,” the ease and rapidity with which young children acquire the ability to discriminate between grammatical and ungrammatical sentences seems inexplicable on classical behaviorist accounts of our knowledge of language, according to which the human mind in effect starts off “neutral” among possible languages, and then progressively narrows down the possibilities based on experience. Rather, the human mind must be biased in favor of some possible languages (languages characterized by such-and-such grammatical rules) and against others.11

• Scientific knowledge. According to the most influential model of science of the 20th century (Kuhn 1970), it’s characteristic of any mature, successful science that scientists working in the field are deeply committed to a particular paradigm. According to Kuhn, commitment to a paradigm manifests itself in a number of ways, including preconception and resistance. In Kuhn’s terminology, “preconception” in science refers to strongly held convictions on the part of individual scientists, prior to engaging in research itself, about the eventual outcomes of their research. “Resistance” refers to an unwillingness to interpret novel findings or unexpected results as evidence that disconfirms or falsifies central aspects of the paradigm. Significantly, Kuhn explicitly contrasts both the tendency to preconception and resistance with open-mindedness.12 But open-mindedness is an ostensible virtue which it’s natural to contrast with bias, given a context in which what’s at issue is a person’s stance towards their favorite theory. More generally, on Kuhn’s account, successful “normal science” requires a deep and often dogmatic commitment to the status quo. As he emphasized, his account of actual scientific practice stands in no small measure of tension with “the image of the scientist as the uncommitted searcher after truth…who rejects prejudice at the threshold of his laboratory, who collects and examines the bare and objective facts, and whose allegiance is to such facts and to them alone” (347). In effect, on Kuhn’s picture, successful normal science has a kind of built-in conservative bias in favor of the existing paradigm, and this conservative bias is in practice something like a precondition for further progress.13

If any of these views is correct, then it looks as though biases, or dispositions akin to biases, play a central role in what we would ordinarily take to be paradigmatic cases of knowledge acquisition. If all of these views are correct, then hardly any of our putative knowledge is untouched. On the other hand, there is the intuitively plausible thought that, when a belief is the manifestation of a bias, it doesn’t count as knowledge, a thought that seems to receive support from cases such as BIASED JUDGE. The skeptic, who denies that we have anything close to the amount of knowledge that we ordinarily take ourselves to have, will see this tension as grist for his mill. For the skeptic will suggest that we (continue to) endorse the intuitively plausible thought that true beliefs that are manifestations of bias are not known, and he will invite us to conclude that what look like paradigmatic instances of gaining knowledge of the world via perception or inductive inference are not actually
genuine instances. At best, what we have are cases in which the operation of a bias leads to the formation of a true belief because of a fortuitous convergence between the bias and the world. But, like the judge’s true belief that the defendant is guilty, these beliefs do not amount to knowledge, something that we’re now in a position to recognize upon reflection, given what we’ve learned about the way in which we actually arrive at such beliefs. Notice that this kind of skepticism, unlike the more traditional kinds of philosophical skepticism that I take up in the next chapter, does not appeal to abstract or hypothetical possibilities of error, but rather to facts that we’ve learned about how we arrive at our beliefs, such as empirically discovered details about how the human perceptual system works. (For example, the fact that the human perceptual system relies on “implicit assumptions” about the environment in which it’s operating, assumptions that, although often accurate, don’t track the actual state of the world, and generate perceptual illusions whenever they fail to hold.) In short, the skeptic will insist that we have no business reflectively endorsing our ordinary claims to knowledge, now that we’ve seen how the sausage actually gets made.14 What should we make of this?
4. Knowledge and Symmetry

Let’s begin by focusing on induction, which seems particularly central in the present context. Indeed, both the case of perceptual knowledge, given a broadly constructivist paradigm, and the case of language acquisition, given a broadly Chomskyan paradigm, are naturally interpreted as special cases of induction, inasmuch as both involve the need to solve a certain kind of underdetermination problem. In both cases, the acquisition of putative knowledge involves moving from a limited set of data, data which seem to underdetermine the choice between rival possibilities, to some particular possibility drawn from the set; it’s thus natural to describe us (or the relevant cognitive mechanism) as “biased” in favor of the possibility that gets selected over its rivals. Thus, in the case of perception, inherently ambiguous stimuli underdetermine their possible causes, but our visual system “solves” the underdetermination problem by in effect taking the data to confirm the hypothesis that what’s in view is a convex object illuminated by light from above as opposed to the rival hypothesis that what’s in view is a concave object illuminated by light from below. Our visual system thus seems to exhibit an “inductive bias” in favor of the former hypothesis and against the latter hypothesis.15 In the case of language learning, the prelinguistic child is in effect in the position of an inductive reasoner who must figure out, from the utterances that she hears spoken around her, which possible language is being spoken, even though the linguistic data are consistent with16 any number of possible languages. The fact that she ultimately manages to do this—as opposed to, say, remaining agnostic among the various possibilities—makes it natural to think of her (or our “language organ”) as having an innate, inductive bias in favor of certain possible languages. Thus, it makes sense to focus on the case of induction. In exploring that case, I’ll assume a broadly Bayesian framework, although the issues considered here, and the possible moves that might be made in response to them, could also
be translated into alternative frameworks for thinking about induction.17 As emphasized under Induction (2), the aspect of the Bayesian framework that makes it natural to speak of “inductive biases” is the role played by prior probabilities. In particular, most contemporary Bayesians will insist that a believer can rationally invest more credence in some basic alternative possibilities than in others, even in advance of any empirical learning about the world. They will thus reject the following claim about what’s true of ideally rational believers:

X EGALITARIANISM: An ideally rational believer, prior to any experience of the world, will assign equal probability to every basic alternative possibility.
Relatedly, they will reject the idea that the following principle is a genuine norm of rational believing:

X UNRESTRICTED INDIFFERENCE: Prior to any experience of or learning about the world, give equal credence to every basic alternative possibility.
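It may help to have the Bayesian machinery behind this talk of priors on the page (a textbook statement of conditionalization, not anything peculiar to the present account). On learning evidence E, the agent’s new credence in a hypothesis H is given by:

\[
P_{\mathrm{new}}(H) \;=\; P_{\mathrm{old}}(H \mid E) \;=\; \frac{P_{\mathrm{old}}(E \mid H)\,P_{\mathrm{old}}(H)}{P_{\mathrm{old}}(E)}
\]

The posterior is always a function of the prior. In particular, when two hypotheses each entail the evidence (as the green and grue hypotheses both entail the record of observed green emeralds), the ratio of their posteriors equals the ratio of their priors, so no amount of such evidence can overturn an initial inequality between them. EGALITARIANISM and UNRESTRICTED INDIFFERENCE would impose equality over the basic possibilities at the outset; the conventional Bayesian declines.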
Notice that, like many genuine norms, UNRESTRICTED INDIFFERENCE embodies a kind of symmetry standard. Like other symmetry standards, it seems to ask no more of us than that we treat like cases alike. Nevertheless, notwithstanding its intuitive plausibility and its resemblance to genuine norms, the conventional Bayesian will reject UNRESTRICTED INDIFFERENCE for principled reasons.18 By their lights it’s a spurious principle, and a believer who does not conform to it does not thereby fail to comply with any genuine norm. In what sense then, is a believer whose prior probabilities favor some basic empirical possibilities over others biased? Recall the discussion of bias attributions offered in Chapter 7, §5. As argued there, it’s an important aspect of our practice of attributing bias that we are often happy to attribute bias to an agent who systematically departs from a contextually salient symmetry standard, even though we ourselves reject the idea that the principle in question is a genuine norm. The same holds at the level of belief. My suggestion is that our willingness to describe exemplary human reasoning and cognition in terms of bias can be understood as a special case of this more general phenomenon. Recall the case of the APPLIED ETHICIST’S SISTER. The applied ethicist’s sister explicitly rejects an egalitarian moral principle according to which we’re obligated to be equally concerned about the suffering of innocent human beings and the suffering of nonhuman animals, and she acts accordingly. As emphasized, she might very well admit that she’s “biased in favor of human beings,” but this need not be a confession on her part: it’s not as though she agrees that she’s committed a moral transgression by favoring human beings in the way that she does. Indeed, she might very well think that if she failed to favor human beings, that failure would amount to a significant moral transgression. In these respects, the sister is like the conventional Bayesian who explicitly denies that there is any genuine egalitarian norm that requires us to give equal credence to every basic possibility. Indeed, like the sister, the conventional Bayesian will think that it would be a mistake to (try to) follow the more egalitarian norm. Nevertheless, the conventional Bayesian, like the sister,
might be perfectly happy to talk of the practice that he endorses in terms of “biases”—in his case, in terms of “inductive biases”—precisely because it systematically departs from the kind of hyper-egalitarianism that he explicitly repudiates.19

Consider our preference, qua inductive reasoners, for “the green hypothesis” over “the grue hypothesis”—and more generally, for “natural” hypotheses over “gruesome” ones. It’s apposite here that there are initially attractive models of induction on which the green hypothesis and the grue hypothesis are on a par. Consider, for example, the so-called “instantial model of confirmation,” according to which a generalization of the form “All Fs are Gs” is confirmed by its positive instances (i.e. observations to the effect that “This F is G”), and in proportion to the number of those positive instances.20 Given that, as far as our past observations go, “All emeralds are green” and “All emeralds are grue” have an equal number of positive instances, and neither has any falsifying instances, the most natural and straightforward version of the instantial model of confirmation would require us to treat these hypotheses evenhandedly. Notice that a principle to the effect that we should give equal credence to such hypotheses is naturally construed as a symmetry standard to the effect that we “treat like cases alike.” The naturalness of speaking in terms of our “bias” in favor of the green hypothesis and against the grue hypothesis reflects the fact that, in opting for the former over the latter, we flout a contextually salient symmetry standard. On the other hand, the propriety of our opting for the green hypothesis over the grue hypothesis reflects the fact that the principle that we flout by doing so is not in fact a genuine norm.21

Consider next Kuhn’s account of scientific knowledge. As noted above, Kuhn maintained that it is characteristic of any mature, successful science that scientists working in the field are deeply committed to a particular paradigm.22 This deep commitment manifests itself, inter alia, in an unwillingness or reluctance to interpret unexpected experimental results as evidence that falsifies or even strongly disconfirms core elements of the paradigm. In practice, unexpected experimental results that seem to contradict the paradigm are, if not simply waved away as noise, treated as spurs to future research within the paradigm itself. On the picture presented by Kuhn then, both science as an institution and individual scientists seem to have a built-in conservative bias in favor of the currently accepted paradigm.

Why is it so natural to talk of “bias” here? Notice that there are at least some superficially plausible principles that appeal to the notion of equal treatment that are violated by exemplary scientific practice as described by Kuhn. For example, on Kuhn’s picture, a scientist won’t treat an unfulfilled prediction as falsifying a central aspect of the currently accepted paradigm, even if that outcome would amount to a falsification if it were taken at face value, and even though the scientist might very well treat an unfulfilled prediction of a theory that she does not currently accept as a good reason for rejecting it. At one level of abstraction, a scientist who proceeds in this way might seem no better than a person who is dogmatically committed to an ill-considered political view, who, when presented with apparently decisive evidence against that view, stubbornly refuses to acknowledge it as such.
However, it’s far from clear that when working scientists favor the currently accepted paradigm in the characteristic ways described by Kuhn, they systematically depart from any
genuine norms of inquiry as opposed to contextually salient standards of equal treatment that do not correspond to genuine norms. After all, on Kuhn’s own account, the very fact that a given field has a genuine paradigm represents a substantial epistemic achievement on its part.23 To have a paradigm at all is to have a highly successful theory of the domain, one whose past successes in explaining and accounting for central phenomena inspire rational allegiance among scientists working in the field. A scientist working within the currently accepted paradigm will thus typically have a high degree of justification for accepting it, prior to any particular episodes of cutting-edge research. Thus, notwithstanding their superficial similarities, we should not assimilate the epistemic situation of such a scientist to that of the biased person who insists on explaining away apparently strong counterevidence to their favorite political views. There is no genuine norm according to which our responses to new evidence should be invariant with respect to the prior credibility of the theories that the evidence seems to tell against. Although there is a clear sense in which it’s natural to describe scientists on Kuhn’s account as biased in favor of the currently accepted theory, that sense is an epistemically innocuous one, insofar as it involves departing from salient standards of equal treatment that are not genuine norms.
5. How and When Bias Excludes Knowledge: A Proposal

Consider again the principle formulated earlier in this chapter:

BIASED BELIEFS AREN’T KNOWLEDGE: If a belief is a manifestation of bias, then it’s not knowledge even if it’s true.
What should we say about this principle? Here is a line of thought that I find attractive. To say that a token belief is a manifestation of bias is to say that it’s the manifestation of a certain kind of disposition. The disposition in question might be a disposition that leads one to systematically depart from a genuine norm, or from a contextually salient symmetry standard that is not a genuine norm. In paradigmatic cases of knowledge acquisition (e.g. exemplary instances of inductive reasoning), insofar as “bias” can be attributed with propriety, the propriety of the attribution is grounded in the fact that the believer has nonaccidentally departed from a salient symmetry standard that is not a genuine norm. However, the fact that such a symmetry standard is violated in the process of arriving at a belief has no tendency to show that the belief falls short of knowledge. (Compare: the mere fact that an agent who always chooses the leftmost option in situations involving rationally arbitrary choices can with propriety be described as “biased” in virtue of departing from a contextually salient standard of equal treatment—such as “Choose randomly among rationally arbitrary options”—does nothing to impugn their claim to being a fully rational agent. Nor does it impugn the rationality of any token choice that is a manifestation of that general disposition.) By contrast, when a belief is a manifestation of a bias, and the bias in question involves a tendency to systematically depart from a genuine norm, this counts against the claim that the belief is an instance of knowledge. For example, it’s plausible that truth is a genuine norm of
belief:

√THE TRUTH NORM: Don’t believe what isn’t true.
Imagine a judge who is disposed to depart from the truth norm systematically—say, in cases involving certain types of defendants, or in cases in which the prosecution’s case is presented by a particular prosecutor of whom the judge is especially fond. Inasmuch as the judge’s belief about whether the defendant is guilty is a manifestation of such a bias, it fails to count as knowledge even if it’s true. Similarly, it’s plausible to think that the following is a genuine norm of belief:

√THE EVIDENCE NORM: Believe in accordance with the evidence.
If that’s so, then a person might count as biased in the pejorative sense in virtue of being disposed to depart systematically from the evidence norm (e.g. in cases involving certain types of defendants or a favorite prosecutor). When a token belief is a manifestation of such a bias, it will generally fail to count as knowledge, even if it’s true, and even if it’s likely to be true given the available evidence. (Notice that a token belief might be the manifestation of a disposition that leads one to systematically depart from the evidence norm in a certain direction, even if the belief happens to accord with the available evidence as things turn out.) More generally, according to the suggested picture, a tendency to depart systematically from a genuine norm of belief (e.g. the truth norm, or the evidence norm) is a bias in the pejorative sense, and a token belief that’s the manifestation of such a bias fails to count as knowledge even if it satisfies the other conditions for knowledge (e.g. truth, being in accordance with the evidence).24 On the current line of thought then, BIASED BELIEFS AREN’T KNOWLEDGE is true when understood as equivalent to the following:

√BIASED BELIEFS AREN’T KNOWLEDGE (Unqualified): If a token belief is the manifestation of a tendency to systematically depart from a genuine norm of belief, then it’s not knowledge.
Although I think that the general picture is ultimately defensible, a complication is generated by the possibility that there are genuine norms of belief that are purely practical or non-epistemic. For example, some have argued there are norms of friendship that apply to beliefs, which can conflict with epistemic norms.25 Or perhaps there are purely moral norms that apply to beliefs, or norms of “clutter avoidance” (Harman 1986). If there are such norms,26 then the suggested picture requires qualification. For example, suppose that there is a norm of clutter avoidance, which we violate whenever we believe an uninteresting triviality, even if we can see that it’s obviously true. In that case, a person might count as biased by deviating from the clutter avoidance norm in a systematic as opposed to a random way; nevertheless, when the bias manifests itself, it doesn’t follow (as the unqualified principle above suggests) that they fail to know the triviality that they believe, given that they can see that it’s obviously true. In response to this kind of concern, a natural move is to endorse a more qualified
principle, along the following lines:

√BIASED BELIEFS AREN’T KNOWLEDGE (Qualified): If a token belief is the manifestation of a tendency to systematically depart from an epistemic norm, then it’s not knowledge
where, intuitively, the truth norm and the evidence norm count as “epistemic” norms, while norms of clutter avoidance and friendship (etc.) do not. One might worry about this more qualified principle for the following reason: unless we have some independent grip on what counts as an epistemic norm and what counts as a non-epistemic norm, the proposed necessary condition looks objectionably circular and unilluminating. Happily, however, we do have an independent grip on the relevant distinction. Contrast, for example, the truth norm and the alleged norm of clutter avoidance. Even before anything having to do with bias is in view, we can say the following: when one departs from the truth norm on a given occasion by believing something false, this means that the belief isn’t knowledge, regardless of whether that token departure is part of a larger, systematic pattern of deviance or simply a random error (i.e. regardless of whether the violation of the norm had anything to do with bias). On the other hand, when I violate the alleged norm of clutter avoidance on a given occasion by believing some trivial truth, this obviously doesn’t disqualify the resulting belief from counting as knowledge; and again, this holds regardless of whether my token departure is part of any larger pattern or simply a random error (and therefore, regardless of whether bias is involved). Thus, we do have a grip on the distinction between epistemic and non-epistemic norms independently of anything having to do with bias: from the fact that a token belief fails to satisfy an epistemic norm it follows that it isn’t known, but the same entailment doesn’t hold in the case of a non-epistemic norm that applies to belief. The qualified principle above, like the unqualified principle, is thus a substantive and contentful claim about the relationship between knowledge and the absence of bias. Again, the interest of the claim is as a proposed additional necessary condition on knowledge, one that a given believer might fail to satisfy even as they satisfy other closely related necessary conditions. (As when a believer fails to know because their belief is a manifestation of a general disposition that causes them to systematically depart from the evidence in a certain direction, even if in the case at hand their belief happens to be in accord with the available evidence.)

Suppose that it’s true that a biased belief (or a belief that’s the manifestation of bias) isn’t knowledge. As I’ve emphasized, that conclusion is consistent with the possibility that a believer who is biased (in the pejorative sense of “biased”) might have genuine knowledge. Indeed, as argued above, biased knowing in this sense is perfectly possible. Even if that’s right, why would it matter? The significance of biased knowing is the starting point for the next chapter, which further explores the relationships between bias and central topics in the theory of knowledge.
1 The idea that sensitivity is a necessary condition for knowledge is central to Nozick’s (1981) “tracking” account of knowledge.

2 In addition to the references cited in the main text, see especially Kripke (2011) and Williamson (2000) for penetrating critiques of the idea that sensitivity is a necessary condition for knowledge. An extended attempt to defend the idea against some of the most prominent objections is Becker (2007); for further discussion, see also the essays collected in Becker and Black (2012).

3 Is this true? It might be argued that there is a sense in which the subject in Sosa’s case is biased in favor of the hypothesis that the bag is now at the bottom of the chute (and against the rival hypothesis that it became snagged halfway down), and the subject in Vogel’s case is biased in favor of the hypothesis that the ice cubes have melted by now (and against the rival hypothesis that they haven’t). After all, they believe those hypotheses over their quasi-skeptical foils both in the closest possible world in which the hypothesis is true (i.e. the actual world) and in the closest possible world in which it’s the foil that’s true, notwithstanding the fact that both the hypotheses and the foils are perfectly consistent with their observational evidence in both cases. More generally, it might be claimed that whenever someone would believe p even if p were false, that person is biased in favor of (believing) p. On this view, cases like Sosa’s and Vogel’s are no less cases of bias than the case of the BIASED JUDGE. For the sake of argument, I’ll assume that there is some significant theoretical difference between cases like Sosa’s and Vogel’s and that of the biased judge with respect to bias, while remaining agnostic about how deep that difference cuts. If it turns out that there is no such difference, so much the worse for the suggestion explored (but ultimately rejected) here. For further discussion of related issues, see Chapter 9, §2.

4 Significantly, Nozick (1981) himself would admit that the parent can know in these circumstances; see especially 179–85, where the sensitivity condition is relativized to methods of belief formation in order to accommodate the intuitive verdicts about structurally similar cases.

5 As this discussion suggests, it’s arguable that the self-interested bias of the homeowner does prevent their true beliefs that they are overtaxed from qualifying as knowledge at the margins. Consider a case in which the homeowner truly believes that they are overtaxed, but where the amount in question lies only slightly outside of the just range; if they were being taxed any less than they actually are, it wouldn’t be true that they are overtaxed, although in that case they would still falsely believe that they were, because of their self-interested bias. It’s arguable that in these circumstances, their true belief that they are overtaxed is insufficiently safe to count as knowledge: there is a very close possible world in which they still believe that they are being taxed too much, but in which they are wrong. On the view that knowledge requires “a margin for error,” this guarantees that they fail to know in the relevantly similar case which differs just enough for their belief to be true. An account of knowledge with these features is resourcefully developed by Williamson (2000), although the phenomenon of bias isn’t discussed there.
6 Here the salient distinction is between so-called “propositional” and “doxastic” justification, and the crucial fact is that even if a believer is “propositionally” justified in believing a proposition that she believes, her belief might not be doxastically justified. For example, suppose that I have compelling evidence that today will be a bad day, and I believe that today will be a bad day, but I don’t hold this belief on the basis of my compelling evidence; rather, I hold the belief because of an irrationally pessimistic temperament that leads me to believe that every day will be a bad day, regardless of my evidence. In these circumstances, I’m propositionally justified in believing that today will be a bad day, but my belief itself is doxastically unjustified. Similarly, for the example at issue in the text: even if the judge is propositionally justified in believing that the defendant committed the crime (given that he’s in court and hears the incriminating evidence which leads the unbiased onlookers to conclude that the defendant is guilty), his belief isn’t doxastically justified given that it’s neither arrived at nor sustained by that evidence, but rather by his bias. The distinction between propositional and doxastic justification is originally due to Firth (1978).

7 For safety-theoretic accounts of knowledge, see Sainsbury (1997), Sosa (1999a, 1999b, 2000), Pritchard (2007, 2009), and Williamson (2000). For criticism, see Comesaña (2005) and Neta and Rohrbaugh (2004). As is now generally appreciated, a token belief might be safe even if it is not sensitive. For safety requires that one could not have easily been wrong, that is, according to a common gloss, not wrong in any close or “nearby” possible world, while sensitivity requires that one not be wrong in the closest possible world in which the relevant proposition is false, a possible world which might or might not be close to the actual world. (Even if you would still hold the belief in the closest possible world in which the proposition that you believe is false, that possible scenario might be dissimilar enough to the actual world that your belief still counts as safe.) For lucid discussions of the distinction, see Sosa (2000), Williamson (2000), and DeRose (2017).

8 The theme is illuminatingly explored by Antony (1993, 2016) who credits the insight to Quine, Goodman, Hempel, Putnam, and Boyd among others (Antony 1993:188–9), although as she notes, they do not generally put the point in terms of
“bias,” as she does. In addition to being a major theme among heavyweights of the analytic tradition, the theme plays a role in post-World War II continental philosophy as well. See especially Gadamer’s (1960) account of the way in which prejudices (or “pre-judgments”) are essential to understanding anything at all.

9 Palmer (1999) is an encyclopedic account of the phenomena in this area, among much else, by a leading vision scientist. As he puts it: “The hidden assumptions made by the visual system are many and varied” (58). For an empirically well-informed philosophical discussion of various types of biases characteristic of our visual system and their relationship to problematic prejudicial biases towards demographic groups, see Munton (2022).

10 The Bayesian thus accounts for the difference between the green hypothesis and the grue hypothesis in terms of the vastly different prior probabilities that we assign to them: the fact that the observation of green emeralds confirms the green hypothesis much more strongly than the grue hypothesis is grounded in the fact that the prior probability that we assign to the former is much greater than the prior probability that we assign to the latter. (We in effect assume a priori that we are much more likely to be in a world in which the green hypothesis is true than in a world in which the grue hypothesis is). But the Bayesian will also draw such (invidious?) distinctions among hypotheses even when none is exotic in the way that the grue hypothesis is. For good critical overviews of Bayesianism, see Easwaran (2011a), (2011b), and Titelbaum (2022). In characterizing the position of the “standard” Bayesian in the main text, I’ve written as though the standard Bayesian does not accept any generalized Principle of Indifference, according to which we’re rationally required to assign equal probability to every simple possibility prior to any empirical learning. Although most Bayesians don’t accept any such principle, some have attempted to rehabilitate some version of it. Notwithstanding the characterization of standard Bayesianism offered here, the argument of this chapter will not assume that all such attempts at rehabilitation are unsuccessful, or that no version of the Principle of Indifference is true. For further discussion, see below.

11 Chomsky coined the term “poverty of the stimulus” in his (1980); however, the idea was first presented in his (1959). For a useful and philosophically sophisticated overview of the issues, see Rey (1997:Ch. 4.2).

12 For these contrasts, see especially Kuhn (1963:347–8).

13 Although I won’t pursue the issue in what follows, I note in passing a significant point of contact between (i) Kuhn’s theme of the thoroughgoing non-neutrality of successful scientific practice, and (ii) the idea mooted above, that sense perception depends on “good biases” in order to deliver knowledge: Kuhn’s emphasis on the alleged “theory-ladenness of observation.” On a traditional picture of scientific objectivity, it’s the fact that scientists who favor rival theories can make the same observations which allows observation to ultimately serve as a kind of neutral arbiter in the context of theory choice: a rational, unbiased choice among rival theories is one where observation decides the issue, as opposed to one where the issue is decided by the scientists’ biases and their antecedent commitments.
However, Kuhn (1970), following Hanson (1958), insisted that the notion of observation that’s important for understanding successful scientific practice and its epistemology is not theoretically neutral in the relevant sense: for a scientist working within a paradigm, the contents of her observations presuppose the categories of that paradigm. In cases of fundamental theoretical dispute then, there will typically be no theoretically neutral characterization of the observational evidence available; rather, adherents of rival theories will irremediably differ as to the appropriate description of the data itself. (For some reflections on the general theme, see Kelly (2022).) Given the priority that it gives to the connections between bias and central topics in the theory of knowledge, one obvious lacuna in the present study is the topic of biased perception. For provocative and sophisticated recent work on this topic, see especially Siegel (2017).

14 The skepticism in question is thus a deliverance, not of traditional epistemology, but of epistemology naturalized, in the sense of Quine (1969). As he put it elsewhere: “Skeptical doubts are scientific doubts” (1975:68). For an overview of this and related aspects of Quine’s thought, see my (2014).

15 Indeed, the currently flourishing research program of Bayesian perceptual psychology in effect treats normal human perception as a special case of induction, and then explicitly models the latter in Bayesian terms. For an illuminating and philosophically sophisticated overview of Bayesian perceptual psychology, see Rescorla (2015). Hohwy (2013) is a book-length treatment.

16 Here I ignore, in the interests of simplicity, considerations having to do with “the corruption of the data,” and focus on its poverty.

17 Antony (1993, 2016) and Johnson (2020) also emphasize the connections between bias and underdetermination.

18 The most prominent of these reasons are the kind of partition-relativity problems (according to which indifference principles can yield inconsistent results when a single problem can be parameterized in multiple ways) which were first devised by the 19th-century French mathematician Joseph Bertrand (1889) and then forcefully pressed within philosophy by
Bas van Fraassen (1989).

19 As noted earlier, some contemporary philosophers have attempted to rehabilitate some version of the Principle of Indifference. See, e.g., DiBella (forthcoming), Eva (2019), Huemer (2009), Pettigrew (2016), Weisberg (2009), White (2009), and J. Williamson (2018). If some version of the Principle of Indifference is true, then there is a genuine norm here, and a believer who plays favorites among the basic alternatives is guilty of bias in the pejorative sense, as opposed to the nonpejorative sense. In any case, any theorist who believes that there is some such genuine norm here should regard systematic departures from it as biases in the pejorative sense: they stand to the conventional Bayesian as the applied ethicist stands to the applied ethicist’s sister.

20 See Lipton (2004:14–15) for discussion. As Lipton notes, “This model…within its restricted range, strikes many people initially as a truism” (14).

21 Although far more sophisticated than the instantial model of confirmation, Carnap’s original vision for inductive logic (1950) was also that of a purely formal system, one that would be able to capture and exhibit the goodness of any intuitively good inductive inference in terms of the formal properties of the propositions involved. (The program, although breathtakingly ambitious, was thus a natural attempt to extend to induction some of the greatest accomplishments of his teacher Frege in the domain of formal deductive logic. Frege had succeeded, largely through the introduction of the quantifier, in capturing the formal validity of certain deductive inferences that had long been recognized as intuitively valid, but which earlier formal systems had been unable to represent as formally valid. Carnap’s grand vision was thus to do for inference in general what Frege had done for the deductive case.) Goodman’s introduction of gruesome hypotheses in the context of “the new riddle of induction” is often taken to provide decisive reason for thinking that no purely formal inductive logic is possible, since formally speaking nothing seems to favor the green hypothesis over the grue hypothesis. (For this claim, see, e.g., Putnam’s 1983 “Foreword” to the fourth edition of Fact, Fiction and Forecast.) Even if one accepts this putative moral, the sense might linger that discriminating in favor of one of the two hypotheses in virtue of its content, when the two are on a complete par as far as their formal features go, makes it apt to talk of “bias” here.

22 Kuhn himself repeatedly characterizes the nature of this deep commitment as “dogmatic.” In addition to his epochal The Structure of Scientific Revolutions (1970), see especially his essay “The Function of Dogma in Scientific Research” (1963) which focuses in particular on the theme of “the dogmatism of mature science” (349) and “the importance of quasi-dogmatic commitment as a requisite for productive scientific research” (347). As Kuhn notes there, the theme was earlier emphasized by Polanyi (1958).

23 Notably, all of Kuhn’s examples of paradigms are drawn from highly successful natural sciences (e.g. physics and chemistry) and he regarded it as very much an open question whether any of the social sciences of his day could legitimately claim to have paradigms at all.

24 Notice that the suggested necessary condition is consistent with (and indeed, consonant with) many accounts of knowledge that don’t explicitly mention bias.
For example, those who accept safety-theoretic accounts of knowledge might accept it on the grounds that a belief that’s the manifestation of a bias is insufficiently safe to count as knowledge; evidentialists might hold that such a belief won’t be doxastically justified; virtue theorists might hold that such a belief won’t stand in the right relationship to an epistemic virtue, and so on. The point is to make explicit the relationship between bias and (an absence of) knowledge, in a way that such views don’t.

25 For interesting developments and defenses of this idea, see especially Hazlett (2013:Ch.3), Keller (2004, 2018), and Stroud (2006). For criticism, see Arpaly and Brinkerhoff (2018), Crawford (2019), Goldberg (2019), Hawley (2014), Kawall (2013), and Mason (2020).

26 This is, of course, controversial. Although I am ultimately among the skeptics (Kelly, in preparation), I don’t want to rely on those arguments here.
9 Knowledge, Skepticism, and Reliability

Recall the case of BIASED KNOWER:

BIASED KNOWER: A parent watches her young child playing normally. The parent can plainly see—and thus, knows—that her child is alive and well, just as anyone else who is viewing the same scene can know the same proposition. However, the parent’s belief that the child is alive and well is insensitive: if the child wasn’t alive and well, the parent would still believe this, because the parent is so deeply invested in its being true that the child is well, and her desires would ensure that she believes accordingly. If credible evidence began to emerge that the child wasn’t alive and well, this would trigger psychological mechanisms that would lead the parent to dismiss that evidence or explain it away so as to allow for the retention of the desired belief. Indeed, these psychological mechanisms would be efficacious in ensuring that the relevant belief continues to be held even if the evidence against it became very strong.
In BIASED KNOWER, the parent knows that her child is alive and well, even though she’s biased about that question. Moreover, she counts as biased, not in the epistemically innocuous way that even someone who engages in exemplary inductive reasoning counts as biased (in virtue of having the right “inductive biases” as it were). Rather, she counts as biased in the pejorative sense of “bias.” Even if biased knowing in this sense is possible, why does it matter? Is the possibility of biased knowing just a curiosity, or does it have any larger theoretical significance? In the next two sections, I explore some of the philosophically significant implications of the fact that biased believers can have genuine knowledge. In §1, I discuss the implications of this fact for certain proposed norms of inquiry. In §2, I discuss its implications for traditional forms of skepticism.
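Before proceeding, it may help to have the condition at issue stated explicitly. Here is one standard regimentation of sensitivity, in the spirit of Nozick (1981); the symbolization is my gloss rather than anything in the text:

\[
S\text{'s belief that } p \text{ is sensitive iff } \neg p \mathrel{\Box\!\!\to} \neg B_S\,p,
\]

where \(\Box\!\!\to\) is the counterfactual conditional ("if it were the case that…, it would be the case that…") and \(B_S\,p\) abbreviates "S believes that p." In BIASED KNOWER, this counterfactual fails: in the closest worlds in which the child isn't alive and well, the parent still believes that he is. The chapter's guiding thought is that this failure of sensitivity is compatible with the parent's knowing.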
1. Biased Knowing and Philosophical Methodology
Philosophers sometimes propose methodological norms that are clearly intended to safeguard inquiry from being corrupted by various kinds of biases. However, if biased knowing is possible, then this calls into question many proposed norms of this kind. Let's look at some examples. Consider the method of reflective equilibrium, which for the past fifty years has been perhaps the most popular account of how moral inquiry should proceed among moral philosophers in the broadly analytic tradition.1 As characterized by Rawls and those who
follow him,2 the correct starting point for moral inquiry consists of the totality of our considered judgments about morality, in a semi-technical sense of "considered judgment." In this semi-technical sense, a judgment counts as "considered" only if the person making the judgment doesn't stand to gain or lose depending upon how the question is answered (see, e.g., Rawls 1971:48, Scanlon 2002:143). This condition is clearly intended to safeguard moral inquiry from the potentially distorting effects of bias. On this view, you should set aside or bracket any moral judgment that aligns with your self-interest, or that concerns a question you're invested in having answered one way rather than another. However, I think that this is bad advice. First, notice that it conflicts with the following plausible methodological norm:
The Knowledge Platitude: If you know something that's relevant to a question that you are trying to answer, then you should take that information into account in arriving at a view (McGrath 2019:23).3
For in any case in which you know something that aligns with your self-interest, the knowledge platitude will instruct you to take that piece of information into account, even though the relevant judgment doesn't satisfy the conditions for being a considered judgment in the semi-technical sense. Moreover, even if the knowledge platitude doesn't hold in full generality, the idea that you should set aside any judgment that aligns with your self-interest is incompatible with even more modest claims in the same vicinity. For example, it's incompatible with the following claim: if you know something that's relevant to a question that you're trying to answer, then you're permitted to take that information into account. Precisely because the relationship between bias and knowledge isn't as straightforward as one might have initially thought, norms of the sort endorsed by Rawls and others threaten to exclude too much. Let's briefly see how this problem plays out in the context of a concrete example.4 Consider the following claim:
A person of color shouldn't receive lesser consideration in virtue of being a person of color.
Notice that, for a person of color, this judgment is heavily bound up with their own interests. For that reason, it seems like this judgment will fail to qualify as a considered judgment for a person of color, and therefore will be one that they are required to bracket or set aside when it comes to (e.g.) thinking about which moral theories they should accept. That seems like the wrong result, however. On the contrary, it might be perfectly reasonable for a person of color to take this judgment into account in attempting to figure out which moral theories they should accept (e.g. by taking it to count in favor of theories that entail that it’s true as opposed to theories that entail that it’s false). The reason for this is the following: notwithstanding the fact that it will typically be very much in the self-interest of a person of color that this proposition is true as opposed to false, they will typically have a high degree of justification for thinking that it’s true; indeed, the fact that the proposition aligns with their self-interest is perfectly compatible with their knowing that it’s true. And if they know that it’s true, then, I submit, not only is it rationally permissible for them to take it into account in their deliberations, but it would be a methodological mistake for them to set it aside or
bracket it. Of course, even though it’s in the self-interest of a person of color that people of color receive equal consideration, it doesn’t follow that a person of color is or will be biased about the question of whether they should receive such consideration, in any objectionable sense.5 More generally, the mere fact that a person has an interest in a question being answered in one way rather than another doesn’t entail that they will be biased about that question. Moreover, as argued above, even when a person is biased about a given question, and they believe in accordance with their bias, it doesn’t follow that their belief will fall short of knowledge. This holds even when the bias in question is a self-interested one. For example, it’s very much in my self-interest that the world won’t end in the next thirty minutes. For all I know about my own psychology, it’s entirely possible that I’m biased about this question, in exactly the same way in which the parent in BIASED KNOWER is biased: in the closest possible world in which my evidence suggests that my belief is false, psychological mechanisms would be triggered that would cause me to cling to the belief to the cataclysmic end. Still, given that I know that the world won’t end in thirty minutes, it makes sense for me to rely on that proposition, both in deciding what to believe and what to do. Even genuine bias is compatible with full-fledged knowing. A fortiori, merely potential bias is compatible with full-fledged knowing. Given that it’s permissible for us to take into account relevant knowledge even when we are biased about a question, it’s a fortiori permissible for us to take into account relevant knowledge that meets the weaker condition that Rawls takes to be disqualifying, that of aligning with our self-interest. The same type of objection applies to various other proposed methodological norms that are motivated by concerns about bias. Consider, for example, the idea that, in deciding which philosophical theories to accept in the light of our intuitions about what’s true, we should privilege certain intuitions over others in virtue of their level of generality, precisely because intuitions at some levels of generality are comparatively immune to the corrupting influences of various biases.6 For example, Michael Huemer (2008) argues that, in deciding which moral theories to accept in the light of our moral intuitions, we should privilege our “abstract theoretical intuitions” over our “concrete intuitions.” Abstract theoretical intuitions are “…intuitions about very general principles, such as the intuitions that the right action is always the action that has the best overall consequences, or that it’s wrong to treat individuals as mere means” (383). In contrast, “concrete intuitions” are intuitions about more specific situations or concrete examples, such as the intuition that, even if one could save the lives of five people dying from organ failure by forcibly overpowering an innocent bystander and harvesting his organs, it would be morally wrong to do this. According to Huemer, we should privilege our abstract theoretical intuitions over our concrete intuitions, because abstract theoretical intuitions are relatively immune to being distorted by emotional, cultural, and biological biases, while such biases often influence our concrete intuitions (383–4). 
Similarly, in the course of objecting to the idea that certain “particular moral judgments” (or judgments about concrete examples) should be treated as strong evidence against traditional forms of consequentialism, Peter Singer suggests that we tend to give too much
weight to such judgments in the context of our theorizing:
Why should we not make the opposite assumption, that all the particular moral judgments we intuitively make are likely to derive from discarded religious systems, from warped views of sex and bodily functions, or from customs necessary for the survival of the group in social and economic circumstances that now lie in the distant past? In which case, it would be best to forget all about our particular moral judgments, and start again from as near as we can get to self-evident moral axioms (1974:516).
On Singer's suggested methodological picture, we should privilege our judgments about "moral axioms" over our particular moral judgments. This is because our particular moral judgments are more likely to be artifacts of various cultural and psychological biases:
We have all been making moral judgments about particular cases for many years before we begin moral philosophy. Particular views have been inculcated into us by parents, teachers and society from childhood. Many of them we act upon every day. These judgments sink deep, and become habitual. By contrast, when we read Sidgwick for the first time we are suddenly called upon to decide whether certain fundamental moral principles, which we may never have explicitly thought about before, are self-evident. If it is then pointed out to us that this fundamental moral principle is incompatible with some of the particular moral judgments we are accustomed to making, and that therefore we must either reject the fundamental principle, or else abandon our particular judgments, surely the odds are stacked against the fundamental principle. Most of us are familiar with lingering guilt feelings that occur when we do something that we are quite certain is right, but which we once thought to be wrong. These feelings make us reluctant to abandon particular moral views we hold, but they in no way justify these views (516–17).
Consider a concrete moral judgment that’s discussed by both Singer and Huemer, the judgment that infanticide is morally wrong. As both Singer and Huemer point out, a strong prohibition on infanticide is far from a moral universal across human history. Perhaps many of us who endorse this judgment, for some salient precisification of “us” (e.g. “those of us who were raised in cultures strongly influenced by Judeo-Christian morality,” or “those of us who have children,”) are biased when it comes to questions about the morality of infanticide, in some interesting and pejorative sense of “biased.” Even if that’s so, it doesn’t follow that we should discount this judgment, or set it aside. For even if we’re biased about the issue, that’s compatible with our knowing that infanticide is morally wrong. And if we do know this, then, given the truth of the Knowledge Platitude, we should take it into account, in deciding which moral theories to accept. I emphasize this possibility because of my openness to the suggestion that, with respect to questions about the morality of infanticide, this is exactly where the truth lies. For example, as someone who believes that infanticide is morally wrong, I find myself much more open to the suggestion that I’m biased about the issue than that my belief is false, or even that it falls short of knowledge. Consider first the question of bias. If I were asked to provide examples of issues about which I’m definitely not biased, I would never include the morality of infanticide on the list; by my own lights, it’s certainly not among the paradigms or the very best candidates for being an issue about which I’m unbiased. In part, this assessment is due to my awareness of the kinds of considerations that impress Huemer and Singer. For example, I’m aware of the fact that my general moral sensibility developed in a culture that was in turn deeply influenced by a specific religion that was and is deeply opposed to infanticide, and that many of the other central tenets of that religion are things that I don’t accept. On the other hand, I find myself significantly less open to the suggestion that “for all I know,
intentionally killing infants is among the things that it’s morally permissible for me to do.” Of course, even if it turned out that this psychological difference is relatively widespread, it might reflect nothing more than a cultural taboo that attaches to infanticide. In any case, according to what I take to be the correct methodological view, any such psychological difference is not what ultimately matters: the knowledge platitude concerns what one knows, as opposed to what one takes oneself to know, or something similarly subjective. In the case of the morality of infanticide, what counts is whether one knows that infanticide is wrong, as opposed to whether one merely believes, mistakenly, that one knows that infanticide is wrong. Does this mean that in practice someone who takes herself to know (whether mistakenly or not) that infanticide is wrong will simply stamp her feet and insist that it is, and treat that as a reason to dismiss out of hand any moral theory that suggests otherwise? No, for nothing that has been said precludes such a person from offering reasons in support of the claim that infanticide is morally wrong. (Presumably, most people who think that infanticide is wrong think that there is something to be said in support of that claim, although it would badly overintellectualize knowledge to hold that the ability to produce compelling discursive reasons is a necessary condition for knowing.) Likewise, those who think that infanticide is morally permissible are free to offer reasons in support of that claim; such reasons are directly relevant to the first-order question of whether infanticide is morally wrong, and only slightly less directly relevant to epistemic questions about whether anyone knows that it is. Unsurprisingly, no survey of such reasons will be attempted here. The crucial point is that even if someone is biased about some moral issue, it doesn’t follow that they should set that view aside in making up their mind about other issues, for in cases of biased knowing there is no such requirement.
2. Are We Biased Against Skepticism?
Consider the way in which the dialectic between the radical skeptic and the non-skeptic has often been conceptualized in epistemology. Skeptical arguments to the effect that we lack knowledge of a kind that we ordinarily take ourselves to have (e.g. knowledge about the external world, or about the unobserved, or about other minds) are often reconstructed, and sometimes explicitly presented as, arguments from underdetermination.7 Right now, it seems to you that you're reading a philosophy book; you believe that you are, and (I assume) you take yourself to know this. In attempting to undermine your claim to know this mundane fact, the skeptic will offer some alternative hypothesis which, if true, would also account for why things appear or seem as they do (e.g. that you're actually in the midst of an unusually realistic dream that you're reading a philosophy book; or, more exotically, that you're being deceived by an evil demon into thinking this).8 Naturally, we believe the "commonsense hypothesis" and not any of the rival hypotheses that the skeptic calls to our attention. But the skeptic holds that our preference for the commonsense hypothesis is rationally arbitrary and unjustified. Although the traditional skeptic isn't ordinarily presented as levelling the charge
of bias against us, that's a natural gloss, particularly given the ideas about bias and bias attributions developed in this book. For the skeptic sees a deep symmetry or parity between the ordinary commonsense hypotheses that we naturally and instinctively believe and the skeptical hypotheses that he makes salient. As that theme suggests, the skeptic doesn't endorse as true or suggest that we believe the skeptical hypotheses that he puts forward. Given the relevant symmetries, that too would be rationally arbitrary and unjustified.9 A general preference for or tendency to believe skeptical hypotheses as the correct explanations for why things appear to us as they do would also amount to an irrational bias, albeit an uncommon one. (For example, a bias against commonsense, or a bias in favor of jumping to the conclusion that one probably inhabits a world governed by an evil demon.) Rather, as traditionally understood, the role of a well-crafted skeptical hypothesis is to serve as a foil: given the (alleged) rational symmetry that exists between it and the commonsense claim with which it competes, the uniquely rational response is to suspend judgment. To instead continue holding our ordinary beliefs amounts to an irrational bias in favor of commonsense and against the skeptic's possibilities—or so says the skeptic. Notice, moreover, that our bias in favor of commonsense—if in fact that's what it is—is by no means a mild one. It seems to me that I'm typing on a keyboard right now, I believe that I am, and I take myself to know this. In addition, I also believe that the reason why it seems to me as though I'm typing right now is that I really am. When I compare the commonsense hypothesis that I'm typing right now to the myriad skeptical hypotheses with which it competes (possibilities involving particularly realistic dreams in which I'm typing, undetectable hallucinations of typing, evil demons, mischievous neuroscientists, and countless other scenarios involving radical error), I recognize that any of these other hypotheses would, if true, also account for its seeming to me as though I'm typing. (They are thus, by design, full-fledged potential explanations of the fact that it seems as though I'm typing, no less than the hypothesis that I really am.) Nevertheless, it's not as though I divide my credence 60–40 between the hypothesis that I'm actually typing and its competitors, in the way that I might assign a 60 percent probability to a particular political candidate triumphing over the rest of the field in a contested election. Rather, I find myself virtually certain that I'm actually typing, and I give (at most) a vanishingly small amount of credence to the totality of the skeptical possibilities. I assume that this kind of extreme one-sidedness is typical of non-skeptics more generally. Our bias against the skeptic and his possibilities thus seems to be a severe one. Consider also its entrenchment. It's unsurprising that we give no credence to the skeptical hypotheses before we're exposed to them. However, even after they've been brought to our attention, we continue to invest little if any credence in them and remain virtually certain of many of our ordinary beliefs about the world around us, at least whenever we're not in the philosophy seminar room. In Chapter 1, we distinguished between the severity of a bias, its entrenchment, and the degree of contingency of its having arisen in the first place.
We emphasized there that each of these can vary independently of the others, in potentially interesting ways. However, if it’s fair to describe us as biased against the skeptic and his hypotheses, the bias in question would
seem to be severe, deeply entrenched, and not historically contingent.10 But is the charge of bias fair? From certain vantage points, the suggestion that our rejection of skeptical hypotheses is yet another case of human bias looks preposterous. For example, some philosophers think that our most fundamental commonsense convictions provide something like an epistemic gold standard, and that the epistemic standing of those convictions tends to be underestimated when radically revisionary philosophical views are on the table.11 From this perspective, the suggestion that our rejection of skeptical hypotheses that are inconsistent with these rock-solid claims is yet one more instance of bias is apt to seem perverse, and to elicit vigorous dissent. For example, when I'm in this frame of mind, I'm tempted to reply to the suggestion as follows:
Look, at least during my better moments, I admit that I'm probably biased about all sorts of things—about the quality of my work in philosophy, about how talented my children are, about various political issues—perhaps even about how biased I am compared to other people. But my conviction that I have hands isn't like that. And it's agreed on all sides that that conviction is inconsistent with the skeptical hypothesis that I'm a handless brain-in-a-vat being deceived by neuroscientists. So my grounds for rejecting the skeptical hypothesis are about as rock solid as it gets. And so the suggestion that my rejection of the skeptical hypothesis is really one more case of bias, as when a bias in my own favor causes me to be too quick in rejecting criticism of my published work in philosophy, is simply perverse.
Although I feel the pull of this kind of response, I don’t think that it’s decisive. Indeed, the naturalness of describing us as biased against the skeptical possibilities seems to follow from the psychological robustness of our beliefs that they don’t obtain, regardless of whether they do. In order to warm up to this way of thinking about things, consider the following thought experiment. Suppose that when we encounter the Alpha Centaurians, they turn out to closely resemble us in almost every respect, except for one: their beliefs about whether they are in a given skeptical scenario, unlike ours, are sensitive to the actual facts of the matter. When they’re in the good case,12 they, like us, believe correctly that the skeptical hypotheses are false. However, when they’re in the bad case, a kind of “spider sense” alerts them to the fact that something is amiss, and they respond by giving up belief in the false propositions that we would continue to believe, were we in the bad case. Thus, although they’re no more reliable than we are about how things are in the actual world, their true beliefs to the effect that the skeptical scenarios don’t obtain are sensitive in a way that ours are not. When the Alpha Centaurian scientists study human beings, they’re surprised to find that we exhibit all of the same biases that they exhibit, including confirmation bias, status quo bias, and so on. In addition, however, they notice that when human beings are removed from their natural habitats and placed in certain highly atypical environments, we continue to hold certain beliefs that, although generally true in our natural habitats, are false. These characteristic errors are utterly predictable, patterned, and systematic. The Alpha Centaurian scientists thus conclude that human beings have, in addition to all of the usual biases that they share, one additional, exotic bias. At least in my case, my natural inclination to cry foul against the claim that we’re biased against the skeptical hypotheses diminishes when I reflect on how I would describe the relationship between human beings and skeptical hypotheses if I were an Alpha Centaurian.
For the Alpha Centaurians, it would be perfectly natural—indeed, irresistible—to describe us as biased against the possibility that the skeptical scenarios are actual, in a way that they're not. (When the Alpha Centaurians list the issues about which human beings tend to be unbiased, whether the skeptical scenarios are true won't be among those issues.) But the Alpha Centaurians aren't confused about anything. Therefore, it seems that we should conclude, with them—and with the skeptic—that there is a perfectly good sense in which we're biased against skepticism. Moreover, the sense in which we're biased against skepticism seems to be a pejorative one. That is, the sense in which we're biased against skepticism is not relevantly analogous to the way in which an agent who consistently breaks ties in Buridan's ass type underdetermination cases by veering to the right as opposed to the left is biased in favor of the right. For it's not simply that the Alpha Centaurians are different from us in lacking a bias that we have; rather, they are in this respect our superiors. Suppose, then, that we concede to the skeptic, at least for the sake of argument, the following claim:
CONCESSION: We are biased against the skeptical hypotheses and in favor of common sense.
Can the skeptic leverage this concession in order to establish his ultimate conclusion(s)? As traditionally conceived, the skeptic targets our claims to knowledge, and/or our claims to have justified or reasonable beliefs about some subject matter. Granting CONCESSION to the skeptic might seem to hand him an easy victory. For on the face of it, CONCESSION would seem to be an extremely useful lemma on the way to establishing either of those conclusions, exactly the kind of thing that the skeptic was after all along. In fact, however, there is no clear path from the concession to either skeptical conclusion. Consider first the case of justified or reasonable belief. According to one common understanding of what's true in "the bad case," in a possible world in which the skeptical scenario is true, we systematically depart from the truth because we respond in the rational way to our evidence, which is itself systematically misleading. (The evil demon or neuroscientist in effect exploits our rationality in order to lead us into massive error by way of feeding us systematically misleading evidence about our actual situation.) On that way of understanding what's true in the bad case, we end up with commonsense beliefs that are (largely) justified, although false. But if our commonsense beliefs are justified in the bad case, then they are justified in the good case as well. Thus, for the skeptic who targets our claims to have justified beliefs, it's an obvious dead-end to grant that we have such beliefs in the bad case. In the light of this, the skeptic who appeals to bias in order to target justified belief should insist instead that we're justified in holding our commonsense beliefs in neither the good case nor the bad case. Rather, given that we're biased in favor of our commonsense beliefs and against their skeptical foils in both the good case and the bad case, we should be neutral or suspend judgment between them in both cases. In arguing in this way, the skeptic appeals to a principle of the following sort:
(?) IF YOU'RE BIASED, THEN DON'T BELIEVE: When one is biased in favor of p over q, then one shouldn't believe p instead of q.
Indeed, given that CONCESSION is taken to be common ground in this context, the skeptic might appeal to the following, weaker principle:
(?) IF YOU KNOW THAT YOU'RE BIASED, THEN DON'T BELIEVE: When one knows that one is biased in favor of p over q, then one shouldn't believe p instead of q.
However, even the weaker principle is false. Recall that the sense of bias at issue here is a matter of insensitive believing and disbelieving: our attachment to our commonsense beliefs is such that we would hold them, and disbelieve the skeptical possibilities, regardless of their truth. The principle to which the skeptic appeals is thus more or less equivalent to the following principle:
(?) DON'T BELIEVE INSENSITIVELY: If one knows that one's belief in p rather than q is insensitive, then one shouldn't believe p rather than q.
However, DON'T BELIEVE INSENSITIVELY isn't a genuine norm of belief revision. Consider again Vogel's ice cubes case, the original purpose of which was to show that sensitivity isn't a necessary condition for knowledge. In the original case, the protagonist who knows that she left a pitcher of ice cubes in her backyard on a sweltering summer day hours ago might very well know that the ice cubes have melted by now, even if she would still think this in the closest possible world in which for some reason the ice cubes haven't melted. Suppose that the protagonist is reflective enough to realize that her belief is insensitive, and that she would still hold it even if it were false. Even given this stipulation, it doesn't follow that the protagonist should suspend judgment about whether the ice cubes have melted by now. That is, it doesn't follow that the protagonist should give up her belief that the ice cubes have melted by now, or that her continuing to think this is unjustified, given that she knows that she left the ice cubes out in the heat hours ago, and she has absolutely no reason to think that anything or anyone has interfered with the process of their melting. Similarly, the parent in BIASED KNOWER might be informed, either by her therapist or by a hyper-reliable oracle, that her belief that her child is alive and well is insensitive, and that she would hold it even if it were false. Even so, it hardly follows that the parent is rationally required to suspend judgment about whether her child is alive and well, given that at the moment she can plainly see that he is.13 Here is the upshot. The sense in which it's true that we're biased against the skeptical hypotheses is that our disbelieving them (and our belief in their commonsense contraries) is insensitive to the truth. But insensitive believing, even insensitive believing that's recognized as such by the believer, is perfectly compatible with justified believing. Therefore, there is no path from the putative fact that we're biased against the skeptical possibilities to the claim that we're not justified in believing that they don't obtain, or in holding our ordinary commonsense beliefs. Parallel points hold in the case of knowledge. The case of BIASED KNOWER shows that genuine knowledge with respect to a question is possible even for a believer who is deeply biased about that question, even if her belief aligns with her bias. In short, recognizing the possibility of biased knowing allows us to concede that there is a sense in which we're biased
against skepticism, while clearheadedly retaining our claims to have genuine knowledge and justified beliefs in the face of the skeptic's challenges. Let's conclude this section by noting how that point bears on a question that inevitably arises in discussions of skepticism and that divides non-skeptics in interesting ways: what, if anything, should be conceded to the skeptic? At one end of the spectrum, some non-skeptics think that nothing at all should be conceded: on their view, the skeptic is, as Alex Byrne puts it, "just another guy with a bad argument" (2004:299). Others think that, while the skeptic is ultimately wrong on every substantive point, the most formidable skeptical arguments serve as potential sources of epistemological wisdom and insight: by critically reflecting on such arguments, we learn, among other things, what's not required for knowledge or justified belief, how to distinguish between intuitively plausible epistemic principles that turn out to be false and subtly different variants that are actually true, and so on. But many prominent non-skeptics have been concessive to the skeptic on points of substance as well, often to a surprising extent. For example, on Lewis' brand of contextualism ([1996]1999), virtually any skeptic will win any argument in which she participates: merely by articulating a radical skeptical possibility that we're not in a position to rule out in some non-question-begging way, she dramatically raises the standards for knowing, and thereby divests us of almost all of our everyday knowledge.14 One potentially unappealing feature of this view is that, although it credits us with abundant everyday knowledge in most contexts, that knowledge tends to vanish in response to even relatively crude and unsophisticated challenges. Similarly, Dretske ([1970]2000:39–40) and Nozick (1981) explicitly claim that we don't know that the skeptical possibilities don't obtain. Although the skeptic is wrong to claim that we don't know that we have hands, or that dinosaurs once walked the earth, the skeptic is right to claim that "for all we know, we're handless brains-in-vats being imperceptibly deceived by evil neuroscientists," and so on. An unattractive consequence of this view is that it forces those who accept it to countenance so-called "abominable conjunctions" such as the claim that "I do know that I have hands, but I don't know that I'm not a handless brain-in-a-vat."15 (A schematic statement of the principles at stake appears at the end of this section.) I believe that the view sketched here, according to which (i) the skeptic is right to claim that there is a sense in which we're biased against his possibilities, but (ii) wrong to claim that our ordinary commonsense beliefs fall short of knowledge or being justified, and also (iii) wrong to claim that we don't know that his possibilities don't obtain, is a relatively attractive option with respect to the question of what should and shouldn't be conceded to the skeptic. On the suggested view, we have abundant everyday knowledge, and our possession of that knowledge is relatively robust, inasmuch as it doesn't vanish in response to even crude skeptical challenges. Because it also credits us with the knowledge that the skeptic's possibilities don't obtain, we're not committed to countenancing "abominable conjunctions," or to denying compelling closure principles about knowledge. Nevertheless, the skeptic is right that there is a genuine ideal that we fail to satisfy with respect to his hypotheses, even if we're right in thinking that those hypotheses are false.
The ideal in question is one that we in fact manage to satisfy in some cases, as when an unbiased scientist who believes a favorite theory would nevertheless have disbelieved the same theory if it had turned out to be false. The skeptic's mistake is to think that fulfilling that ideal is a necessary condition for knowing or justified believing. That is, the skeptic's mistake is to overlook the possibility of biased knowing.
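As promised, here is a schematic statement of the principles at stake in the discussion of "abominable conjunctions"; the regimentation, with "Kp" for "one knows that p," is mine rather than the text's:

\[
\textit{Single-premise closure:}\quad \big(Kp \wedge K(p \rightarrow q)\big) \rightarrow Kq
\]

\[
\textit{Abominable conjunction:}\quad K(\text{I have hands}) \wedge \neg K\,\neg(\text{I am a handless brain-in-a-vat})
\]

Since having hands entails not being a handless brain-in-a-vat, anyone who asserts the abominable conjunction must reject closure (modulo refinements about competent deduction). The view sketched above keeps closure and rejects the conjunction's second conjunct: we do know that the skeptical possibilities don't obtain.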
3. Reliability and Contingency
According to one natural way of thinking about knowledge, knowing is a matter of reliable believing.16 If the relevant notion of reliability is understood in modal terms, the view that knowing is a matter of reliable believing is close to currently popular safety-theoretic accounts of knowledge. Even if it's denied that reliable believing is enough for knowing—perhaps more is required—it's natural to hold that it's necessary: if one is too unreliable in one's believings, then one doesn't know even if one happens to be correct on a particular occasion.17 But regardless of the exact nature of the connection between reliability and knowledge—and even regardless of whether there is any interesting connection at all—the relationship between bias and cognitive reliability seems well worth investigating in any case. In earlier chapters, it was emphasized that a person (or a source of information, or a process) might be unreliable even if they are unbiased, as in the case of the unbiased but incompetent referee. The fact that bias has a non-random character guarantees that there will be a substantial gap between bias and unreliability, for enough random error will make for unreliability but not for bias. In general, then, unreliability does not entail bias. What about the other way around? Does being biased, in the pejorative sense, make for unreliability? Here the answer is less straightforward, for there are conflicting considerations on the two sides of the issue. As an initial observation, we should note that many platitudes, or apparent platitudes, suggest a strong connection between bias and unreliability. Consider, for example, "Biased witnesses are unreliable witnesses" or, more generally, "Biased sources of information are unreliable sources of information." Moreover, according to the norm-theoretic account of bias, someone who is biased in the pejorative sense is disposed to systematically depart from a genuine norm, so it follows more or less immediately that they are unreliable, at least when it comes to following the norm in question. All of this seems to suggest that being biased makes for unreliability. However, in order to be defensible, any claims that assert a strong connection between bias and unreliability must be interpreted so as to be consistent with the following facts. First, as noted in Chapter 5, there are at least some contexts in which an agent counts as biased because of the systematic character of their departures from some genuine norm, even though they are highly (albeit imperfectly) reliable when it comes to following that norm. As the following variant case demonstrates, the same point holds when we are concerned with distinctively cognitive reliability:
COGNITIVELY RELIABLE ALTHOUGH BIASED. With respect to a given type of question, a person answers correctly 90 percent of the time, even though most people are less than 60 percent reliable. However, in the 10 percent of cases where she arrives at an incorrect answer, she consistently overshoots as opposed to undershoots. In contrast, other people, both the many who are less reliable than she is as well as the few who are as reliable, err randomly when they err.
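In rough symbols (the regimentation is mine, not the author's), the case stipulates a high truth-ratio together with a fixed direction of error:

\[
\mathrm{Rel}(S) = \frac{\#\{\text{questions answered correctly}\}}{\#\{\text{questions answered}\}} = 0.9, \qquad P(\text{overshoot} \mid S \text{ errs}) = 1,
\]

whereas her peers have lower truth-ratios and err on either side of the truth with roughly equal frequency. The high truth-ratio is what makes her reliable; the fixed direction of error is the non-random pattern characteristic of bias.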
In a context in which we ask, "Who if anyone among the reliable people is biased?" it seems natural to count her as both reliable and biased. Given the facts stipulated in COGNITIVELY RELIABLE ALTHOUGH BIASED, it's at least somewhat natural to hear the case as one in which the thinker manages to attain a high degree of reliability in spite of a bias which tends to make for unreliability by inclining her towards a certain kind of mistake. A more challenging and philosophically interesting type of case, I think, is one that's naturally described as follows: a thinker is reliable because she's biased in the pejorative sense. For example, consider a person who is biased against the members of a certain animal species, the Fs, in the following sense: in the absence of any empirical evidence, the person is disposed to believe that individual Fs are G, where G is some negative trait. Of course, whether that disposition leads the person to mostly true or false beliefs will depend on the environment that she's in. In particular, if the person happens to be in an environment in which most or all of the Fs that they encounter are G, then this disposition will tend to lead to true beliefs. Indeed, on the face of it, it looks as though the biased person might turn out to be more reliable than a perfectly rational and unbiased person (who conscientiously waits for evidence before concluding: this F is G), with respect to the beliefs at which they arrive. More generally, the content of a bias might dovetail with the world in the right way, so that the biased thinker ends up reliable. As emphasized in Chapter 2, a paradigmatic way in which someone might count as a biased believer is by relying on a biased belief forming process. Given that the process on which the believer relies is biased, one might think that this guarantees that the biased believer will end up unreliable, since a biased process will be unreliable, and she will inherit the unreliability of the process by relying on it. However, most cognitive processes are such that whether they're reliable or unreliable depends on the environment in which they're operating. Typically, the same process will be reliable in some possible environments but unreliable in others—a point familiar from discussions of reliabilism in epistemology,18 and from the "rationality wars" in psychology.19 Given the right environment, it seems that even paradigmatically biased processes will be reliable, as will the biased thinkers who rely on them. Does the apparent fact that biased thinkers can end up reliable when their biases dovetail with their environment in the right way mean that we should reject putative platitudes like "biased thinkers are unreliable"? That conclusion would be too quick. The putative platitudes are consistent with the observations just offered, at least when the putative platitudes are charitably interpreted. There are at least two different (albeit compatible) moves that a theorist might make in attempting to preserve the intuitive connection between bias and unreliability. First, a theorist might appeal to the idea that, on the best interpretation of reliability, being reliable isn't simply a matter of the actual relative frequency of true beliefs among total beliefs.20 According to this line of thought, on the relevant understanding of reliability, in order to count as reliable it's not sufficient to actually do well with respect to arriving at true beliefs as opposed to false beliefs. Rather, reliability also includes a certain modal element: in
particular, it matters how the biased thinker would fare in certain possible but non-actual circumstances—including, crucially, possible environments in which the content of her bias does not dovetail with the world in the right way. However, there are significant limitations to how much mileage one can get out of this kind of move. After all, it might be a robust fact about the world (albeit one completely outside of the biased believer's ken) that the Fs are Gs.21 If it's a robust fact about the world that Fs are Gs, there will be no nearby possible worlds in which it fails to be true that the Fs are Gs, and so the biased thinker will arrive at generally accurate beliefs in those worlds as well, and thus, seemingly count as reliable in the actual world. Of course, a theorist determined to preserve an exceptionless connection between biased belief formation and unreliable belief formation could insist on moving out still further in the space of possible worlds, until we hit worlds in which the generalization fails to hold and so the disposition leads to false rather than true beliefs, for the purposes of evaluating its reliability. But this seems ad hoc if done in an unprincipled way; and if it's done in a principled way, it threatens to set too high a standard for reliability, a standard which paradigmatically reliable belief-forming processes will fail to meet. It seems best, then, to admit that a thinker who is biased in the pejorative sense could be reliable, in a case in which the content of his bias dovetails with the character of his environment in the right way. Does this mean that alleged platitudes like "Biased thinkers are unreliable" turn out to be false, after all? No, for it's plausible that, when properly interpreted, a claim such as "biased thinkers are unreliable" is not a universal generalization. (That is, its truth conditions are not of the form: "For any x, if x is a biased thinker, then x is unreliable.") Rather, the better interpretation of such claims is that they are actually generic generalizations, which tolerate exceptions. In this respect, they are similar to generic generalizations such as "Tigers have stripes" or "Dogs have four legs," which are properly counted as true, notwithstanding the existence of tigers without stripes and dogs with fewer than four legs. So understood, the truth of "Biased thinkers are unreliable" is compatible with the existence of some thinkers who are both biased and reliable, and indeed, with the existence of some thinkers who are reliable because they are biased.22 Even if the platitudes that connect bias and unreliability can be preserved by understanding them in this way, it's worth further probing the underlying philosophical issues, as well as certain related tensions in our thinking about bias. This is the task of the final section of this chapter.
4. A Tale of Three Thinkers
Consider the generic generalization that pit bulls are dangerous, the truth of which is consistent with the existence of some non-dangerous pit bulls. I assume that this generalization is an empirical claim: if it's known to be true, it's known on the basis of experience with, and empirical evidence about, pit bulls. I also assume that it's contingently true or false: in particular, it isn't part of the essence of pit bulls that they're dangerous, or
that they're not. (Even if it's true at our world that pit bulls are dangerous, some individual pit bulls are not. And we can assume that there are some possible worlds in which, whatever factors account for the non-dangerousness of some pit bulls in our world, those factors are sufficiently widespread among the general pit bull population so as to render the generalization false there.) Consider first:
BIASED THINKER. A person acquires a confident and firm belief, in the absence of any empirical evidence, that pit bulls are dangerous, and he regularly employs that belief in reasoning about particular pit bulls (e.g. when it comes to interpreting their behavior).
I assume that this description is enough for the thinker to count as biased against pit bulls, in a perfectly familiar and straightforward sense of "biased," regardless of which possible world he's in. In particular, even if it turns out that the thinker is in a possible world in which it's true that pit bulls are dangerous, that doesn't mean that he isn't biased against them. Inasmuch as BIASED THINKER is an example involving a relatively pure case of bias, I'll sometimes refer to its protagonist as the paradigmatically biased thinker in what follows. Suppose that the protagonist in BIASED THINKER is in fact in a world in which pit bulls are dangerous, and there is abundant empirical evidence of this (although that empirical evidence plays no role in why he believes as he does). Suppose further that the same world also contains a thinker who is paradigmatically unbiased about the same topic:
UNBIASED THINKER: Although she begins from a scrupulously neutral and unbiased starting point, another thinker arrives at an equally confident and firm conviction that pit bulls are dangerous. She holds this belief on the basis of compelling empirical evidence that's more than enough to justify it. Indeed, her belief satisfies all of the conditions for knowledge that have ever been proposed by analytic epistemologists. She uses this belief to reason about particular pit bulls, in exactly the same way as her biased peer does.
The biased thinker and the unbiased thinker might be equally reliable when it comes to the opinions that they form about pit bulls. Indeed, in principle, they might draw all of the same inferences from that point on, and end up with all of the same pit bull beliefs. And their equally firm convictions might serve them equally well in navigating their pit bull infested environment. Given that the two are equally reliable, the biased thinker will satisfy any minimal reliability condition on knowledge that the unbiased thinker satisfies. Still, even though they both satisfy any necessary conditions for knowledge relating to their degree of reliability, and notwithstanding their other similarities, there remains a strong pull to the thought that the unbiased thinker will end up knowing more about pit bulls than the biased thinker, who often has mere true belief as opposed to knowledge. Here, the most obvious potentially relevant difference concerns the epistemic status of their respective tokenings of the belief that pit bulls are dangerous, which in one case, but not in the other, is supported by compelling evidence. But even if we focus not on their evidence but on their reliability, which by hypothesis is equal, there is a salient difference. In the case of the biased thinker, his reliability is an accident: if he were in a possible world or an environment in which the pit bulls were not dangerous, he would still hold the same belief and draw the same conclusions about particular pit bulls, and therefore be extremely unreliable. In contrast, in a possible world in which pit bulls aren’t dangerous (and the evidence suggests this), the unbiased
thinker would have formed the belief that they aren't dangerous, and so wouldn't be unreliable about particular pit bulls. Consider next what looks like an intermediate case:
INTERMEDIATE CASE: A third thinker is born into the same world with an innate belief that pit bulls are dangerous. Because the belief is innate, it isn't something that he's learned about the world, and it isn't held on the basis of empirical evidence. Nevertheless, the fact that he believes this generalization about pit bulls is both causally and counterfactually dependent on the fact that the world into which he's born is one in which the generalization is true, in the following way. Because the possible world is one in which the generalization has held in the past, there were strong evolutionary pressures that favored those who held the belief or who were highly disposed to form it, even before they acquired any significant evidence that pit bulls are dangerous. Over time, the species evolved so that its typical members believed that pit bulls are dangerous from the get-go; the fact that they ultimately encounter ample evidence to support this belief is at best a post hoc justification of a belief that's already hard-wired in.
Employing a Bayesian model, we can imagine the relevant history unfolding as follows. Far back in the evolutionary past of the species, its members exhibited a deep diversity or liberal spread in the initial prior probabilities that they assigned to the proposition that pit bulls are dangerous. Some started off extremely confident that pit bulls are dangerous, others extremely confident that they are not, while still others invested some level of more middling credence in the same proposition. However, given that the possible world in fact turned out to be densely populated with dangerous pit bulls, those who started off confident that pit bulls are dangerous fared better than those who started off confident that they aren’t, and also better than those who started off more agnostic about their dangerousness. (Notice that, from the epistemic perspective, the last group might seem to be the most reasonable, given that the dangerousness of pit bulls is an empirical question. However, we can imagine that in this world the typical pit bull is not only dangerous but extremely dangerous, so waiting for empirical evidence of their dangerousness before forming the belief often proved practically disastrous; practically speaking, one would be much better off if one had started off irrationally prejudiced against pit bulls.) With respect to his pit bull beliefs, the third thinker might be just as reliable as the first two, and those beliefs might serve him equally well in navigating his own pit bull infested environment. In some respects, he resembles the paradigmatically biased thinker as opposed to the paradigmatically unbiased thinker. Notably, like the biased thinker but unlike the unbiased thinker, his true belief in the empirical generalization is not something that he learned about his environment, or adopted in response to evidence, or currently holds on the basis of evidence. Inasmuch as those things are the case, it seems irresistible to describe him as biased against pit bulls. However, in other respects, he resembles the unbiased thinker as opposed to the biased thinker. Notably, like the unbiased thinker and unlike the biased thinker, the fact that he’s reliable about pit bulls is no accident, inasmuch as his confidence that pit bulls are dangerous is tied to the fact that his world is one in which that generalization holds as opposed to one in which it doesn’t. According to some theorists, what’s crucial for knowledge is not mere reliability but rather non-accidental reliability.23 For such theorists, this difference between the protagonist in BIASED THINKER and the protagonist in the intermediate case might make all the difference in terms of who should be credited with knowledge and who shouldn’t.
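A toy calculation, with numbers chosen purely for illustration (they are mine, not the author's), may make the role of the diverse priors vivid. Suppose an encounter yields ambiguous behavioral evidence E that is twice as likely if pit bulls are dangerous (D) as if they are not, say P(E | D) = 0.8 and P(E | ¬D) = 0.4. By Bayes' theorem,

\[
P(D \mid E) = \frac{P(E \mid D)\,P(D)}{P(E \mid D)\,P(D) + P(E \mid \neg D)\,P(\neg D)},
\]

so a thinker with prior 0.9 ends up at roughly 0.95, one with prior 0.5 at roughly 0.67, and one with prior 0.1 at roughly 0.18. After a single ambiguous encounter, only the high-prior thinker is confident enough to act immediately, which is just the practical advantage that, in the imagined history, selection rewarded.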
Questions about what we should say about the protagonist in the intermediate case gain a certain urgency from the fact that his situation seems to be our situation, in important respects. Even if natural selection didn't directly select for specific beliefs in our case, it's plausible that (i) many of the paradigmatic processes by which we arrive at our beliefs are ones whose reliability depends on certain contingent features of our environment being as they are, and that (ii) this match is no accident, but that (iii) the match is also not one that we independently recognized prior to the epistemically successful use of those processes. Recall an earlier example: in our environment, but not in other possible environments, objects are generally illuminated from above rather than from below. Happily, the default assumption of our visual system in representing objects is that they're illuminated from above. Presumably, this is no mere happy coincidence or accident. Rather, our visual system interprets inherently ambiguous stimuli as though they were produced by objects illuminated from above because we're in a world in which this is the usual case. Yet it's obviously not as though each of us first learns that we're in a world in which this is the usual case, a discovery that then informs our perceptual system (a progression that would be analogous to the procedure of the initially unbiased thinker in the case of the pit bulls). Consider also the norms by which we reason. If there is a norm of reasoning that corresponds to the logical principle of modus ponens,24 then the reliability of that norm does not depend on the contingent features of the world or environment in which it's used: so long as it's fed true inputs, it will deliver true outputs.25 More generally, for any norm of reasoning that corresponds to a principle of deductive logic, its reliability doesn't depend on the contingent features of the world or environment in which it's used. Things are otherwise when it comes to non-deductive reasoning. As Hume (1748) observed and emphasized, whether inductive reasoning as we actually do it is a reliable way of arriving at true rather than false beliefs seems to depend very much on the actual character of the world that we inhabit. In some possible worlds, our usual principles will be reliable; in others, they won't, even when fed true inputs. Our actual inductive practice is relatively reliable, and this reflects a match between features of that practice and the character of our environment. Presumably, this match isn't a mere fortuitous accident or coincidence: our actual inductive practice is what it is at least in part because of the character of our world. (In this respect, we're not analogous to the protagonist in BIASED THINKER who simply gets lucky.) Of course, as we learn more about the character of our world, we can refine and sharpen our inductive practice based on that evidence. To the extent that we do this, we resemble the protagonist in UNBIASED THINKER. But that's not the general case: it's not as though we as individuals first gathered evidence and learned about the character of our world, and then began to reason inductively, in ways that made sense given this learning. So we're not, in this respect, analogous to the protagonist in UNBIASED THINKER, either. Once again, our situation seems to most closely resemble that of the intermediate case in salient respects.
Ordinarily, we take our most fundamental ways of arriving at beliefs, such as perception and inductive reasoning, to be capable of delivering full-fledged knowledge, independently of our having empirical evidence that suggests that they are or will be reliable, or that they’re likely to be reliable given the character of the environment. Rather, it’s sufficient that (i) they’re in fact reliable in the environment in which they’re used, and (ii) this reliability is no
accident.26 When evaluated by the high standards set by the paradigmatically unbiased thinker, many of what seem like paradigmatic instances of knowledge acquisition fail to measure up. On pain of an implausible and sweeping skepticism, then, I think that we should count the thinker in the intermediate case as in a good enough position to know that particular pit bulls are dangerous when they act in ways that suggest this, even if the behavior in question is also consistent with their harmlessness and might reasonably inspire agnosticism in someone who begins from a perfectly neutral starting point. But isn't the thinker in the intermediate case biased against pit bulls in virtue of being biased in favor of the hypothesis that they're dangerous, an empirical generalization to which he assigns a relatively high credence in advance of any empirical evidence in its favor? Notice that even if we count the thinker as biased for this reason, it doesn't follow that his beliefs about particular pit bulls will fall short of knowledge, given the possibility of biased knowing. Still, one might press the point about the beliefs themselves. After all, in the clearest cases of biased knowing, the biased person's bias plays absolutely no psychological role in a normatively impeccable cognitive process that leads to the formation of a belief; therefore, notwithstanding the fact that the person is biased, their belief is both unbiased and known. Contrast the case in which the person's relatively high initial confidence in the generic generalization that pit bulls are dangerous plays a crucial role in the psychological process that leads him to conclude that a particular pit bull is dangerous on the basis of inherently ambiguous behavioral evidence that gets interpreted as evidence of dangerousness (although it would not be so interpreted otherwise). Doesn't that mean that the resulting belief is biased in the pejorative sense and therefore disqualified from counting as knowledge? Again, notwithstanding the plausibility of that line of thought, I think that the similarities between the process so described and paradigmatic episodes of knowledge acquisition argue for a non-skeptical verdict. Consider the way in which the conventional Bayesian, who rejects strong indifference principles for assigning prior probabilities (see Chapter 8, §4), will regard a case in which a thinker invests a relatively high but rationally permissible prior probability in a contingent empirical claim in advance of any empirical learning about the world. My suggestion is that we should think of the thinker in the intermediate case along these lines. The conventional Bayesian will insist that, so long as a high prior probability is rationally permissible, a thinker doesn't violate any genuine norms by assigning it (even if other possible assignments would also have been rationally permissible). I suggest that we adopt the same stance towards the protagonist in the intermediate case. Inasmuch as there is a good sense in which the relevant process counts as "biased," it's in virtue of departures from contextually salient egalitarian standards that don't correspond to genuine norms. Therefore, beliefs arrived at in this way are not necessarily biased beliefs, and therefore, not disqualified from counting as known.
Notice, however, that if we do credit the protagonist in the intermediate case with knowledge that’s underwritten by his innate predispositions in this way, we shouldn’t let that blind us to the ways in which he resembles the paradigmatically biased thinker. In particular, the more one emphasizes “internal” factors about how things look from the believing subject’s own perspective, the greater the resemblance between the two, for in any such
respect they seem on a par. It's not as though the believer in the intermediate case need have consciously accessible evidence for his relatively high degree of belief that pit bulls are dangerous, evidence that (e.g.) he could cite in defense of that conviction, if he were challenged by someone who accused him of being irrationally biased against pit bulls. Indeed, from the inside, or the subject's point of view, the two convictions might look and feel exactly the same. Imagine that the thinker in the intermediate case is a self-critical and reflective individual, who introspects and finds himself without respectable evidence for what's obviously an empirical generalization. In that case, it would be natural for him to view his conviction as an irrational prejudice. That is, the (arguably) relevant difference between the protagonist in the intermediate case and the protagonist in BIASED THINKER, which might be thought to favor the former over the latter, is, like reliability itself, an external matter: it need not be cognitively accessible to the thinker himself. The fact that we're often not in a good position to tell whether we ourselves are biased (and in fact are generally unreliable judges of this) is a prominent theme in much psychological research, as well as in the earlier chapters of this book. On the view taken here, even if things look the same from the inside, the fact that one person is biased but another isn't can make a crucial epistemic difference, for example with respect to what they know or are in a position to know. The idea that similarity from the internal point of view is compatible with significant epistemic differences because of differences in who is in fact biased will also figure prominently in the next chapter, in which I take up questions about the connections between bias, bias attributions, and the epistemology of disagreement.
1 I date its ascendancy from the publication of Rawls (1971), which was enormously influential in this as well as in other respects. 2 In addition to Rawls (1971), see especially the characterizations offered in Rawls (2001[1975]), Daniels (1996, 2018), and Scanlon (2002, 2014). 3 For considerations in favor of the knowledge platitude, see especially Williamson (2000) and also Kelly (2008). 4 This paragraph is borrowed and adapted from Kelly and McGrath (2010), with the permission of my co-author. 5 More generally, it doesn’t follow that they will be at any disadvantage when it comes to answering either descriptive or normative questions relating to their potential marginalization on the basis of race. Going beyond this, some standpoint theorists maintain that they will be cognitively advantaged or privileged with respect to such questions. For an overview of the relevant kind of standpoint epistemology, see Toole (2021). The idea that people who are socially marginalized (or at risk of social marginalization) in virtue of their identities might be epistemically privileged with respect to related questions goes back at least to Du Bois (1903). See also Hooks (1984), Collins (1990), and Kukla (2006); for critical discussion, see Dror (2022). 6 Notice that this idea would not be endorsed by the prototypical proponent of the method of reflective equilibrium, since such theorists typically hold that no considered judgment is privileged over any other simply in virtue of its level of generality. For discussion of this theme, see especially McGrath (2019:Ch. 2), from whom I borrow the two provocative quotations from Peter Singer discussed below. 7 For discussion, see Vogel (2004). 8 For the sake of concreteness, I’ll focus in what follows on the case of skepticism about our knowledge of the external world. But this is inessential: all of the same points could be made, mutatis mutandis, with respect to other forms of
skepticism, e.g. about induction, or about other minds, or any other kind of skepticism that's naturally presented in terms of concerns about underdetermination. 9 In this respect then, the usual terminology of "skeptical hypotheses" or "skeptical scenarios" is potentially misleading, insofar as it suggests that the skeptic is partial to those hypotheses as opposed to their commonsense competitors. 10 Consider historical contingency. A person might have a certain bias about politics or religion; in many actual cases, this might be due to their having grown up in a certain family or having been raised by certain parents. If the person had been adopted into a different family, then they might very well have lacked that bias. However, in whatever sense we can be said to be "biased in favor of common sense" in virtue of retaining our ordinary beliefs, it's not as though the person would have lacked this bias if they had been raised by a very different family from their actual one. 11 See, e.g., Armstrong (1999), Lemos (2004), Lewis ([1996]1999:418), and Lycan (2001). I count myself among their number; see my (2005b) and (2008). Of course, the patron saint of the perspective in the analytic tradition is G.E. Moore (1993). 12 Following Williamson's (2000) useful convention, I'll call a situation in which the skeptical hypothesis actually obtains "the bad case," and a situation in which it doesn't obtain (and things actually are as they appear to be) "the good case." 13 If it's conceded that the parent in BIASED KNOWER knows that her child is alive and well, then it follows that she's justified in believing that he is, given the truth of the more general principle that knowledge entails propositional justification. That principle is widely, although not universally, accepted. (Notable dissenters include Audi (2003:235–9) and Lewis (1999:421–2).) Although I don't rely on the principle here, I believe that it's true. As McGrath notes (2019:8), assertions of the form "It's true that you know that it rained this afternoon, but you're not justified in believing that it rained this afternoon" seem infelicitous, and the most straightforward explanation for their infelicity is the following: if you know that p is true, then you're justified in believing p. On the other hand, if unjustified beliefs can qualify as knowledge, then we would expect assertions of this kind to be perfectly acceptable. In any case, the idea that knowledge entails justified belief will be popular among skeptics, who frequently target knowledge by way of targeting justified belief. 14 More precisely: she raises the standards governing knowledge attributions to the point that almost all of our ordinary knowledge attributions go false in the conversational context. 15 See especially DeRose (1995) and Kripke (2011). I believe that the phrase "abominable conjunctions" is originally due to DeRose. Relatedly, the Dretske-Nozick position is incompatible with plausible closure principles about knowledge (including "single premise closure") that would allow us to come to know that "I'm not a handless brain in a vat" by validly inferring it from the known claim that "I have hands." Although this consequence is explicitly embraced by both Dretske and Nozick, many others have taken its cost to be prohibitive. On the case for closure, see especially Hawthorne (2014). See also Dretske's (2014) reply in the same volume.
16 For a particularly vigorous presentation of this picture, which explicitly contrasts it with its most prominent historical rival, see Dretske (1991). Armstrong (1973) suggests that we understand knowledge on the model of a reliable thermometer: the beliefs of the person who knows are reliable indications of the way things are, just as the readings of a reliable thermometer are trustworthy indications of the ambient temperature. The most well-known reliabilist view in philosophy, Goldman's (1979, 1986) "process" reliabilism, is offered as an account of epistemic justification as opposed to knowledge. 17 Compare McGrath (2019:136) on the plausibility of "the minimal reliability condition on knowledge." For a defense of the idea that reliability is at least a necessary condition on knowledge, see Williamson (2000:98–102). 18 For an overview of the issues, see Goldman and Beddor (2016). Henderson and Horgan (2001) is a useful discussion. Perhaps if one gets liberal enough with what counts as a process, one can describe some processes that will be reliable in any possible environment, and others that will be unreliable in any environment. But at best, these will be atypical cases. 19 See, e.g., Gigerenzer (1991) on the reliability of heuristics in adaptive circumstances. 20 Again, this is a point that is familiar from the reliabilism literature in epistemology; see, e.g., Goldman and Beddor (2016). 21 Begby (2021) provides an extended discussion of how widespread prejudiced beliefs or stereotypes might be non-accidentally true in certain social contexts, if only because they tend to be causally implicated in bringing about the various phenomena that they describe. (For example, a widespread prejudice that "women are bad at math" might create social circumstances in which relatively few women will seek to develop their mathematical talents.) On this point, see also Haslanger's (2011) discussion, although I wouldn't endorse some of the stronger conclusions that she goes on to draw from the phenomenon.
22 On generic generalizations, see especially Leslie (2008) and Leslie and Lerner (2016). For discussions connecting generics to issues about bias and prejudice, see especially Begby (2013, 2021), Haslanger (2011), Anderson, Haslanger, and Langton (2012), Wodak and Leslie (2017), Ritchie (2019), and Saul (2017). 23 For a recent, sophisticated development of such a view, see Setiya (2012). The claim that merely accidental reliability is incompatible with knowing is also endorsed by Plantinga (1993) and Yamada (2011). Majors and Sawyer (2005) and Bedke (2010) argue that non-accidental reliability is a necessary condition for justified belief. 24 For reasons familiar since at least Harman (1986), it isn’t plausible that a principle like “If you believe that p, and if you believe that if p then q, then you should believe that q” is a genuine norm of belief revision. In some cases, one shouldn’t believe q; rather, one should give up one of the other beliefs. Of course, this observation is compatible with the possibility that there is some genuine norm of belief revision corresponding to the logical principle of modus ponens. 25 As this formulation suggests, the sense in which the reliability of the modus ponens norm doesn’t depend on the character of the world in which the norm is used concerns its conditional reliability as opposed to its unconditional reliability. In a world in which most of our beliefs are false (say, because we’re in a skeptical scenario), reasoning in accordance with modus ponens will be highly unreliable, for using it will often take us from false beliefs to further false beliefs. Even in that world, however, reasoning in accordance with modus ponens will be conditionally reliable. The importance of the distinction between conditional and unconditional reliability has been appreciated from the earliest discussions of reliabilism; see Goldman (1979). 26 Some would hold that it’s also necessary in order to arrive at a sufficient condition that we have no positive reason to think that they’re unreliable, but I’ll ignore this possibility in what follows.
10 Bias Attributions and the Epistemology of Disagreement

1. On Attributing Bias to Those Who Disagree with Us

Consider the following case:

DISAGREEING WITH A SUPREME COURT JUSTICE: You believe that legal access to abortion is a constitutionally protected right under the United States Constitution, given a correct understanding of the Constitution and what it requires. However, you're no expert in constitutional law; in fact, you lack any formal legal training. Moreover, you know that the late Supreme Court Justice Antonin Scalia was very confident that there is no such constitutional right. On Facebook, a friend of a friend appeals to these facts in order to challenge the propriety of your believing as you do. ("Look, what business do you have thinking this, given that Scalia emphatically denied it?")
How might you respond to the challenge? Setting aside the question of what, if anything, might productively be said on Facebook to such an interlocutor, what might you tell yourself, about whether and why it makes sense to go on believing as you do, given that you don’t deny any of the facts to which the interlocutor appeals? One natural response is to appeal to the expertise of others who are like-minded. Even if your own legal credentials are no match for Scalia’s, Ruth Bader Ginsburg’s were, and she shared your opinion about the question, not his. Is this enough to justify your continuing to believe as you do? On the face of it, it seems as though it isn’t: the more natural view is that the fact that the two experts disagreed with each other about the issue warrants suspension of judgment on your part, as opposed to justifying your siding with one of them as opposed to the other. In response to this kind of concern, a next natural move is to appeal to the numbers. Does it matter that Scalia’s opinion was rejected by a majority of the Supreme Court justices who voted on the issue over the years? Presumably, “members of the Supreme Court” is not ultimately the right reference class to consider, given that at least some people who aren’t members of the Court are as well-qualified to pronounce on the issue as its members, even if what the non-members think is less politically important. However exactly the relevant reference class is specified, does the epistemic legitimacy of your continuing to hold your opinion depend on its popularity within that group? Although the possibility of appealing to experts in such cases raises a number of interesting and important questions for social epistemology, I won’t pursue them here.1
Rather, I want to focus on another very natural way of responding to the original challenge, one that’s perfectly compatible with the first type of response, although of independent philosophical interest. Imagine that the protagonist in DISAGREEING WITH A SUPREME COURT JUSTICE reasons as follows: Look, it’s true that, in many respects, Scalia was hyper-qualified to pass judgment on the issue. With respect to his knowledge of the US Constitution, relevant legal precedents, and so on, he was off the charts. But in thinking about how reliable he was likely to be about this particular issue, there is another crucial dimension where I don’t have any reason to think that he scored very highly: namely, the extent to which he was free from any kind of distorting bias. After all, Scalia was known to believe, not only that there is no constitutionally protected right to an abortion, but also that abortion is seriously morally wrong. On anyone’s view, the question of whether abortion is a constitutionally protected right and questions about its morality are at least different questions. Indeed, on Scalia’s own theory of constitutional interpretation (Scalia 1997), they’re utterly distinct questions. But we also know that, in general, people’s views about the constitutional question tend to be influenced by their views about the moral question. Of course, it’s perfectly coherent to think, as some do, that abortion is both morally wrong and constitutionally protected. Conversely, it’s perfectly coherent to think, as some do, that abortion is neither morally wrong nor constitutionally protected. But, consistent with these possibilities, we also know that people who think that abortion is seriously morally wrong tend to think that it isn’t constitutionally protected, and that people who think that it isn’t morally wrong tend to be more likely to think that it is. Given then that Scalia was a person who thought that abortion is seriously morally wrong, how confident am I that his view about the constitutional question wasn’t influenced by his moral view? I’m not confident of that, and I don’t have any reason to be.
Consider the question: How confident am I that Scalia’s view about the constitutional question wasn’t influenced by his moral view?
This question is close to, if not simply identical to, the following question: How confident am I that Scalia would still have thought what he did about the constitutional question if he did not believe that abortion is morally wrong?
So understood, the question concerns one’s confidence in a counterfactual speculation about the psychology of a stranger. The claim that one has little reason for confidence that the relevant possibility would have obtained in the relevant counterfactual scenario thus seems eminently plausible. Abstracting away from the details of the particular example, I take it that this general type of move is a familiar and common one. Often in cases of disagreement, we discount someone’s opinion, either wholly or in part, on the grounds that they’re biased; or on the grounds that there is a significant chance that they are; or because the possibility that they are is salient and we’re not in a position to rule it out. In this way, the psychological pressure to revise or abandon one’s view in response to the disagreement is mitigated. The phenomenon seems especially significant in cases in which those with whom we disagree are formidable people whom we recognize as such, the kind of people whose dissent would seem to provide a serious threat to the rationality of our continuing to believe as we do, at least in the absence of the attribution of bias or potential bias. What should we make of our tendency to attribute bias to those with whom we disagree in such cases? Before considering some of the potential pitfalls of such reasoning, let’s examine two aspects of it that plausibly contribute to its popularity, and that are noteworthy from an
epistemic point of view. First, the relevant kind of reasoning is potentially quite powerful, in the following sense. When it comes to assessing a person’s credibility with respect to a given issue, their scoring poorly along “the bias dimension” tends to trump or neutralize their scoring well with respect to other salient dimensions, dimensions that would otherwise be highly relevant. Here is a certain natural picture that one might have in mind: for any given issue, there will typically be a number of different dimensions that jointly determine how well positioned a person is to arrive at an accurate view about it. For example, it will generally be desirable for the person to score well with respect to their familiarity with relevant evidence, as well as for them to have a high degree of intelligence, and so on. Given that there are multiple dimensions, one might also be tempted to think that scoring exceptionally well with respect to some dimensions might help compensate for scoring not so well with respect to others. However, the dimension “free from bias” doesn’t seem to work that way: as soon as one thinks that another person scores poorly with respect to it, one won’t be impressed by how well they score along the other dimensions. In terms of the Scalia example: once one starts to believe Scalia might very well have been biased with respect to the constitutional question, it seems not to matter any longer how intelligent he was, or how thoroughly familiar with putatively relevant legal precedents, or with the best arguments on both sides of the issue, and so on. Although these things would speak in favor of the reliability of a person who is known to be unbiased, they won’t be treated as reasons to credit his opinion. If anything, once one comes to think that the person might be biased about the issue, the fact that they excel in these other ways might make them seem more epistemically dangerous than they would be otherwise. For these attributes will put them in a strong position to construct what look like compelling considerations in favor of their biased opinion, both for their own consumption and for the consumption of others.2 As emphasized in Chapter 8, §1, one way in which cognitive biases manifest themselves is by making our beliefs insensitive to the truth. Consider an idealized case in which the usual uncertainty that attaches to an attribution of bias is completely factored out: you know, with certainty, that an impeccably credentialed expert is biased about some issue, so that his belief about it is insensitive; he would believe as he does, regardless of whether the content that he believes is true. (Perhaps you’ve been informed of this by an infallible oracle.) What can you conclude? You can’t infer that the proposition that the biased expert believes is false. (Indeed, even if the oracle had told you not only that the expert himself is biased but also that the expert’s belief is biased, you couldn’t infer that it’s false, for a biased belief, or a belief that’s arrived at via a biased process, might nevertheless be true.) On the views defended in this book, you’re not even in a position to infer that the expert fails to know the relevant proposition, for the fact that the expert’s belief is insensitive to the truth doesn’t mean that it isn’t knowledge, and nothing that you’ve been told about the case rules out the possibility that it’s a case of biased knowing. 
Here’s something that you can infer from what you’re told by the oracle: the fact that the expert believes insensitively means that his believing as he does is evidentially worthless to you. In particular, his believing H isn’t a piece of evidence (even a tiny piece of evidence) that speaks in favor of your believing H as opposed to not-H.3
Once it’s allowed that insensitive beliefs can be knowledge, as on the orthodox view, this makes salient the following question: what, if anything, is bad about insensitive believing, from the epistemic point of view? On the current account, the epistemic costs of insensitive believing don’t come fully into view until the focus is broadened from the situation of the individual believer to the social case, and we look at the way that insensitive believing affects the individual’s capacity to serve as an epistemic resource to others. In the case described above, even if the biased expert genuinely knows (let’s stipulate that it’s a case of biased knowing), the fact that he believes as he does isn’t genuine evidence for the truth that he knows; it doesn’t give you a reason to increase your confidence in the relevant proposition. Thus, the first noteworthy aspect of our tendency to attribute bias to those with whom we disagree is the potential power of such attributions: to the extent that an attribution of bias is warranted, this seems to effectively negate the person’s scoring well, even exceptionally well, with respect to other salient dimensions that would otherwise be highly relevant, for example their intelligence, familiarity with relevant evidence, or their expertise or credentials to pronounce on the issue. Consider next a second reason why appealing to the possibility of bias on the part of those with whom one disagrees will often be an attractive tactic. Namely, so long as one’s concern is with regulating one’s own beliefs, it can sometimes be rational to discount for bias even in the absence of anything like a “smoking gun” or rationally compelling evidence. There is, I think, no general presumption of innocence here, to the effect that people and their opinions should be assumed to be unbiased in the absence of hard evidence to the contrary. Plausibly, the evidential standards are much higher if what’s at issue is making and sustaining a public charge of bias. If one publicly accuses another person of being biased in such-and-such a way, one is expected to be able to produce substantial evidence that they are, of a sort that would justify others in believing the accusation. (Indeed, it’s plausible that, given the mechanics of implicature, even the weaker public accusation that “So-and-So might be biased” requires non-negligible evidence that So-and-So is biased, if it’s to be in order.) Someone who publicly charges another person with bias bears the burden of proof, and even if the evidential standard for discharging that burden is weaker than “proof beyond a reasonable doubt,” it’s still substantial. Mere reasons for suspicion aren’t enough. Plausibly, things are otherwise when it comes to one’s own deliberations about how much (or how little) to credit other people’s opinions in making up one’s own mind. Here, mere reasons for suspicion are enough to begin discounting their opinions. (Of course, the stronger one’s reasons are, the more one should discount for the possibility of bias.) Indeed, it’s plausible that significant discounting might be in order so long as the possibility of bias is salient (about this kind of issue, many people are biased, etc.) and one has no way of ruling it out—as one often won’t. Thus far, the tactic of attributing (possible) bias to the other side might seem to be an epistemically potent weapon. 
On the one hand, when such attributions are in order, they tend to neutralize the evidential relevance of what would ordinarily be highly relevant epistemic virtues and advantages enjoyed by those on the other side. On the other hand, the tactic would also seem to have relatively wide applicability, inasmuch as one often will have at least some reason for thinking that bias is or might be present. What might be said against it? An obvious worry about the tactic is that we will be too quick to attribute bias to those
with whom we disagree. However, although this no doubt often happens, I think that the more pressing worry lies elsewhere. After all, particularly if we focus on controversial questions about morality, or politics, or history, or constitutional interpretation, or sports, or numerous other topics, it’s plausible that biased thinking is extremely common, perhaps even pervasive. Perhaps when it comes to such issues, arriving at one’s beliefs in a way that’s distorted by bias to at least some significant degree is the norm, and those who arrive at their views via the light of pure, uncorrupted reason and an objective assessment of the evidence are the exceptions. If so, then those who are liberal in attributing bias to people on the other side might very well be more reliable about whether such people are biased than those who do so only when “smoking gun” evidence emerges. The more pressing worry in the vicinity, I think, is an essentially comparative one. According to this worry, it’s not so much that we tend to be too sensitive to the possibility that those with whom we disagree are biased, but that we tend to be comparatively insensitive to the possibility that we ourselves are biased. Recall again the bias blind spot, the higher-order bias that we examined in detail in Chapter 4. As emphasized there, and regardless of the exact mechanisms that give rise to the phenomenon, psychologists have found abundant evidence of the fact that we tend to see bias in others in a way that we don’t see bias in ourselves, even when such bias exists. Given this, a natural thought is that, even if the readiness with which we attribute bias to those who disagree with us is defensible when it’s considered in isolation, or if it were paired with an equal readiness to attribute bias to ourselves, it’s not defensible when paired with our relative reluctance to attribute bias to ourselves. Indeed, one might think that facts about the bias blind spot provide the materials for vindicating certain skeptical or strongly conciliatory views about the proper responses to disagreement. This possibility is among the issues that I address in the next section.
2. The Case for Skepticism

In order to help fix ideas, let's introduce a toy case. Although highly abstract, the case is intended to have the same structure as many actual, real-world disagreements:

POLITICAL DISAGREEMENT: You and I frequently disagree about politics. Moreover, our disagreements have a systematic character. You would describe many of my judgments about politics as Too Far to the Right, while I would describe many of yours as Too Far to the Left. Because of this, when it comes to controversial political issues where you don't yet know of my view, you expect me to hold a view that's too far to the right (or at least: conditional on my holding a view that is by your lights mistaken, you expect it to be too far to the right, as opposed to mistaken in some other way). I expect the same of you, mutatis mutandis. Moreover, each of us generally sticks to our guns in the face of the other's conflicting views, moving little if at all in the other's direction.
According to the perspectival account of bias attributions developed in Chapter 3, given that each of us sticks to our guns, we’re rationally committed to thinking that the other party is not only frequently mistaken but also biased about political questions—although we’re not similarly committed to thinking that the other person (or their beliefs) are irrational or unreasonable.4
Suppose that each of us in fact judges that the other has the relevant bias. Notably, even with so few details about the case specified, some of the most prominent views in the epistemology of disagreement literature entail that it's rational for each of us to remain steadfast in response to our political disagreements. For example, according to Adam Elga's Equal Weight View (2007), the extent to which any particular disagreement gives you a reason to revise your view depends on your hypothetical prior assessment of who is more likely to be correct in the event that such a disagreement arises. In POLITICAL DISAGREEMENT, I believe that you suffer from a left-leaning bias and so are more likely to be the one who is mistaken in a case in which we disagree because your view is to the left of mine; when such a case arises, I discount your view and steadfastly retain my own. You reason in a parallel way about me. Things are perfectly symmetrical between us; the fact that each of us attributes bias to the other allows us to rationally discount the other's opinion and steadfastly retain our own. Similarly, to return to the case with which this chapter opened: on this version of the Equal Weight View, the layperson without legal education can rationally dismiss the conflicting opinion of the Supreme Court justice whom she views as biased, so long as she expects that the postulated bias would sufficiently impair the justice's reliability about the issue in question. As that suggests, on this version of the Equal Weight View, our tendency to see formidable people with whom we disagree as biased will typically be highly efficacious in neutralizing whatever rational pressure on our opinions their conflicting views might otherwise have provided. A natural worry is that this view makes it too easy for us to rationally dismiss the conflicting opinions of other people so long as we view them as biased. The worry that such a view lets us off the hook too easily is further encouraged by consideration of the bias blind spot and the mechanisms that give rise to it. Indeed, one might see certain empirical facts about disagreement and the bias blind spot as providing the materials for a kind of skeptical argument, one that targets the rationality of steadfastly holding controversial opinions quite generally.5 Let's see how such an argument might proceed in terms of the POLITICAL DISAGREEMENT case. In POLITICAL DISAGREEMENT, one of the few things that will be common ground between you and me is that someone is biased. To the extent that I view our disagreements through the lens of my first-order political judgments, I will see you as disposed to systematically depart from the truth and hence as biased. Conversely, to the extent that you view our disagreements through the lens of your first-order political judgments, you will see me as biased. However, especially once we become aware of and take seriously the bias blind spot, this might naturally give us pause. For then each of us will be in a position to reason as follows: If I were biased, I wouldn't be in a good position to recognize this from the inside. Indeed, in that case I would in all likelihood see myself as unbiased and you as biased, just as I do now. Given that things are in these respects exactly as they would be if I were biased, what reason do I have to suppose that I'm not?
Of course, in some cases with the relevant structure, one or the other of us might have some independent reason to think that the other person is more likely to be biased—that is, some reason for thinking that the other person is more likely to be biased that isn’t simply a matter
of the fact that they look biased when judged from a perspective that takes for granted the truth of one's political views on the issues about which we disagree. Suppose, for example, we fill in the original case as follows:

POLITICAL DISAGREEMENT, continued. Variant #1: According to a well-confirmed scientific theory, people in circumstances C1…Cn tend to diverge from the truth in such-and-such a direction. You recognize that I'm in circumstances C1…Cn, and that my judgments seem to you to diverge from the truth in that direction, just as the well-confirmed theory predicts. There is no similarly well-confirmed theory that applies to you and your beliefs. For this reason, you conclude that I'm more likely than you are to be biased about the questions over which we disagree, and you discount my views (but not your own) for that reason.
Here it seems you're in a position to discount my views in a way that's independent of your first-order views, and thus in a way that doesn't beg the question in favor of those views. However, one might doubt how often we're in possession of solid non-question-begging reasons of this kind.6 It might be thought that, in the absence of some such reason, neither of us has any business steadfastly maintaining our original convictions in the face of the disagreement, for it seems that in that case neither of us will have any way to eliminate the possibility that our systematic disagreements are due to our own bias, as opposed to bias on the part of the other person. Contributors to the epistemology of disagreement literature sometimes draw a rough but serviceable distinction between "steadfast" and "conciliatory" views. For example, David Christensen opens his (2009) critical survey with the following:

Subtleties aside, a look at the topography of the disagreement debate reveals a major divide separating positions that are generally hospitable to maintaining one's confidence in the face of disagreement, and positions that would mandate extensive revision to our opinions on many controversial matters. Let us call positions of the first sort "Steadfast" and positions of the second sort "Conciliatory" (1).
The line of thought currently under consideration clearly pushes in a conciliatory direction, albeit not necessarily for standard conciliationist reasons. As a rough generalization, those who defend conciliatory views about the proper response to disagreement tend to endorse theses like the following:

(i) We should be much less confident than we are about many controversial issues.

(ii) In many cases, we should give significantly more weight than we ordinarily do to the views of competent people who disagree with us.

(iii) (i) is true because (ii) is true.
Notice that the current line of thought seems to support (i) even if it does not support either (ii) or (iii). With respect to (ii): if, as I've suggested, it's rationally defensible, given how common bias is in many areas, to discount sharply for the salient possibility of bias, it's not clear that we should give significantly more weight to the views of even well-credentialed and seemingly well-qualified others who disagree with us. For notwithstanding their credentials and apparent qualifications, we'll often lack strong reasons to believe that
they’re reliable, all things considered. On the other hand, the same kinds of considerations, to the extent that they justify a policy of not giving much weight to the views of others in such cases, also seem to provide reasons to doubt one’s own reliability and freedom from bias, and thus give one reasons to be less confident of one’s own conclusions. In short, if we were more consistent in applying to our own case the kinds of considerations that we often use to try to minimize the epistemic significance of the disagreement of others, then this would lead us to be much less confident of our views about controversial issues—and hence, vindicate (i), even in the absence of support from (ii) and (iii). Having presented this skeptical line of thought, I want to conclude this chapter by making some suggestions about its limitations, and how it might be resisted.
3. Against Skepticism

First, it's worth querying just how much skeptical or conciliatory mileage can ultimately be derived from facts about the bias blind spot. Notably, there is nothing in the bias blind spot literature that suggests that people are or will be equally biased in their judgments about controversial issues. Nor is there anything that suggests they will be equally reliable in their judgments about who is and who isn't biased about some question. Indeed, on the views defended in this book, we should expect people to differ greatly in their reliability about who is and isn't biased so long as they differ significantly in the accuracy of their first-order views about the world, since the reliability of one's judgments about who is biased will generally depend a great deal on the accuracy of one's first-order views. At most, insofar as reflection on the bias blind spot phenomenon suggests a negative conclusion that's generally applicable to people regardless of the accuracy of their first-order views, it's this: if someone is biased about an issue or cluster of issues, it will still seem to them as though they are not. Notice that this conditional claim might be true of the unbiased person, who is relatively reliable about who is and isn't biased, no less than it's true of the biased person, who is unreliable about who is and isn't biased. (As applied to the unbiased person, the conditional claim is best understood as a true counterfactual: if the person were biased about this cluster of issues, it would still seem to them as though they were not…) Philosophers who argue for strongly conciliatory views characteristically emphasize salient symmetries between the disagreeing parties in cases where they think a conciliatory verdict is in order. In this context, a salient symmetry is provided by the parties' susceptibility to the bias blind spot: of each party it's true that, in the event they are biased, they won't be in a good position to recognize this, and indeed, they will be inclined to think that they are not biased. A proponent of the conciliatory line of thought sketched above will seek to argue from this putative symmetry to the rational impermissibility of steadfastly maintaining one's own view. However, given that this symmetry is perfectly compatible with the existence of other salient and seemingly significant asymmetries between the parties (e.g. asymmetries with respect to who is in fact biased and who isn't; or with respect to who will be more reliable in their judgments about who is biased and who isn't), it's worth pressing on why it
should be given so much emphasis or weight in this context, as opposed to these other factors. Let's see how this issue plays out in the toy case of POLITICAL DISAGREEMENT. Given the original bare bones description of the case, things are perfectly symmetrical between you and me. One possibility is that they remain perfectly symmetrical when further facts about the case are stipulated, as in the following:

POLITICAL DISAGREEMENT, continued. Variant #2: Symmetry. Given the political facts, your political judgments tend to lean too far to the left, while mine tend to lean too far to the right. Although we both hold many mistaken first-order views about politics, your view that I'm biased-to-the-right is correct, as is my view that you're biased-to-the-left. Insofar as either of us thinks of themselves as relatively unbiased about politics, that person is wrong to do so.
By my lights, the more interesting case for the epistemology of disagreement is when there is an asymmetry between us with respect to bias, as in the following case:

POLITICAL DISAGREEMENT, continued. Variant #3: Asymmetry. Given the political facts, your political judgments are relatively accurate compared to mine, which tend to be inaccurate in virtue of being too-far-to-the-right. Your assessment that I'm biased-to-the-right is thus correct, while my assessment that you're biased-to-the-left is incorrect. The fact that I mistakenly judge that your political judgments betray a left-leaning bias is an artifact of my own biased perspective. On the other hand, your correct assessment that my political judgments betray a right-leaning bias is not an artifact of bias on your part—although it appears that way from my mistaken and biased perspective. Rather, your correct assessment that my political judgments betray this bias is due to your evaluating them in the light of your relatively unbiased and accurate first-order judgments.
In this case, I think, it is permissible for you to stick to your guns and discount my conflicting opinions on the grounds of bias, at least all else being equal. Presumably, a hard-line conciliationist will deny this, on the grounds that it's not enough that you believe that I'm biased in a way that you're not (something that I also believe about you); nor is it enough for you to truly believe this. If true belief about these matters isn't enough according to the conciliationist, what more might be required? The natural answer is: in order for you to be justified in sticking to your guns, you must have some good reason for thinking that I'm the one who is biased—for example, a well-confirmed scientific theory about the circumstances in which people are likely to be biased that applies to me but not to you. Of course, in any realistic case of this kind, you will think that you do have good reasons to think that I'm biased when I make a political judgment that by your lights is too far to the right: namely, all of the other cases in which my judgment about an issue is (by your lights) too far to the right. Of course, I too will take myself to have good reason to think that you're biased in any particular case in which your judgment is by my lights too far to the left: namely, all of the other cases in which (by my lights) your judgment is off to the left. The difference is that when I cite other cases as examples of your left-leaning bias, I speak falsely, whereas when you cite other cases as examples of my right-leaning bias, you speak truly (at least, often enough). Does this difference matter? The hard-line conciliationist will deny that it does, on the grounds that, even though you speak truly and I speak falsely when we attribute bias to the other person but not to ourselves, neither of us is in a position to offer a non-question-begging argument for our respective
assessments. That is, it's true of you, no less than of me, that your assessments of bias in effect take for granted or presuppose the substantive correctness of your first-order views (and the incorrectness of mine) in at least some cases in which we disagree. Here, I think, is where the crux of the issue lies. And it's here, I think, that the sweeping skeptical argument that proceeds from empirical facts about disagreement and the bias blind spot can be resisted. The crucial fact is this: given that I'm biased in the way you take me to be, but you're not biased, the propriety of your sticking to your guns doesn't require you to be able to offer some non-question-begging evidence that this is so. More generally: even if the only reasons that the unbiased person can offer for her (correct) assessments of bias beg the question against the biased person, this doesn't mean that those assessments are unwarranted, or that she's unjustified in sticking to her guns with respect to the first-order issues in dispute. Indeed, I think that we should accept a stronger conclusion: even if the only reasons that the unbiased person can offer for her correct assessments of bias will inevitably seem to beg the question from a neutral perspective (i.e. a perspective that scrupulously abstains from taking any stand on the disputed first-order questions), it doesn't follow that those assessments are unwarranted, or that the unbiased person reacts improperly by sticking to her guns. Let's briefly compare and contrast the suggested picture with the two alternative pictures mentioned earlier in this section. According to Elga's version of the Equal Weight View, in cases of the relevant kind things will generally be symmetrical between the person who is in fact biased and the person who is in fact unbiased, so long as each of the two consistently attributes bias to the other; given this, it's rational for each to stick to their guns in the light of the disagreement. Similarly, on the hard-line conciliationist view sketched above, things are symmetrical between the biased person and the unbiased person, so long as neither is in a position to recognize who is biased and unbiased in a way that doesn't beg the question against the other. However, while Elga's version of the Equal Weight View would allow each of the two parties to stick to their guns in response to the disagreement (including the person who is in fact biased and therefore relatively unreliable about who is biased and who isn't), the hard-line conciliationist view would require both parties to conciliate, or give up their original first-order views in response to the disagreement (including the person who is in fact unbiased and therefore relatively reliable about who is biased and who isn't). In contrast to both of these approaches, the picture that I've sketched suggests that the unbiased person will generally, and all else equal, be in a stronger position than the biased person to rationally resist skeptical pressure from disagreement, even if both are equally susceptible to the bias blind spot in the sense explained above, and even if neither is in a position to offer non-question-begging reasons to think that they, but not the other, are the unbiased party. A proponent of the suggested picture might find encouragement in some apparent lessons of recent epistemology, in contexts in which neither disagreement nor bias is under discussion.
For example, in recent decades, a major theme in discussions of skepticism is that a believer might have knowledge about the external world even if they are not in a position to offer non-question begging arguments against a skeptic who denies that we have such knowledge.7 A related theme is that of non-trivial epistemic asymmetries between “the bad case,” in which some skeptical scenario actually obtains, and “the good case,” in which it doesn’t. For example, even if one wouldn’t know that one is merely dreaming if one were
merely dreaming (i.e. if the bad case obtained), and indeed, would in that event falsely believe that one is not merely dreaming, it doesn't follow that one fails to know that one is not merely dreaming given that one isn't, i.e. given that one is in fact in the good case (Williamson 2000). A proponent of the line of thought that I've sketched here can pick up on both of these themes and apply them to the case of bias. First, on the suggested view, one might be warranted in thinking that the other party is biased in a way that one isn't, even if the only considerations that one can offer for that judgment will inevitably seem to beg the question. Second, from the fact that, if one were biased, one would still believe that one isn't (the bad case), it doesn't follow that: if one is not biased (i.e. one is in the good case), one is not in a position to recognize this. Still, even if some of the lessons from recent work on external world skepticism provide encouragement for the approach to bias attributions and disagreement that I've sketched here, other aspects of the comparison aren't as favorable. Notably, for example, in the case of external world skepticism, it's often claimed that there is no positive reason or evidence to believe that we are actually in the bad case. In contrast, given the fact that on anyone's view many people are actually biased about many things, it seems that there are (or often will be) positive reasons or evidence for thinking that one actually is in the bad case.8 Moreover, a critic of the suggested picture might insist that there is a preferable alternative to simply begging the question in favor of one's own beliefs in cases of systematic disagreement. Namely, one can execute the following three-step procedure (a toy numerical sketch follows the list):

(i) Bracket: First, one should bracket one's first-order beliefs about the subject matter.

(ii) Assess and Discount for Bias: Second, one should assess which parties to the dispute (including oneself) are likely to be biased, and to what extent and in what ways, by relying on one's general background theories about the circumstances in which people tend to be biased (etc.), in a way that is not influenced by one's first-order beliefs. One should then discount their beliefs (including one's own initial beliefs) as appropriate, in the light of that assessment.

(iii) Correct and Revise: Finally, one should arrive at new first-order beliefs about the subject matter by giving due weight to people's discounted beliefs, as appropriate.
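Here is the promised toy numerical rendering of the procedure. Every modeling choice in it is an invented simplification (opinions as point estimates of a single quantity, bias as an additive offset, the "background theory" as a lookup table of predicted offsets and reliability weights); it is a sketch of the three steps, not a proposal about how anyone actually carries them out.

def bracket_and_revise(reports, background_theory):
    # Step (i), Bracket: one's own opinion enters only as one report
    # among others, with no privileged role in the assessment of bias.
    # Step (ii), Assess and Discount: subtract the offset that the
    # background theory predicts for each party, and weight each
    # corrected report by that party's assessed reliability.
    # Step (iii), Correct and Revise: adopt the weighted average of the
    # corrected reports as one's new first-order belief.
    total = 0.0
    total_weight = 0.0
    for party, opinion in reports.items():
        predicted_bias, weight = background_theory[party]
        total += weight * (opinion - predicted_bias)
        total_weight += weight
    return total / total_weight

# Hypothetical example: the background theory predicts that "me" runs
# two units too far to the right and treats both parties as equally
# reliable, so my own initial opinion of 7.0 is corrected to 5.0.
reports = {"me": 7.0, "you": 4.0}
theory = {"me": (2.0, 1.0), "you": (0.0, 1.0)}
print(bracket_and_revise(reports, theory))  # prints 4.5

Even this crude version makes the crucial dependency visible: the output is only as good as the lookup table, so the procedure's value turns on the quality of one's background theories of bias, a point to which I return below.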
On this view, the correct vantage point for attributing bias about a subject matter is one that in effect ignores one’s own first-order views about that subject matter.9 What should we think about this proposal? At least in some relatively simple and straightforward cases, the strategy might very well be an effective heuristic. For example, if I notice that my beliefs about the quality of my children’s athletic and musical performances are consistently more positive than the judgments of people who aren’t related to my children, it makes sense for me to revise my beliefs in the obvious direction; and it’s plausible that the reasoning that I engage in when I do this can be reconstructed along the lines suggested by the bracketing procedure. (Although even in this case, alternative reconstructions are also possible.) However, although the strategy might be an effective
heuristic in some cases, let me close by noting some of its limitations, and why I'm skeptical that there is some interesting and true general epistemological principle that requires the relevant kind of bracketing and independent assessment. First, it's worth bearing in mind that it's ultimately an empirical question how well such a strategy works in practice, and how it compares to allowing one's first-order views about the target domain to play some role in influencing one's assessments of bias. Presumably, how well the strategy works in practice will depend on the quality of one's general theories of bias and how competently one applies those theories to particular cases as they arise. With respect to the comparative question, there are at least two cases to consider. First, there is the case in which the believer's original first-order views about the target domain are largely biased; second, there is the case in which they are not. Plausibly, if one's first-order judgments are largely unbiased to begin with, taking those judgments into account in making higher-order assessments of bias would be helpful, or at least, not harmful. Particularly in cases in which one's original, first-order views about the domain are largely accurate, it would be advantageous to be able to make use of those views in reasoning about who is likely to be biased. In those favorable circumstances, the bracketing procedure in effect amounts to throwing away valuable information. If there is a case to be made for the bracketing procedure then, it seems as though it will largely rest, as one would expect, with its beneficial effects in those cases in which one's original judgments are biased in ways that are undetectable from the inside. Moreover, given that one is uncertain whether one's first-order judgments are biased in this way, the bracketing procedure is in effect a hedge against this possibility. While fully acknowledging that it's ultimately an empirical question how well this will work in particular cases, let me record a pessimistic speculation. Again, the strategy will be vindicated if, often enough in practice, people who have inaccurate and biased views about the original subject matter manage to arrive at relatively unbiased and accurate views by applying it. The pessimistic speculation is simply this: once we move away from relatively simple and straightforward cases such as the parent who is biased about the quality of their child's performances, then, generally speaking, a person whose original first-order views about a subject matter x are biased to begin with will not be good at arriving at accurate and unbiased views about x by successfully executing the relatively sophisticated and subtle kind of reasoning that's called for by the bracketing technique. For example, generally speaking, someone with biased political opinions will in practice not be good at arriving at unbiased and accurate political opinions by first figuring out who is likely to be biased about political questions (and to what extent and in what ways) by applying their substantively neutral background theories about how political bias operates, and then discounting and revising their political opinions in the light of those assessments. Instead, I suspect that what we will find is the following. First, as is suggested by the bias blind spot literature,10 people's first-order views about a topic will in practice more or less inevitably influence their assessments of bias.
Second, this will give rise to a “rich get richer” and “poor get poorer” effect, in accordance with the general picture sketched in this book. In brief:
The Rich Get Richer: A person who at a given time has relatively accurate and unbiased first-order views about a topic will be well positioned to arrive at relatively accurate and unbiased views about which people and sources of information are biased and unbiased about that topic. This in turn will increase their chances of arriving at further accurate and unbiased views about the topic in the future, and so on, in an epistemically virtuous cycle. The Poor Get Poorer: In contrast, a person who begins with biased views about a topic will tend to be unreliable and biased in their assessments of which people and sources of information are biased about that topic. They will tend to judge, incorrectly, that people and sources of information whose biases align with their own are unbiased. And they will tend to judge, also incorrectly, that people and sources who don’t share their biases—and in fact are relatively free from bias—are biased. These mistaken assessments of bias will in turn tend to further reinforce and amplify the characteristic biases displayed by their first-order views about the topic in the future, views that they arrive at because they rely on biased sources and avoid unbiased ones, in an epistemically vicious cycle.
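The self-amplifying character of these two cycles can be exhibited in a toy deterministic model; every functional form and constant below is invented purely for illustration, and no empirical claim is intended. The model assumes equally many unbiased and biased sources, takes "accuracy" to be the probability that a single assessment of a source is correct, trusts a source only when two of three independent assessments deem it unbiased, and sets next period's accuracy to the expected truth-rate of beliefs drawn from the trusted sources (0.9 for an unbiased source, 0.1 for a biased one).

def majority_correct(a):
    # Probability that at least two of three independent assessments,
    # each correct with probability a, are correct.
    return a**3 + 3 * a**2 * (1 - a)

def step(accuracy):
    kept_good = majority_correct(accuracy)      # unbiased source trusted
    kept_bad = 1 - majority_correct(accuracy)   # biased source mistaken for unbiased
    share_good = kept_good / (kept_good + kept_bad)
    return 0.1 + 0.8 * share_good               # expected accuracy of the new beliefs

rich, poor = 0.6, 0.4
for _ in range(10):
    rich, poor = step(rich), step(poor)
print(round(rich, 2), round(poor, 2))
# The initially more accurate agent climbs toward roughly 0.85, while
# the initially less accurate agent sinks toward roughly 0.15: a small
# initial difference is amplified round after round.

Nothing here should be mistaken for psychology; the sketch only shows how, once assessments of bias feed back into the selection of sources, modest initial differences in accuracy can compound rather than wash out.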
In this way, bias not only perpetuates itself but grows deeper and stronger over time. On that perhaps depressing note, let us conclude.
1 For relevant discussion, see Goldman (2001), Lackey (2013), Gallow (2018), Barnett (2019), and Grundmann (forthcoming). 2 On the epistemic dangers of the practice of reason-giving in contexts in which bias is a concern, see especially Kornblith (1999). 3 This is particularly clear on a Bayesian probabilistic explication of evidence, according to which a fact E is evidence for a hypothesis H only if the conditional probability of H on E (Pr(H/E)) is greater than the unconditional or prior probability of H (Pr(H)). In the present example, let E be the fact that the expert believes H. It's stipulated that the expert would believe H regardless of whether H is true or false, so both the conditional probability of E on H (i.e. the likelihood Pr(E/H)) and the prior probability of E (Pr(E)) are 1. By Bayes' theorem, this guarantees that Pr(H/E) will be equal to the prior probability of H, so it follows that E isn't evidence for H, since it does nothing to raise its probability. 4 As noted there, in principle, each of us might think the following of the other: their political beliefs are exactly what it's reasonable for them to think given the biased sources of information on which they've unfortunately come to rely. In general, although each of us will see the other as biased for believing as they do, we might or might not see the other as unreasonable, depending on how the details of the case are filled in. 5 The most careful and detailed development of this idea that I know of is due to Nathan Ballantyne (2019:Ch. 5). The line of thought that I sketch and critically examine here is very much in the spirit of Ballantyne's argument, without following it in detail. 6 For doubts of this kind, see Ballantyne (2019). In some cases where the disagreeing parties know each other well, the challenging aspect of the situation might instead be that they're both in a position to offer plausible and non-question-begging debunking explanations of the other's beliefs, something that would maintain the symmetry between them. For example, there are some members of my extended family with whom I systematically disagree about politics of whom the following is true: in their case, I believe that I could offer plausible explanations for why they would believe as they do regardless of the actual facts of the matter, explanations that don't presuppose the correctness of my first-order political views. The reason why I don't put much stock in this fact is that I strongly suspect that they could do the same for me. 7 Representative statements of the theme include DeRose (1995), Pryor (2000), and Byrne (2004). 8 In the context of a discussion of implicit bias, Saul (2013) emphasizes this apparent contrast between traditional forms of skepticism and those that are motivated by empirical discoveries about bias. For the point in the context of interpersonal disagreement, see again Ballantyne (2019). 9 In requiring that one bracket or set aside one's own first-order views for the purposes of making judgments about bias, the procedure outlined here has clear affinities to the kind of "Independence" principles that have played a central role in the epistemology of disagreement literature. To a first approximation, Independence principles require that, in a case in which
you believe p and I believe not-p, you bracket your belief that p and the reasoning behind it for purposes of assessing the epistemic credentials of my belief that not-p, and vice versa. (Compare, e.g., Christensen 2009:758). Notice, however, that the kind of bracketing called for by the procedure outlined here seems to go well beyond anything that’s required by standard Independence principles, notwithstanding the fact that such principles are themselves very controversial. For example, suppose that you discount my belief that not-p because you believe that I’m biased about the relevant domain, a belief that depends on your judging that I’ve systematically erred with respect to other questions that belong to the same domain. Suppose further that (as will in practice almost always be the case) your judging that I’ve erred with respect to these other questions depends on your own first-order views about those questions. So long as you don’t appeal to your belief that p or the reasoning behind it in judging that I’ve erred with respect to these other questions, your attribution of bias to me in the context of downgrading my belief about p will be consistent with standard Independence principles. However, it would be disallowed by the more demanding procedure outlined above. For endorsements and defenses of Independence principles, see Elga (2007), Kornblith (2010), Christensen (2011), Cohen (2013), Vavova (2014), and Matheson (2015). For criticism, see Kelly (2013), Lackey (2010), Lord (2014), and Sosa (2010). The most determined effort to work out the details of a defensible version of Independence is Christensen (2019); see also Moon (2018). 10 Particularly relevant here is the remarkable robustness of the bias blind spot phenomenon, e.g., its persistence even when people are explicitly made to attend to biases in the processes by which they arrived at their views. See Hansen et al (2014) and the further references cited there.
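The calculation promised in note 3, written out in full. This is simply a symbolic restatement of the steps given there, using the book's own slash notation for conditional probability and the stipulations that Pr(E/H) = 1 and Pr(E) = 1:

\[
\Pr(H/E) \;=\; \frac{\Pr(E/H)\,\Pr(H)}{\Pr(E)} \;=\; \frac{1 \times \Pr(H)}{1} \;=\; \Pr(H).
\]

Since Pr(H/E) = Pr(H), the conditional probability of H on E fails to exceed the prior probability of H, and so, on the Bayesian explication, E is not evidence for H.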
11. Main Themes and Conclusions

1. Five Themes

In the Introduction, I noted that much of this book can be read as a series of independent proposals addressed to distinct philosophical questions. Nevertheless, it's also true that a number of recurring themes have emerged over the course of its pages. By way of summary, here are five such themes.

The first theme is that of a robust pluralism about bias. The fact that in the normal course of everyday life we unhesitatingly attribute bias to a radically diverse collection of things isn't a matter of overly casual speech or sloppy thought on our part. Rather, it reflects reality. Moreover, of the many different types of things that are genuinely biased, no one of these types is fundamental in every context.

The second theme concerns the deep connections between bias and norms or standards of correctness, as represented in the norm-theoretic account of bias. An important link between the first and second themes is this: the fact that many radically diverse things can be genuinely biased reflects the fact that many radically diverse things can systematically deviate from genuine norms or standards of correctness in the way that's characteristic of bias.

The norm-theoretic account is concerned with the nature of bias: it's a (partial) theory of what bias is, or what it is for something or someone to be biased. As I see it, we should distinguish sharply between theories of bias and theories of bias attributions, or theories about the norms that govern our practices of attributing bias. Nevertheless, the norm-theoretic account naturally gives rise to certain ideas about how bias attributions work. This leads to a third theme of the book, the perspectival character of bias attributions. The perspectival character of bias attributions has both a psychological and a rational aspect. At the psychological level, first-order disagreements about a topic naturally bleed over into disagreements about who is and who isn't biased about that topic. But what holds at the level of psychology holds also for rationality: our first-order views about the world rationally influence and constrain our higher-order judgments about who is and isn't biased about the world (and vice versa), in characteristic and systematic ways.

A theorist who endorses the idea that bias attributions are perspectival in this sense could consistently—and indeed, quite naturally—combine that idea with some form of subjectivism or radical egalitarianism about bias and/or bias attributions. ("Bias is in the eye of the beholder.") However, I've consistently eschewed such views. Instead, I've argued for and emphasized the importance of the objectivity of bias and bias attributions (a fourth theme). Notwithstanding its perspectival character, the truth of a paradigmatic attribution of bias doesn't depend on the identity of the attributor, any more than the truth of an ordinary, run-of-the-mill first-order claim about the world depends on the identity of the person who makes it. Similarly, whether a person is biased or unbiased is generally independent of any attitude that they might or might not have towards norms of objectivity that proscribe bias: even people who disavow norms against bias might be unbiased, just as people who sincerely endorse such norms might be biased.

Far from embracing any kind of radical egalitarianism in this area, I've suggested that when it comes to bias, we should expect to find strong "rich get richer, poor get poorer" effects: bias will tend to breed more bias, and initial differences in the extent to which people are biased will often ramify and intensify with time, in systematic and predictable ways. Moreover, when it comes to interpersonal disagreement, I've held that a person who is in fact unbiased will generally and all else being equal be in a stronger position to rationally resist skeptical pressure than a person who is in fact biased; and this is so even if things seem the same from the inside, and even if neither person can supply a non-question-begging reason for thinking that she, but not the other, is unbiased. Although these last claims about disagreement are certainly not entailed by the kind of objectivity about bias and bias attributions that I favor, they are hopeless without it.

What is the connection between the theme of objectivity and the norm-theoretic account? According to the norm-theoretic account of bias, the norms relative to which someone counts as biased in the pejorative sense are objective norms, and whether someone departs from them systematically is itself an objective matter of fact, as opposed to a matter of opinion or perspective.

A fifth and final theme, connected in obvious ways to the previous ones, is that of a thoroughgoing externalism about bias, with respect to both its epistemology and its metaphysics. A person might be biased, even severely biased, without being in a position to recognize that they are. Indeed, unlike many other personal shortcomings or deficiencies, the bias itself might play an active and essential role in the person's inability to recognize that they have it. Even if the person isn't in a position to recognize their bias, it might compromise the epistemic status of their beliefs, as when a belief that's true nevertheless fails to count as knowledge because it's a manifestation of bias. While many would accept this kind of externalism about the epistemology of bias, the kind of externalism that I advocate is more radical and extends to its metaphysics as well. When a person is biased because they systematically depart from a norm, the biasing mechanism that's responsible for this might be located outside of the person herself. Of course, in many paradigmatic cases of bias, the biasing mechanism will be located inside the person. This is true, for example, in cases of wishful thinking, in which a person's desires distort their reasoning processes in a particular direction. But not all cases of bias are like wishful thinking in this respect.
Another paradigmatic case of bias involves the thinker who has unfortunately and unknowingly come to robustly rely on what is in fact a biased source of information about some topic. In such cases, the thinker both systematically departs and is disposed to systematically depart from the truth (and so counts as biased), but the biasing mechanism is located outside of their own cognition. Although in many such cases the fact that the thinker has come to robustly rely on the biased source might be due to their own, internal biases, that's certainly not an essential element of the case. Indeed, there can be cases in which the thinker's robust reliance on the biased source of information is a perfectly rational response to misleading evidence. Even if that diminishes or eliminates the extent to which the thinker is blameworthy in some respect, it doesn't mean that they are unbiased.
2. Conclusions

Sarah McGrath concludes her book Moral Knowledge (2019) by summarizing the main claims that she endorses in the course of its pages, along with references to the specific chapters and sections where those claims are discussed. I think that her example is a good one, and that the relevant practice would have a salutary effect on philosophical discussions if it were adopted more widely. For that reason, I will shamelessly steal it here.
Part I: Conceptual Fundamentals
Chapter 1. Diversity, Relativity, Etc.
1. Diversity
(1) Biased and unbiased are contraries, not contradictories.
(2) *We attribute bias to a wide variety of things belonging to diverse ontological categories.*
2. Relativity
(3) *Whether someone or something counts as biased can be a relative matter.*
3. Directionality
(4) Any bias has a direction or a valence. Claims of the form "So-and-so is biased" should generally be understood as elliptical.
(5) Negative biases and positive biases typically come paired with one another in a complementary package. In some cases, the negative bias is more fundamental and explains the positive bias, while in others the positive bias is more fundamental. In still other cases, neither bias is more fundamental nor explains the other.
(6) A judgment might be biased in favor of X and against Y even if its content doesn't favor X over Y, and even if its content favors Y over X.
4. Bias about Bias
(7) People and groups often exhibit higher-order biases.
5. Biased Representation
(8) Unbiased judgments can be false, and biased judgments can be true.
(9) An account might be biased even if it's entirely true and known to be true by both the person who offers it and their audience.
(10) *In order to sustain a charge of bias, the person making the charge often incurs a commitment to substantive normative claims.*
6. Parts and Wholes
(11) Often, when a whole is biased, the explanation for this will be that at least some of its parts have the relevant bias themselves, or some relevantly similar bias. Similarly, if a whole is unbiased, the explanation for this will often be a lack of bias among its parts or members.
(12) Particularly when we're interested in explaining why a particular bias persists through time, a comprehensive explanatory story might very well include both "bottom-up explanations" (in which we explain the bias of the whole in terms of bias at the level of its parts) as well as "top-down explanations" (in which we explain the bias of the parts in terms of bias at the level of the whole).
(13) Often, even a successful top-down explanation of why bias is present or widespread among the parts won't explain why any of the parts has the relevant bias in the first place.
(14) We should distinguish between the severity, the entrenchment, and the historical contingency of a bias, each of which can vary independently of the others.
(15) A whole might be unbiased even if its parts are biased to a high degree.
(16) Conversely, a whole might be biased even if it has no biased parts.
(17) *EMERGENT BIAS: A whole might be biased even if (i) it has at least some proper parts that are either biased or unbiased, and (ii) all of those parts are unbiased.*
(18) *Thus, having biased parts is neither a necessary nor a sufficient condition for a whole's being biased.*
(19) Even if a news organization offers perfectly unbiased coverage of every issue that it covers, its overall coverage of the news might still be biased.
Chapter 2. Pluralism and Priority
1. Explanatory Priority
(20) *ROBUST PLURALISM ABOUT BIAS: (i) many different types of things are genuinely biased, and (ii) no one of these types is fundamental in every context.*
2. Are People (Ever) the Fundamental Carriers of Bias?
(21) PERSONS NOT ALWAYS FUNDAMENTAL: In at least some cases, the fact that a person is biased is a derivative matter: it depends on the way they are related to something else that's biased, and the fact that they count as biased is grounded in the more fundamental fact that this other thing is biased.
(22) PERSONS NEVER FUNDAMENTAL: Whenever a person is biased, this is a derivative matter: it depends on the way in which they are related to some other thing or things that are biased, and the fact that the person is biased is grounded in more fundamental facts about these other biased things.
(23) When seeking a causal explanation of why a particular biased outcome occurred, the best explanation will often invoke the biases of the people involved, for alternatives will make the outcome seem more contingent or fragile than it really was.
3. Processes and Outcomes
(24) In some cases, whether a judgment or belief counts as biased seems to depend entirely on the process by which it's produced, irrespective of its content; but in other cases, whether a judgment or belief counts as biased does seem to depend on its content.
4. Unbiased Outcomes from Biased Processes?
(25) *UNBIASED OUTCOMES FROM BIASED PROCESSES: In some cases, biased processes can produce unbiased outcomes.*
(26) Even when a biased process produces an outcome that aligns with the content of its bias, that outcome might still be unbiased.
(27) An outcome of a process is biased if:
• the process that produces it is biased, and
• the outcome is the type of outcome that's favored by the process's bias, and
• the fact that the process is biased in the way it is is causally responsible for the fact that the outcome aligns with the content of the process's bias.
(28) One bias might be realized by another bias.
5. Biased Outcomes from Unbiased Processes?
(29) *BIASED OUTCOMES FROM UNBIASED PROCESSES: Unbiased processes can sometimes produce biased outcomes.*
6. Pluralism
(30) *We can explain why robust pluralism about bias is true in terms of:
• the fact that many fundamentally different kinds of thing are subject to and can systematically depart from norms or standards in a way that's characteristic of bias; and
• the fact that the norms or standards relative to which something can count as biased don't all apply to the same type of thing in the first instance.*
Part II: Bias and Norms
Chapter 3. The Norm-Theoretic Account of Bias
1. The Diversity of Norms
(31) *In both everyday life and in the sciences, much of our thought and talk about bias seems to be captured by the following idea: a bias involves a systematic departure from a norm or standard of correctness.*
(32) The relevant norm varies from context to context. Among the most important are norms of practical rationality, epistemic norms, moral norms, norms of justice, and the norm of truth or accuracy.
2. Disagreement
(33) Disagreements about whether a person or thing is biased might involve disagreements about:
• whether they actually depart from some contextually salient norm, or
• whether their departures from that norm are sufficiently systematic, or
• whether the alleged norm from which they systematically depart is a genuine norm at all.
3. The Perspectival Character of Bias Attributions
(34) *Accusations of bias often inspire not only denials but also countercharges of bias. This phenomenon is best explained by the perspectival character of bias attributions.*
(35) *In whatever sense believing something rationally commits one to thinking that anyone who disagrees is departing from the truth, one is similarly rationally committed to thinking that anyone who systematically disagrees with one's beliefs about some topic is biased about that topic.*
(36) *Attributions of bias have a perspectival character. This perspectival character has both a psychological and a rational aspect.*
(37) Psychologically, our first-order views about a topic naturally influence our higher-order judgments about who is and who isn't biased about that topic (and vice versa), in predictable ways.
(38) *At the level of rationality, our first-order views about a topic rationally influence and constrain our higher-order judgments about who is and who isn't biased about it (and vice versa), in systematic ways.*
(39) Disagreements about first-order questions—for example, about the merits of alternative political policies—naturally bleed into disagreements about who is biased and who isn't.
(40) In certain kinds of systematic disagreements, one is more or less forced to see those on the other side as biased. Because we appreciate this on some level, we naturally diagnose attributions of bias that are mistaken from our point of view as an artifact of the attributor's own, opposite bias.
(41) *Notwithstanding its perspectival character, the truth of a claim of bias is no more relative to the identity of the person who makes it than is the truth of an ordinary, first-order claim about the world.*
(42) *In many cases of persistent disagreement, we are rationally committed to viewing those who disagree with us not only as mistaken but also as biased, even if we know nothing about how they arrived at their views, or why they currently hold those views.*
4. When Norms Conflict
(43) In some circumstances, the only way of successfully complying with one salient norm might guarantee that one systematically departs from another salient norm.
(44) *RATIONALITY SOMETIMES REQUIRES BIAS: At the level of both thought and action, rationality sometimes requires bias, in the pejorative sense.*
(45) *MORALITY SOMETIMES REQUIRES BIAS: In some circumstances, one is morally required to be biased, in the pejorative sense.*
(46) Whether a person counts as biased does not track either the importance or the fundamentality of the norm from which they depart. In some cases, a person counts as biased in virtue of systematically departing from a norm even though their doing so is motivated by the need to comply with another norm that's of overriding importance.
(47) The truth in the neighborhood of "the liberal view": When a systematic departure from a norm would otherwise amount to a bias, the mere fact that departing in that way is an inevitable consequence of complying with some other norm doesn't make the original charge of bias inapplicable or inapposite.
(48) The truth in the neighborhood of "the relativist view": Sometimes, we evaluate whether a person counts as biased from a perspective defined by a particular set of norms, but we could equally well take up an alternative perspective, in which case the correct thing to say about whether the person counts as biased might very well reverse.
(49) The truth in the neighborhood of "the priority view": In some contexts, a given set of norms can have a kind of de facto privileged status in determining whether someone counts as biased or unbiased.
Chapter 4. The Bias Blind Spot and the Biases of Introspection
1. Introspection as a Source of the Bias Blind Spot
(50) *Insofar as introspection contributes to the bias blind spot, the important point is not that relying on introspection is an unreliable method for detecting bias, but that it's a biased method for detecting bias.*
2. Why We're More Likely to See People as Biased When They Disagree with Us
(51) The perspectival account of bias attributions offers a compelling explanation for why we tend to see people who disagree with us as more biased than people who agree with us.
3. Is It a Contingent Fact That Introspection Is an Unreliable Way of Telling Whether You're Biased?
(52) *The unreliability of introspection as a way of detecting bias is not a contingent fact.*
(53) *The perspectival account of bias attributions offers a compelling explanation for why introspection is, and had to be, an unreliable detector of bias.*
(54) *Metaphysical Externalism About Bias: One's biases do not supervene on one's internal states and the causal relationships among those states.*
4. How the Perspectival Account Explains the Bias Blind Spot, as Well as the Biases of Introspection
(55) *Independently of anything having to do with introspection, the perspectival account of bias attributions offers a compelling explanation for the bias blind spot.*
(56) *In addition, the perspectival account offers a compelling explanation of why introspection is, and had to be, a biased detector of bias.*
5. Against "Naïve Realism", For Inevitability
(57) *Explanations of the bias blind spot in terms of "naïve realism" (in the social psychologist's sense) are largely vacuous.*
(58) *INEVITABILITY THESIS: Even if no other psychological mechanism were operative, the perspectival character of bias attributions guarantees that human beings would still suffer from "a bias blind spot."*
Chapter 5. Biased People
1. Biases as Dispositions
(59) *Typically, when one attributes bias to an individual person or to a group of people, one attributes to that individual or group a certain tendency or disposition.*
(60) *A biased person is disposed to systematically depart from a norm or standard of correctness.*
(61) When a person counts as biased in virtue of having a certain disposition, the disposition might be a robust aspect of their psychology, or it might disappear as soon as the external circumstances change.
(62) The fact that a person is biased in a certain way at a particular time might be a matter of their being biased in other ways at that time.
(63) *Biases are typically multiply realizable.*
(64) Because biases are multiply realizable, questions about what realizes or instantiates a given bias in human beings should be sharply distinguished from questions about what that bias is.
(65) A human being might share the same bias with an abstract entity that lacks any mental states, such as a forecasting model.
(66) Biases are gradable: they admit of degrees.
(67) A person or thing might count as unbiased even if they aren't perfectly unbiased. Generally speaking, how close to the ideal one needs to be in order to count as unbiased is both vague and context-sensitive.
2. Bias as a Thick Evaluative Concept
(68) In both ordinary and academic discourse, "bias" and its cognates often function like thick evaluative terms, in the ethical theorist's sense. Attributions of bias are frequently used to make or implicate normative claims.
(69) *When a person is biased in the pejorative sense, there will generally be some other failing or shortcoming of which they are guilty, which can be characterized independently of the bias, and that is in some respects more fundamental.*
(70) Because of this, the claim that someone is biased will typically presuppose substantive and potentially controversial evaluative or normative claims about how it's appropriate to think or act in given circumstances, claims that are not themselves about bias.
(71) Claims of bias will thus inherit whatever contentiousness attaches to the conceptually more fundamental evaluative or normative claims on which their truth depends.
(72) Although a biased agent is typically guilty of some more fundamental shortcoming, this need not diminish the significance (moral or otherwise) of the fact that they are biased as opposed to merely guilty of the more fundamental shortcoming.
(73) Even in a case in which an unbiased agent who departs from a genuine norm would be an inappropriate object of censure or blame, it doesn't follow that the agent who commits the same error out of bias is.
3. Biased Believers, Biased Agents
(74) A person might count as biased against some person, group, or thing because of what she thinks, how she acts, or what she feels.
(75) RADICAL HETEROGENEITY: In principle, two people might both count as biased against Xs (alternatively: biased in favor of Xs), even though they share none of the same X-related mental states, perform none of the same X-related actions, and share none of the same X-related behavioral, cognitive, or affective dispositions.
4. Biased Agents, Unreliable Agents
(76) In some contexts—for example, some competitive contexts in which considerations of fairness are paramount—we will have reasons to prefer an unbiased agent to a biased one, even if the unbiased agent is less reliable. But in other contexts—for example, contexts in which we are particularly concerned with predictability—we might have good reasons to positively value the systematicity of the errors committed by the biased agent, as opposed to the unsystematic errors committed by the unbiased agent.
5. Overcompensation
(77) An agent might count as biased in virtue of systematically departing from a norm because they sincerely endorse and actively try to follow that norm.
(78) Given that "biased" and "unbiased" are contraries and not contradictories, one and the same fact might be evidence that something is biased and also evidence that it's unbiased.
(79) Although one and the same fact might support both the claim that an agent is biased and also the claim that they are unbiased, in no case will it be evidence for the claim that they are biased in such-and-such a way at a certain time, and also evidence that they are unbiased in that very same way, and at that very same time.
Chapter 6. Norms of Objectivity
1. Some Varieties
(80) Although typical cases of bias involve the violation of norms that are not specifically concerned with bias, some norms—norms of objectivity—are specifically concerned with bias.
(81) Norms of objectivity include norms of preemption, norms of remediation, and constitutive norms of objectivity.
(82) Norms of preemption include norms of blinding, norms of recusal, and some norms of public reason.
(83) It is characteristic of norms of remediation that they call for a course of action that would itself be subject to the charge of bias, if not for the fact that the course of action is a response to past instances of bias.
(84) Some norms of objectivity, including some norms of representation and inclusion, are best understood as both norms of preemption and norms of remediation.
2. Constitutive Norms of Objectivity
(85) *The distinguishing mark of a constitutive norm of objectivity is that any departure from it ipso facto amounts to a case of bias.*
(86) Among norms of objectivity, constitutive norms have a certain priority to both norms of preemption and remediation: the reasons that we have to follow norms of preemption and remediation derive from the reasons that we have to follow constitutive norms.
(87) A person can be unbiased even if they don't endorse norms that proscribe bias, and even if they sincerely disavow such norms. Conversely, a person might be biased even if they sincerely endorse norms that proscribe bias.
3. Following the Argument Wherever It Leads
(88) Although it's characteristic of the biased thinker to be dogmatically committed to believing certain things and dogmatically averse to believing other things, it doesn't follow that their beliefs about those topics are unreasonable.
(89) The intellectual ideal of following the argument wherever it leads is a more demanding standard than believing reasonably or having beliefs that are proportioned to the evidence.
(90) Similarly, the ideal of following the argument wherever it leads is a more demanding standard than the mere absence of motivated irrationality.
(91) *The ideal of following the argument wherever it leads is best understood as a kind of modalized reasonableness.*
(92) *FOLLOWING THE ARGUMENT WHEREVER IT LEADS: One who is engaged in an inquiry is following the argument wherever it leads if and only if:
(1) For any proposition at issue in the inquiry which one believes:
(i) One's belief is reasonable, and
(ii) One is disposed to abandon the belief in response to its becoming unreasonable to hold it, and
(2) For any proposition at issue in the inquiry which one doesn't believe:
(iii) One's refraining from belief is reasonable, and
(iv) One is disposed to acquire the belief in response to its becoming unreasonable to continue refraining, and
(3) One isn't dogmatically averse to considering evidence that bears on any proposition that's at issue in the inquiry.*
Chapter 7. Symmetry and Bias Attributions
1. Two Challenges
(93) A good account of our bias-attributing practices should explain what pejorative and non-pejorative uses of "bias" have in common, as well as how they differ.
2. Norms without Bias?
(94) Some systematic departures from norms don't seem to involve biases.
3. Symmetry
(95) Symmetry considerations play a central role in our thinking about bias. (i) Many familiar biases are naturally conceptualized in terms of symmetry violations. (ii) The importance of symmetry considerations in our thinking about bias is also seen in various heuristics or tests that have been proposed to help us detect or overcome bias. (iii) When a person is disposed to systematically depart from a norm, the more that norm is naturally conceptualized as a symmetry standard, the more natural it will be to consider the disposition a bias, all else being equal.
4. Bias without Norms?
(96) We are often willing to attribute bias to an agent or believer when they are disposed to systematically depart from a contextually salient symmetry standard, even if we deny that that standard is a genuine norm.
5. Pejorative vs. Non-Pejorative Attributions of Bias
(97) Cases in which a person counts as biased in the pejorative sense can be understood as a special case of a more general phenomenon: the special case in which the standard from which the agent is disposed to systematically depart is a genuine norm, as opposed to a standard that might or might not be a genuine norm.
Part III: Bias and Knowledge
Chapter 8. Bias and Knowledge
1. Biased Knowing
(98) *BIASED KNOWING: Even if a bias is sufficiently strong to make a given belief inevitable, it doesn't follow that that belief is not knowledge. Biased believers can sometimes know, even when they believe in accordance with their biases, and even if those biases guarantee that they would believe as they do even if the truth were otherwise. In this respect, being biased is consistent with knowing.*
2. Can Biased Beliefs Be Knowledge?
(99) Even if a believer is heavily biased about a question, so that they would believe as they do regardless of the truth, their belief about that question might still be unbiased.
3. Are Biases Essential to Knowing?
(100) Biases—or at least, things that resemble paradigmatic biases in central respects—are deeply implicated in paradigmatic modes of knowledge acquisition, including sense perception, inductive reasoning, language learning, and scientific inquiry.
4. Knowledge and Symmetry
(101) Insofar as "bias" can be attributed in paradigmatic cases of knowledge acquisition, this is because the relevant cognitive processes depart from contextually salient symmetry standards that are not genuine norms. However, the fact that such a standard is violated in the process of arriving at a belief has no tendency to show that the belief falls short of knowledge.
5. How and When Bias Excludes Knowledge: A Proposal
(102) *BIASED BELIEFS AREN'T KNOWLEDGE (Unqualified): If a token belief is the manifestation of a tendency to systematically depart from a genuine norm of belief, then it isn't knowledge.*
(103) *BIASED BELIEFS AREN'T KNOWLEDGE (Qualified): If a token belief is the manifestation of a tendency to systematically depart from an epistemic norm, then it isn't knowledge.*
Chapter 9. Knowledge, Skepticism, and Reliability
1. Biased Knowing and Philosophical Methodology
(104) *The fact that biased knowing is possible gives us good reason to reject various methodological norms that philosophers have proposed.*
2. Are We Biased Against Skepticism?
(105) Although the traditional skeptic is not ordinarily presented as levelling the charge of bias against us, that's a natural way of reconstructing his view about our relationship to common sense.
(106) Insofar as we're biased in favor of common sense, the bias is severe, deeply entrenched, and not historically contingent.
(107) Even if one knows that one is biased in favor of believing p over q, it doesn't follow that one should not believe p rather than q.
(108) *Recognizing the possibility of biased knowing allows us to concede that there is a sense in which we're biased against skepticism, while clearheadedly retaining our claims to have genuine knowledge and justified beliefs in the face of the skeptic's challenges.*
(109) The skeptic is right in holding that our beliefs to the effect that the skeptical scenarios don't obtain fall short of a genuine ideal, but fulfilling that ideal isn't a necessary condition for either knowing or justifiably believing. The resulting view is a relatively attractive option with respect to questions about what should and shouldn't be conceded to the skeptic.
3. Reliability and Contingency
(110) A biased thinker might be highly reliable if their bias dovetails with their environment in the right way.
(111) Apparent platitudes like "Biased thinkers are unreliable" should be interpreted as generic generalizations that tolerate exceptions, as opposed to universal generalizations that do not.
4. A Tale of Three Thinkers
(112) A paradigmatically biased thinker and a paradigmatically unbiased thinker might be equally reliable when it comes to their beliefs about some topic.
(113) When it comes to our most basic ways of forming beliefs, we differ significantly from the practices of both paradigmatically biased thinkers and paradigmatically unbiased thinkers.
(114) Arriving at true beliefs in a way that's non-accidentally reliable is sufficient for knowing, even if from the inside the believing subject has no way of distinguishing between such beliefs and manifestations of bias in the pejorative sense.
Chapter 10. Bias Attributions and the Epistemology of Disagreement
1. On Attributing Bias to Those Who Disagree with Us
(115) When it comes to assessing a person's credibility with respect to a given issue, their scoring poorly along "the bias dimension" tends to trump or neutralize their scoring well with respect to other salient dimensions.
(116) The epistemic costs of the kind of insensitive believing that is characteristic of the biased believer are largely social, inasmuch as believing in this way affects the individual's capacity to serve as an epistemic resource to others.
(117) The evidential standards for making and sustaining a public charge of bias are relatively high: if one publicly accuses another person of bias, one is expected to be able to produce substantial evidence for the accusation, of a sort that would justify others in believing it.
(118) *In contrast, so long as one's concern is with regulating one's own beliefs, it can be rational to discount for bias even in the absence of anything like a "smoking gun" or rationally compelling evidence: there is no general presumption of innocence, to the effect that people and their opinions should be assumed to be unbiased in the absence of hard evidence to the contrary.*
2. The Case for Skepticism
(119) Given certain facts about bias and the bias blind spot, there is a prima facie formidable argument that we should be much less confident of our opinions about many controversial issues, even if it's not true that we should give more weight to the opinions of competent people who disagree with us about those issues.
3. Against Skepticism
(120) *Given that equal susceptibility to the bias blind spot is compatible with other normatively relevant asymmetries among people who disagree, there is no compelling argument from facts about the alleged pervasiveness of the bias blind spot to strong conclusions about the rational impermissibility of steadfastly maintaining one's views in response to disagreement.*
(121) *Even if the only reasons that an unbiased person can offer for her correct assessments of bias beg the question against the biased person, it doesn't follow that those assessments are unwarranted.*
(122) *The unbiased person will generally and all else equal be in a stronger position than the biased person to rationally resist skeptical pressure from disagreement.*
(123) *There is no general requirement that one bracket or set aside one's first-order opinions about a topic when it comes to making higher-order judgments about who or what is biased about that topic, although doing so might be a useful heuristic in some cases.*
(124) *The Rich Get Richer: A person who at a given time has relatively accurate and unbiased first-order views about a topic will be well positioned to arrive at relatively accurate and unbiased higher-order views about which people and sources of information are biased and unbiased about that topic. This in turn will increase their chances of arriving at further accurate and unbiased first-order views about the topic in the future, and so on, in an epistemically virtuous cycle.*
(125) *The Poor Get Poorer: In contrast, a person who begins with biased views about a topic will tend to be unreliable and biased in their assessments of which people and sources of information are biased about it. They will tend to judge, incorrectly, that people and sources of information whose biases align with their own are unbiased. And they will tend to judge, also incorrectly, that people and sources who don't share their biases—and in fact are relatively free from bias—are biased. These mistaken assessments of bias will in turn tend to further reinforce and amplify the characteristic biases displayed by their first-order views about the topic in the future, views that they arrive at because they rely on biased sources and avoid unbiased ones, in an epistemically vicious cycle.*
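Taken together, (124) and (125) describe a feedback loop, and the shape of that loop can be made vivid with a toy simulation. The sketch below is an editorial illustration, not a model from the book: the source qualities, the agreement-based vetting rule, and the update rule are all invented stipulations, chosen only to exhibit how an agent who judges sources by their agreement with her own first-order views (the tendency described in (125)) can be carried into either a virtuous or a vicious cycle depending on where she starts.

import random

def simulate(initial_accuracy, rounds=40, n_questions=200, seed=1):
    # Toy model, not Kelly's: an agent vets sources by checking their
    # answers against her own first-order views, relies on the sources
    # that mostly agree with her, and her accuracy then drifts toward
    # the average quality of the sources she relies on.
    rng = random.Random(seed)
    sources = [0.9, 0.9, 0.9, 0.3, 0.3, 0.3]  # three unbiased, three biased
    accuracy = initial_accuracy
    for _ in range(rounds):
        trusted = []
        for quality in sources:
            agreements = 0
            for _ in range(n_questions):
                truth = rng.random() < 0.5
                agent_says = truth if rng.random() < accuracy else (not truth)
                source_says = truth if rng.random() < quality else (not truth)
                agreements += (agent_says == source_says)
            if agreements > n_questions / 2:  # the source seems unbiased to her
                trusted.append(quality)
        if trusted:
            diet = sum(trusted) / len(trusted)
            accuracy += 0.2 * (diet - accuracy)  # views drift toward the diet
    return accuracy

# Two agents who begin only slightly apart end up far apart:
print(round(simulate(0.55), 2))  # tends toward the unbiased sources (~0.9)
print(round(simulate(0.45), 2))  # tends toward the biased sources (~0.3)

Under these stipulations, the first agent's accuracy climbs toward the quality of the unbiased sources while the second's sinks toward that of the biased ones: the two cycles of (124) and (125) in miniature. Nothing here is an argument, of course; it is only an illustration of how small initial differences can ramify and intensify over time.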
Bibliography
Adler, Jonathan E. (1997). "Lying, Deceiving, or Falsely Implicating." Journal of Philosophy 94(9):435–52.
American Psychological Association, APA Dictionary of Psychology, https://dictionary.apa.org. Accessed 1 March 2020.
Anderson, Elizabeth (2010). The Imperative of Integration (Princeton University Press).
Anderson, Luvell, Haslanger, Sally, and Langton, Rae (2012). "Language and Race." In Russell and Fara (eds.), The Routledge Companion to Philosophy of Language (Routledge).
Antony, Louise (1993). "Quine As Feminist: The Radical Import of Naturalized Epistemology." In Antony and Witt (eds.), A Mind of One's Own: Feminist Essays on Reason and Objectivity (Westview Press):185–226.
Antony, Louise (2016). "Bias: Friend or Foe? Reflections on Saulish Skepticism." In Brownstein and Saul (eds.), Implicit Bias and Philosophy, volume 1 (Oxford University Press):157–90.
Antony, Louise (2021). "Bias." In Kim Q. Hall and Asta (eds.), The Oxford Handbook of Feminist Philosophy (Oxford University Press).
Appiah, Anthony (1990). "Racisms." In David Goldberg (ed.), Anatomy of Racism (University of Minnesota Press):3–17.
Appiah, Anthony (2005). The Ethics of Identity (Princeton University Press).
Armor, D.A. (1999). "The Illusion of Objectivity: A Bias in the Perception of Freedom from Bias." Dissertation Abstracts International: Section B: The Sciences and Engineering, 59, 5163.
Armstrong, David (1973). Belief, Truth and Knowledge (Cambridge University Press).
Armstrong, David (1999). "A Naturalist Program: Epistemology and Ontology." Proceedings and Addresses of the American Philosophical Association 73:2.
Arpaly, Nomy and Brinkerhoff, Anna (2018). "Why Epistemic Partiality is Overrated." Philosophical Topics 46(1):37–51.
Arthur, John (2007). Race, Equality, and the Burdens of History (Cambridge University Press).
Audi, Robert (2003). Epistemology, 2nd edition (Routledge).
Ballantyne, Nathan (2019). Knowing Our Limits (Oxford University Press).
Banks, Ralph R. and Richard T. Ford (2009). "(How) Does Unconscious Bias Matter?: Law, Politics, and Racial Inequality." Emory Law Journal 58(5):1053–122.
Banks, Ralph R. and Richard T. Ford (2011). "Does Unconscious Bias Matter?" Poverty and Race 20(5):1–2.
Barnett, Zach (2019). "Belief Dependence: How Do the Numbers Count?" Philosophical Studies 176(2):297–319.
Baron, Jonathan (2012). "The Point of Normative Models in Judgment and Decision Making." Frontiers in Psychology 3:577. doi: 10.3389/fpsyg.2012.00577.
Basu, Rima (2019). "Radical Moral Encroachment: The Moral Stakes of Racist Beliefs." Philosophical Issues 29(1):9–23.
Basu, Rima (2020). "The Specter of Normative Conflict: Does Fairness Require Inaccuracy?" In Beeghly and Madva (eds.), An Introduction to Implicit Bias (Routledge):191–210.
Basu, Rima (2021). "A Tale of Two Doctrines: Moral Encroachment and Doxastic Wronging." In Jennifer Lackey (ed.), Applied Epistemology (Oxford University Press):99–118.
Becker, Kelly (2007). Epistemology Modalized (Routledge).
Becker, K. and Black, T. (2012). The Sensitivity Principle in Epistemology (Cambridge University Press).
Bedke, Matthew S. (2010). "Developmental Process Reliabilism: on Justification, Defeat, and Evidence." Erkenntnis 73(1):1–17.
Beeghly, Erin (2015). "What is A Stereotype? What is Stereotyping?" Hypatia 30(4):675–91.
Beeghly, Erin and Madva, Alex (eds.) (2020). An Introduction to Implicit Bias (Routledge).
Begby, Endre (2013). "The Epistemology of Prejudice." Thought 2(1):90–9.
Begby, Endre (2018). "Doxastic Morality." Philosophical Topics 46(1):155–72.
Begby, Endre (2021). Prejudice: A Study in Non-Ideal Epistemology (Oxford University Press).
Berstler, Sam (2019). "What's the Good of Language? On the Moral Distinction Between Lying and Misleading." Ethics 130(1):5–31.
Bertrand, Joseph (1889). Calcul des probabilités, 1st edition (Paris).
Bickle, John (2020). "Multiple Realizability." In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Summer 2020 edition), https://plato.stanford.edu/archives/sum2020/entries/multiple-realizability/.
Blanshard, Brand (1974). Reason and Belief (Allen & Unwin).
Bolinger, Renée Jorgensen (2018). "The Rational Impermissibility of Accepting (Some) Racial Generalizations." Synthese 197(6):2415–31.
Bolinger, Renée Jorgensen (2020). "Varieties of Moral Encroachment." Philosophical Perspectives 34(1):5–26.
Bostrom, Nick and Ord, Toby (2006). "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics." Ethics 116(4):656–79.
Boughner, R.L. (2012). "Volunteer bias." In Neil J. Salkind (ed.), Encyclopedia of Research Design (Sage Publications):1609–10.
Brams, Steven and Taylor, Alan (1999). The Win/Win Solution: Guaranteeing Fair Shares to Everyone (W.W. Norton and Company).
Brennan, Geoffrey et al. (2013). Explaining Norms (Oxford University Press).
Broome, John (1984). "Selecting People Randomly." Ethics 95:38–55.
Broome, John (1990). "Fairness." Proceedings of the Aristotelian Society 91:87–102.
Brownstein, Michael (2016). "Attributionism and moral responsibility for implicit bias." Review of Philosophy and Psychology 7(4):765–86.
Brownstein, Michael (2017). "Implicit Bias." In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2017 edition), https://plato.stanford.edu/archives/spr2017/entries/implicit-bias/.
Brownstein, Michael (2018). The Implicit Mind (Oxford University Press).
Brownstein, Michael and Saul, Jennifer (eds.) (2016). Implicit Bias and Philosophy, 2 volumes (Oxford University Press).
Buchak, Lara (2017). Risk and Rationality (Oxford University Press).
Burge, Tyler (1979/[2007]). "Individualism and the Mental." Midwest Studies in Philosophy 4(1):73–122. Reprinted in his Foundations of Mind (Oxford University Press 2007).
Byrd, Nick (2019). "What We Can (And Can't) Infer About Implicit Bias From Debiasing Experiments." Synthese 2:1–29.
Byrne, Alex (2004). "How Hard Are the Skeptical Paradoxes?" Noûs 38(2):299–325.
Carnap, Rudolf (1950). Logical Foundations of Probability (University of Chicago Press).
Carr, E.H. (1961). What is History? (Vintage).
Cassam, Quassim (2015). "Stealthy Vices." Social Epistemology Review and Reply Collective 4(10):19–25.
Castro, Clinton (2019). "What's Wrong with Machine Bias." Ergo 6.
Childress, James F. (1970). "Who Shall Live When Not All Can Live?" Soundings 53:339–55.
Chisholm, Roderick M. and Feehan, Thomas D. (1977). "The Intent to Deceive." Journal of Philosophy 74(3):143–59.
Chomsky, Noam (1959). "Review of Skinner's Verbal Behavior." Language 35:26–58.
Chomsky, Noam (1980). Rules and Representations (Columbia University Press).
Christensen, David (2009). "Disagreement as Evidence: The Epistemology of Controversy." Philosophy Compass 4(5):754–67.
Christensen, David (2011). "Disagreement, Question-Begging and Epistemic Self-Criticism." Philosophers' Imprint 11.
Christensen, David (2019). "Formulating Independence." In Mattias Skipper and Asbjørn Steglich-Petersen (eds.), Higher-Order Evidence: New Essays (Oxford University Press):13–34.
Cohen, G.A. (2012). "Rescuing Conservatism: A Defense of Existing Value." In Finding Oneself in the Other (Princeton University Press):143–74.
Cohen, Stewart (2013). "A Defense of the (Almost) Equal Weight View." In Christensen and Lackey (eds.), The Epistemology of Disagreement: New Essays (Oxford University Press):98–117.
Collingwood, R.G. (1956). The Idea of History (Oxford University Press).
Collins, P.H. (1990). Black Feminist Thought (Routledge).
Comesaña, Juan (2005). "Unsafe Knowledge." Synthese 146(3):395–404.
Confucius (1979). The Analects (Penguin).
Conee, Earl (1982). "Against Moral Dilemmas." Philosophical Review 91(1):87–97.
Crane, Tim (2001). "David Lewis (1941–2001)." The Independent, October 23, 2001.
Crawford, Lindsay (2019). "Believing the Best: On Doxastic Partiality in Friendship." Synthese 196(4):1575–93.
Dancy, Jonathan (2017). "Moral Particularism." In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2017 edition), https://plato.stanford.edu/entries/moral-particularism/#Rel.
Daniels, Norman (1996). Justice and Justification: Reflective Equilibrium in Theory and Practice (Cambridge University Press).
Daniels, Norman (2012). "Reasonable Disagreement about Identified vs. Statistical Victims." Hastings Center Report 41:35–45.
Daniels, Norman (2018). "Reflective Equilibrium." In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2018 edition), https://plato.stanford.edu/archives/spr2018/entries/reflective-equilibrium/.
Danks, David and London, Alex John (2017). "Algorithmic Bias in Autonomous Systems." Proceedings of the 26th International Joint Conference on Artificial Intelligence.
DeRose, Keith (1995). "Solving the Skeptical Problem." Philosophical Review 104(1):1–52.
DeRose, Keith (2017). The Appearance of Ignorance: Knowledge, Skepticism, and Context, volume 2 (Oxford University Press).
DeVito, N. and Goldacre, B. (2019). "Publication Bias." In Badenoch, D., Heneghan, C., and Nunan, D. (eds.), Catalogue of Bias, https://catalogofbias.org/biases/publication-bias/ (Oxford University Press).
Diaconis, P., Holmes, S., and Montgomery, R. (2007). "Dynamical Bias in the Coin Toss." SIAM Review 49(2):211–35.
Diamond, Peter (1967). "Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility: A Comment." Journal of Political Economy 75:765–6.
DiBella, Nicholas (forthcoming). "Indifference Defended."
Dingfelder, Sadie (2010). "The First Modern Psychology Study." Monitor on Psychology 41(7):30, https://www.apa.org/monitor/2010/07-08/franklin.
Dretske, Fred ([1970]2000). "Epistemic Operators." Reprinted in his Perception, Knowledge, and Belief: Selected Essays (Cambridge University Press):30–47.
Dretske, Fred ([1991]2000). "Two Conceptions of Knowledge: Rational vs. Reliable Belief." Reprinted in his Perception, Knowledge, and Belief (Cambridge University Press):80–93.
Dretske, Fred (2014). "Reply to Hawthorne." In Steup et al. (eds.), Contemporary Debates in Epistemology (Wiley Blackwell):56–9.
Dror, Lidal (forthcoming). "Is There An Epistemic Advantage to Being Oppressed?" Noûs.
Du Bois, W.E.B. (1903). The Souls of Black Folk (McClurg and Company).
Earman, John (1992). Bayes or Bust? (Bradford).
Easwaran, Kenny (2011a). "Bayesianism I: Introduction and Arguments in Favor." Philosophy Compass 6(5):312–20.
Easwaran, Kenny (2011b). "Bayesianism II: Applications and Criticisms." Philosophy Compass 6(5):321–32.
Ehrlinger, J., Gilovich, T., and Ross, L. (2005). "Peering Into the Bias Blind Spot: People's Assessment of Bias in Themselves and Others." Personality and Social Psychology Bulletin 31:680–92.
Elga, Adam (2007). "Reflection and Disagreement." Noûs 41(3):478–502.
Elstein, Daniel and Hurka, Thomas (2009). "From Thick to Thin: Two Moral Reduction Plans." Canadian Journal of Philosophy 39(4):515–36.
Elster, Jon (1989). Solomonic Judgments: Studies in the Limitations of Rationality (Cambridge University Press).
Epley, N. and Dunning, D. (2000). "Feeling 'Holier Than thou': Are Self-Serving Assessments Produced by Errors in Self- or Social Prediction?" Journal of Personality and Social Psychology 79(6):861–75.
Ericson, K. and Fuster, A. (2014). "The Endowment Effect." Annual Review of Economics 6(1):555–79.
Eva, Benjamin (2019). "Principles of Indifference." Journal of Philosophy 116(7):390–411.
Fantl, Jeremy and McGrath, Matthew (2002). "Evidence, Pragmatics, and Justification." Philosophical Review 111(1):67–94.
Faucher, Luc (2016). "Revisionism and Moral Responsibility for Implicit Attitudes." In Michael Brownstein and Jennifer Saul (eds.), Implicit Bias and Philosophy, volume 2 (Oxford University Press):115–45.
Fazelpour, S. and Danks, D. (2021). "Algorithmic Bias: Senses, Sources, Solutions." Philosophy Compass 16(8).
Feldman, Richard (2006). "Epistemological Puzzles about Disagreement." In Stephen Hetherington (ed.), Epistemology Futures (Oxford University Press):216–36.
Feldman, Richard (2007). "Reasonable Religious Disagreements." In Louise Antony (ed.), Philosophers Without Gods (Oxford University Press):194–214.
Firth, Roderick (1978). "Are Epistemic Concepts Reducible to Ethical Concepts?" In Alvin Goldman and Jaegwon Kim (eds.), Values and Morals (D. Reidel Publishing Co.).
Forgas, J.P. and Laham, S.M. (2017). "Halo Effects." In R.F. Pohl (ed.), Cognitive Illusions: Intriguing Phenomena in Thinking, Judgment and Memory (Routledge/Taylor & Francis Group):276–90.
Franco, Annie, Malhotra, N., and Simonovits, G. (2014). "Publication Bias In the Social Sciences: Unlocking the File Drawer." Science 345(6203):1502–5. doi: 10.1126/science.1255484.
Frankish, Keith (2016). "Playing Double: Implicit Bias, Dual Levels, and Self-Control." In Michael Brownstein and Jennifer Saul (eds.), Implicit Bias and Philosophy, volume 1 (Oxford University Press):23–46.
Fricker, Elizabeth (2012). "Stating and Insinuating." Aristotelian Society Supplementary Volume 86(1):61–94.
Fricker, Miranda (2007). Epistemic Injustice: Power and the Ethics of Knowing (Oxford University Press).
Fritz, James (2017). "Pragmatic Encroachment and Moral Encroachment." Pacific Philosophical Quarterly 98(S1):643–61.
Funkhouser, Eric (2007). "Multiple Realizability." Philosophy Compass 2(2):303–15.
Gadamer, Hans-Georg (1960/2013). Truth and Method (Bloomsbury Publishers).
Gallow, J. (2018). "No One Can Serve Two Epistemic Masters." Philosophical Studies 175(10):2389–98.
Garcia, Jorge L.A. (1996). "The Heart of Racism." Journal of Social Philosophy 27(1):5–46.
Garcia, Jorge L.A. (1997a). "Current Conceptions of Racism: A Critical Examination of Some Recent Social Philosophy." Journal of Social Philosophy 28:5–42.
Garcia, Jorge L.A. (1997b). "Racism as a Model for Understanding Sexism." In Naomi Zack (ed.), Race/Sex: Their Sameness, Difference, and Interplay (Routledge):45–59.
Garcia, Jorge L.A. (1999). "Philosophical Analysis and the Moral Concept of Racism." Philosophy and Social Criticism 25:1–32.
Garcia, Jorge L.A. (2001a). "Three Sites for Racism: Social Structurings, Valuings, and Vice." In Michael Levine and Tomas Pataki (eds.), Racism in Mind (Cornell University Press):35–55.
Garcia, Jorge L.A. (2001b). "Racism and Racial Discourse." Philosophical Forum 32.
Gardiner, Georgi (2018). "Evidentialism and Moral Encroachment." In Kevin McCain (ed.), Believing in Accordance with the Evidence (Springer Verlag).
Gawronski, B. and Bodenhausen, G.V. (2006). "Associative and Propositional Processes in Evaluation: An Integrative Review of Implicit and Explicit Attitude Change." Psychological Bulletin 132(5):692.
Gendler, Tamar Szabó (2008). "Alief and Belief." Journal of Philosophy 105(10):634–63.
Gendler, Tamar Szabó (2011). "On the Epistemic Costs of Implicit Bias." Philosophical Studies 156(1):33–63.
Gift, Paul (2015). "Sequential Judgment Effects in the Workplace: Evidence From the National Basketball Association." Economic Inquiry 53:1259–74, https://doi.org/10.1111/ecin.12186.
Gigerenzer, Gerd (1991). "How to Make Cognitive Illusions Disappear: Beyond 'Heuristics and Biases.'" European Review of Social Psychology 2:83–115.
Gigerenzer, Gerd (2002). Adaptive Thinking: Rationality in the Real World (Oxford University Press).
Gigerenzer, Gerd (2008). Rationality for Mortals: How People Cope with Uncertainty (Oxford University Press).
Gigerenzer, G. and Brighton, H. (2009). "Homo Heuristicus: Why Biased Minds Make Better Inferences." Topics in Cognitive Science 1(1):107–43.
Gigerenzer, G., Todd, P., and the ABC Research Group (1999). Simple Heuristics that Make Us Smart (Oxford University Press).
Gilovich, Thomas (1991). How We Know What Isn't So (The Free Press).
Gilovich, T., Griffin, D., and Kahneman, D. (2002). Heuristics and Biases: The Psychology of Intuitive Judgment (Cambridge University Press).
Gilovich, T. et al. (2016). Social Psychology, 4th edition (W.W. Norton).
Glasgow, Joshua (2009). "Racism as Disrespect." Ethics 120:64–93.
Glasgow, Joshua (2016). "Alienation and Responsibility." In Michael Brownstein and Jennifer Saul (eds.), Implicit Bias and Philosophy, volume 2 (Oxford University Press):37–61.
Goldberg, Sanford (2019). "Against Epistemic Partiality in Friendship: Value-Reflecting Reasons." Philosophical Studies 176(8):2221–42.
Goldman, Alvin (1979). "What is Justified Belief?" In George Pappas (ed.), Justification and Knowledge (Reidel):1–25.
Goldman, Alvin (1986). Epistemology and Cognition (Harvard University Press).
Goldman, Alvin (2001). "Experts: Which Ones Should You Trust?" Philosophy and Phenomenological Research 63(1):85–110.
Goldman, Alvin and Beddor, Bob (2016). "Reliabilist Epistemology." In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 edition), https://plato.stanford.edu/archives/win2016/entries/reliabilism/.
Goodman, Nelson (1955). Fact, Fiction, and Forecast (Harvard University Press).
Goodwin, Barbara (2005). Justice by Lottery, 2nd edition (Imprint Academic).
Green, Stuart (2001). "Lying, Misleading, and Falsely Denying: How Moral Concepts Inform the Law of Perjury, Fraud, and False Statements." Hastings Law Journal 53:157–212.
Greenwald, A.G. and Banaji, M.R. (1995). "Implicit Social Cognition: Attitudes, Self-Esteem, and Stereotypes." Psychological Review 102(1):4–27.
Greenwald, A.G. and Banaji, M.R. (2013). Blindspot: Hidden Biases of Good People (Delacorte Press).
Grice, H.P. (1989). Studies in the Way of Words (Harvard University Press).
Griffiths, T.L., Kalish, M.L., and Lewandowsky, S. (2008). "Theoretical and Experimental Evidence for the Impact of Inductive Biases on Cultural Evolution." Philosophical Transactions of the Royal Society 363:3503–14.
Grundmann, Thomas (forthcoming). "Experts: What Are They and How Can Laypeople Identify Them?" In Jennifer Lackey and Aidan McGlynn (eds.), The Oxford Handbook of Social Epistemology (Oxford University Press).
Hahn, U. and Harris, A.J.L. (2014). "What Does it Mean to be Biased: Motivated Reasoning and Rationality." In B.H. Ross (ed.), The Psychology of Learning and Motivation, volume 61 (Elsevier Academic Press):41–102.
Hamilton, Mark (2011). "The Moral Ambiguity of the Makeup Call." Journal of the Philosophy of Sport 38(2):212–28, doi: 10.1080/00948705.2011.10510423.
Hansen, Katherine et al. (2014). "People Claim Objectivity After Knowingly Using Biased Strategies." Personality and Social Psychology Bulletin 40(6):691–9, doi: 10.1177/0146167214523476.
Hanson, Norwood Russell (1958). Patterns of Discovery (Cambridge University Press).
Harman, Gilbert (1986). Change in View (MIT Press).
Haslanger, Sally (2011). "Ideology, Generics, and Common Ground." In Charlotte Witt (ed.), Feminist Metaphysics (Springer Verlag):179–207.
Haslanger, Sally (2015). "Social Structure, Narrative, and Explanation." Canadian Journal of Philosophy 45(1):1–15.
Haslanger, Sally (2016). "What Is a (Social) Structural Explanation?" Philosophical Studies 173(1):113–30.
Hawley, Katherine (2014). "Partiality and Prejudice in Trusting." Synthese 191(9):2029–45.
Hawthorne, John (2014). "The Case for Closure." In Steup et al. (eds.), Contemporary Debates in Epistemology, 2nd edition (Wiley-Blackwell):40–56.
Hazlett, Allan (2013). A Luxury of the Understanding: On the Value of True Belief (Oxford University Press).
Hedden, Brian (2019). "Hindsight Bias Is Not A Bias." Analysis 79(1):43–52.
Hedden, Brian (2021). "On Statistical Criteria of Algorithmic Fairness." Philosophy and Public Affairs 49(2):209–31.
Henderson, David and Horgan, Terence (2001). "Practicing Safe Epistemology." Philosophical Studies 102(3):227–58.
Henning, Tim (2015). "From Choice to Chance? Saving People, Fairness, and Lotteries." Philosophical Review 124(2):169–206.
Hohwy, Jakob (2013). The Predictive Mind (Oxford University Press).
Holroyd, J. (2012). "Responsibility for Implicit Bias." In M. Crouch and L. Schwartzman (eds.), Journal of Social Philosophy, Special Issue: Gender, Implicit Bias and Philosophical Methodology 43:274–306.
Holroyd, J. (2016). "What Do We Want from a Model of Implicit Cognition?" Proceedings of the Aristotelian Society 116(2):153–79.
Holroyd, J. (2017). "Responsibility for Implicit Bias." Philosophy Compass 12(3):1–13.
Holroyd, Jules and Kelly, Dan (2016). "Implicit Bias, Character and Control." In Jonathan Webber and Alberto Masala (eds.), From Personality to Virtue (Oxford University Press):106–33.
Holroyd, Jules, Scaife, Robin, and Stafford, Tom (2017). "What is Implicit Bias?" Philosophy Compass 12(10).
Holroyd, Jules and Sweetman, Joseph (2016). "The Heterogeneity of Implicit Bias." In Michael Brownstein and Jennifer Saul (eds.), Implicit Bias and Philosophy, volume 1 (Oxford University Press):80–103.
Hooks, B. (1984). Feminist Theory (South End Press).
Horwich, Paul (1982). Probability and Evidence (Cambridge University Press).
Huebner, Bryce (2016). “Implicit Bias, Reinforcement Learning, and Scaffolded Moral Cognition.” In Michael Brownstein and Jennifer Saul (eds.), Implicit Bias and Philosophy, volume 1 (Oxford University Press):47–79.
Huemer, Michael (2008). “Revisionary Intuitionism.” Social Philosophy and Policy 25(1):368–92.
Huemer, Michael (2009). “Explanationist Aid for the Theory of Inductive Logic.” British Journal for the Philosophy of Science 60(2):345–75.
Hume, David (1748). An Enquiry Concerning Human Understanding (many editions).
Hunter, David (2011). “Alienated Belief.” Dialectica 65(2):221–40.
Icard, Thomas (2021). “Why Be Random?” Mind 130(517):111–39.
Jaworski, William (2020). “Mind and Multiple Realizability.” The Internet Encyclopedia of Philosophy, https://www.iep.utm.edu/mult-rea/. Accessed 5 January 2020.
Jefferson, Thomas (1814). “George Washington,” https://www.donparrish.com/EssayWashington.html. Accessed 5 January 2020.
Johnson, Gabbrielle (2020). “The Structure of Bias.” Mind 129(516):1193–236.
Johnson, Gabbrielle (2021). “Algorithmic Bias: On the Implicit Biases of Social Technology.” Synthese 198:9941–61.
Johnson, Gabbrielle (forthcoming). “Are Algorithms Value-Free? Feminist Theoretical Virtues in Machine Learning.” Journal of Moral Philosophy.
Jussim, Lee (2012). Social Perception and Social Reality: Why Accuracy Dominates Bias and Self-Fulfilling Prophecy (Oxford University Press).
Kahneman, Daniel, Knetsch, Jack L., and Thaler, Richard H. (1991). “Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias.” Journal of Economic Perspectives 5(1):193–206.
Kahneman, Daniel, Sibony, Olivier, and Sunstein, Cass (2021). Noise (Little, Brown Spark).
Kahneman, D., Slovic, P., and Tversky, A. (1982). Judgment Under Uncertainty: Heuristics and Biases (Cambridge University Press).
Kamm, Frances (1993). Morality, Mortality, volume 1 (Oxford University Press).
Kant, Immanuel. Lectures on Ethics.
Karabel, J. (2005). The Chosen: The Hidden History of Admission and Exclusion at Harvard, Yale, and Princeton (Houghton Mifflin Harcourt).
Karlan, Brett (2020). Rationality, Bias, and Mind: Essays on Epistemology and Cognitive Science. Princeton University Doctoral Dissertation.
Kawall, Jason (2013). “Friendship and Epistemic Norms.” Philosophical Studies 165(2):349–70.
Keller, Simon (2004). “Friendship and Belief.” Philosophical Papers 33(3):329–51.
Keller, Simon (2018). “Belief for Someone Else’s Sake.” Philosophical Topics 46(1):19–35.
Kelly, Thomas (2004). “Sunk Costs, Rationality, and Acting for the Sake of the Past.” Noûs 38(1):60–85.
Kelly, Thomas (2005a). “The Epistemic Significance of Disagreement.” In John Hawthorne and Tamar Gendler (eds.), Oxford Studies in Epistemology 1 (Oxford University Press):167–96.
Kelly, Thomas (2005b). “Moorean Facts and Belief Revision, or Can the Skeptic Win?” Philosophical Perspectives 19:179–209.
Kelly, Thomas (2008). “Common Sense as Evidence: Against Revisionary Ontology and Skepticism.” Midwest Studies in Philosophy 32(1):53–78.
Kelly, Thomas (2010). “Peer Disagreement and Higher-Order Evidence.” In R. Feldman and T.A. Warfield (eds.), Disagreement (Oxford University Press):111–74.
Kelly, Thomas (2013). “Evidence Can Be Permissive.” In Matthias Steup and John Turri (eds.), Contemporary Debates in Epistemology, 2nd edition (Blackwell):298–312.
Kelly, Thomas (2013). “Disagreement and the Burdens of Judgment.” In David Christensen and Jennifer Lackey (eds.), The Epistemology of Disagreement: New Essays (Oxford University Press).
Kelly, Thomas (2014). “Quine and Epistemology.” In Gilbert Harman and Ernie Lepore (eds.), A Companion to W.V.O. Quine (Blackwell):17–37.
Kelly, Thomas (2022). “Evidence.” In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2022 edition), https://plato.stanford.edu/archives/fall2022/entries/evidence/.
Kelly, Thomas (in preparation). Evidentialism.
Kelly, Daniel and Roedder, Erica (2008). “Racial Cognition and the Ethics of Implicit Bias.” Philosophy Compass 3(3):522–40.
Kelly, Thomas and McGrath, Sarah (2010). “Is Reflective Equilibrium Enough?” Philosophical Perspectives 24:325–59.
Kornblith, Hilary (1999). “Distrusting Reason.” Midwest Studies in Philosophy 23(1):181–96.
Kornblith, Hilary (2010). “Belief in the Face of Controversy.” In Richard Feldman and Ted A. Warfield (eds.), Disagreement (Oxford University Press).
Kornhauser, Lewis and Sager, Lawrence G. (1988). “Just Lotteries.” Social Science Information 27:483–516.
Kripke, Saul (2011). “Nozick on Knowledge.” In his Philosophical Troubles (Oxford University Press):162–224.
Kruger, J. (1999). “Lake Wobegon Be Gone! The ‘Below-Average Effect’ and the Egocentric Nature of Comparative Ability Judgments.” Journal of Personality and Social Psychology 77(2):221–32.
Krugman, Paul (2000/2020). “Bait-and-Switch.” Reprinted in his Arguing with Zombies (W.W. Norton and Company).
Kuhn, Thomas (1963). “The Function of Dogma in Scientific Research.” In A. Crombie (ed.), Scientific Change (Heinemann):347–69.
Kuhn, Thomas (1970). The Structure of Scientific Revolutions (University of Chicago Press).
Kukla, Quill (2006). “Objectivity and Perspective in Empirical Knowledge.” Episteme 3:80–95.
Lackey, Jennifer (2010). “What Should We Do When We Disagree?” Oxford Studies in Epistemology 3.
Lackey, Jennifer (2013). “Disagreement and Belief Dependence: Why Numbers Matter.” In David Christensen and Jennifer Lackey (eds.), The Epistemology of Disagreement (Oxford University Press):243–68.
Lane, David M. et al., Introduction to Statistics. Online Statistics Education: A Multimedia Course of Study, http://onlinestatbook.com/. Accessed 5 January 2020.
Leary, Stephanie (2021). “Banks, Bosses, and Bears: A Pragmatist Argument Against Encroachment.” Philosophy and Phenomenological Research.
Lemos, Noah (2004). Common Sense: A Contemporary Defense (Cambridge University Press).
Leslie, Sarah-Jane (2008). “Generics: Cognition and Acquisition.” Philosophical Review 117(1):1–47.
Leslie, Sarah-Jane and Lerner, Adam (2016). “Generic Generalizations.” In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2016 edition), https://plato.stanford.edu/entries/generics/.
Levine, Michael P. and Pataki, Tomas (eds.) (2004). Racism in Mind (Cornell University Press).
Levy, Neil (2012). “Consciousness, Implicit Attitudes, and Moral Responsibility.” Noûs 48(1):21–40.
Levy, Neil (2015). “Neither Fish nor Fowl: Implicit Attitudes as Patchy Endorsements.” Noûs 49(4):800–23.
Levy, Neil (2016). “Implicit Bias and Moral Responsibility: Probing the Data.” Philosophy and Phenomenological Research 94(1):3–26.
Lewis, David (1983). “Scorekeeping in a Language Game.” Reprinted in his Philosophical Papers, volume 1 (Oxford University Press):233–49.
Lewis, David (1999). “Elusive Knowledge.” Reprinted in his Papers in Metaphysics and Epistemology (Cambridge University Press):418–45.
Lipton, Peter (2004). Inference to the Best Explanation, 2nd edition (Routledge).
List, Christian and Pettit, Philip (2011). Group Agency (Oxford University Press).
Lord, Errol (2014). “From Independence to Conciliationism: An Obituary.” Australasian Journal of Philosophy 92(2):365–77.
Lucretius, De Rerum Natura.
Lycan, William (2001). “Moore Against the New Skeptics.” Philosophical Studies 103(1):35–53.
Machery, Edouard (2016). “De-Freuding Implicit Attitudes.” In Michael Brownstein and Jennifer Saul (eds.), Implicit Bias and Philosophy, volume 1 (Oxford University Press):104–29.
MacIntyre, Alasdair (1994). “Truthfulness, Lies, and Moral Philosophers: What Can We Learn from Mill and Kant?” The Tanner Lectures on Human Values, Princeton University, April 6 and 7.
Madva, Alex (2016a). “A Plea for Anti-Anti-Individualism: How Oversimple Psychology Misleads Social Policy.” Ergo 3(27):701–28.
Madva, Alex (2016b). “Why Implicit Attitudes are (Probably) Not Beliefs.” Synthese 193(8):2659–84.
Madva, Alex (2018). “Implicit Bias, Moods, and Moral Responsibility.” Pacific Philosophical Quarterly 99(S1):53–78.
Majors, Brad and Sawyer, Sarah (2005). “The Epistemological Argument for Content Externalism.” Philosophical Perspectives 19(1):257–80.
Mandelbaum, Eric (2016). “Attitude, Inference, Association: On the Propositional Structure of Implicit Bias.” Noûs 50(3):629–58.
Manley, David and Wasserman, Ryan (2007). “A Gradable Approach to Dispositions.” Philosophical Quarterly 57(226):68–75.
Marcus, Ruth Barcan (1980). “Moral Dilemmas and Consistency.” Journal of Philosophy 77(3):121–36.
Marion, Mathieu (2016). “John Cook Wilson.” In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2016 edition), https://plato.stanford.edu/archives/spr2016/entries/wilson/.
Marzilli Ericson, Keith and Fuster, Andreas (2014). “The Endowment Effect.” Annual Review of Economics 6:555–79.
Mason, Cathy (2020). “The Epistemic Demands of Friendship: Friendship as Inherently Knowledge-Involving.” Synthese 199(1–2):2439–55.
Mason, Elinor (2018). “Respecting Each Other and Taking Responsibility for our Biases.” In M. Oshana, K. Hutchison, and C. Mackenzie (eds.), Social Dimensions of Moral Responsibility (Oxford University Press):163–84.
Matheson, Jonathan (2015). The Epistemic Significance of Disagreement (Palgrave).
McConnell, Terrance (2018). “Moral Dilemmas.” In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2018 edition), https://plato.stanford.edu/archives/fall2018/entries/moral-dilemmas/.
McGrath, Sarah (2019). Moral Knowledge (Oxford University Press).
Menand, Louis (2001). “Morton, Agassiz, and the Origins of Scientific Racism in the United States.” Journal of Blacks in Higher Education 34 (Winter 2001/2002):110–13.
Merton, Robert K. (1968). “The Matthew Effect in Science.” Science 159:56–63.
Mill, John Stuart (1978 [1859]). On Liberty (Hackett).
Mladina, P. and Grant, C. (2016). “From Behavioral Bias to Rational Investing,” https://www.northerntrust.com/documents/commentary/investment-commentary/behavioral-bias.pdf. Accessed 5 January 2020.
Moller, Dan (2013). “The Epistemology of Popularity and Incentives.” Thought 2:148–56.
Moon, Andrew (2018). “Independence and New Ways to Remain Steadfast in the Face of Disagreement.” Episteme 15(1):65–79.
Moore, G.E. (1993). Selected Writings. Edited by Thomas Baldwin (Routledge).
Moss, Sarah (2018). “Moral Encroachment.” Proceedings of the Aristotelian Society 118(2):177–205.
Munton, Jessie (2022). “Bias in a Biased System: Visual Perceptual Prejudice.” In Nathan Ballantyne and David Dunning (eds.), Bias, Reason, and Enquiry (Oxford University Press).
Nasby, W., Hayden, B., and DePaulo, B.M. (1980). “Attributional Bias Among Aggressive Boys to Interpret Unambiguous Social Stimuli as Displays of Hostility.” Journal of Abnormal Psychology 89(3):459–68, https://doi.org/10.1037/0021-843X.89.3.459.
Nebel, Jacob M. (2015). “Status Quo Bias, Rationality, and Conservatism about Value.” Ethics 125:449–76.
Nelson, M.T. (2001). “On the Lack of ‘True Philosophic Spirit’ in Aquinas: Commitment v. Tracking in Philosophic Method.” Philosophy 76(296):283–96.
Neta, Ram and Rohrbaugh, Guy (2004). “Luminosity and the Safety of Knowledge.” Pacific Philosophical Quarterly 85(4):396–406.
Nickerson, Raymond S. (1998). “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises.” Review of General Psychology 2(2):175–220.
Nisbett, R.E. and Wilson, T.D. (1977). “Telling More Than We Can Know: Verbal Reports on Mental Processes.” Psychological Review 84(3):231–59.
Novick, Peter (1988). That Noble Dream: The “Objectivity Question” and the American Historical Profession (Cambridge University Press).
Nozick, Robert (1981). Philosophical Explanations (Harvard University Press).
Nozick, Robert (1993). The Nature of Rationality (Princeton University Press).
Nozick, Robert (2001). Invariances (Harvard University Press).
Odegaard, B., Wozny, D.R., and Shams, L. (2015). “Biases in Visual, Auditory, and Audiovisual Perception of Space.” PLoS Comput Biol 11(12), https://doi.org/10.1371/journal.pcbi.1004649. Accessed 5 January 2020.
Oppy, Graham (2001). “On the Lack of True Philosophic Spirit in Aquinas.” Philosophy 76(298):615–24.
Pace, Michael (2011). “The Epistemic Value of Moral Considerations: Justification, Moral Encroachment, and James’ ‘Will To Believe.’” Noûs 45(2):239–68.
Palmer, Stephen E. (1999). Vision Science: Photons to Phenomenology (The MIT Press).
Parfit, Derek (1984). Reasons and Persons (Clarendon Press).
Pedowitz, Lawrence (2008). “Report to the Board of Governors of the National Basketball Association,” https://www.scribd.com/document/71684868/PedowitzReport. Accessed 5 January 2020.
Pettigrew, Richard G. (2016). “Accuracy, Risk, and the Principle of Indifference.” Philosophy and Phenomenological Research 92(1):35–59.
Plantinga, Alvin (1993). Warrant: The Current Debate (Oxford University Press).
Plato, Collected Works.
Polanyi, Michael (1958). Personal Knowledge (University of Chicago Press).
Pollak, O. (1983). “Antisemitism, the Harvard Plan, and the Roots of Reverse Discrimination.” Jewish Social Studies 45(2):113–22.
Pritchard, Duncan (2007). “Anti-luck Epistemology.” Synthese 158(3):277–97.
Pritchard, Duncan (2009). “Safety-Based Epistemology.” Journal of Philosophical Research 34:33–45.
Pronin, E. (2007). “Perception and Misperception of Bias in Human Judgment.” Trends in Cognitive Sciences 11:37–43.
Pronin, E., Gilovich, T., and Ross, L. (2004). “Objectivity in the Eye of the Beholder: Divergent Perceptions of Bias in Self Versus Others.” Psychological Review 111(3):781–99.
Pronin, E. and Kugler, M.B. (2007). “Valuing Thoughts, Ignoring Behavior: The Introspection Illusion as a Source of the Bias Blind Spot.” Journal of Experimental Social Psychology 43:565–78.
Pronin, E., Lin, D.Y., and Ross, L. (2002). “The Bias Blind Spot: Perceptions of Bias in Self versus Others.” Personality and Social Psychology Bulletin 28:369–81.
Pryor, James (2000). “The Skeptic and the Dogmatist.” Noûs 34(4):517–49.
Putnam, Hilary (1967). “Psychological Predicates.” In Capitan and Merrill (eds.), Art, Mind, and Religion (University of Pittsburgh Press). Often reprinted under the title “The Nature of Mental States.”
Putnam, Hilary (1983). “Foreword to the Fourth Edition.” In Nelson Goodman, Fact, Fiction, and Forecast (Harvard University Press):vii–xvi.
Quine, W.V. (1969). “Epistemology Naturalized.” In his Ontological Relativity and Other Essays (Columbia University Press):114–38.
Quine, W.V. (1975). “The Nature of Natural Knowledge.” In Samuel Guttenplan (ed.), Mind and Language (Oxford University Press):441–50.
Rawls, John (1971). A Theory of Justice (Harvard University Press).
Rawls, John (2001 [1975]). “The Independence of Moral Theory.” Reprinted in his Collected Papers (Harvard University Press):286–302.
Rees, Clea F. (2014). “Better Lie!” Analysis 74(1):59–64.
Rescorla, Michael (2015). “Bayesian Perceptual Psychology.” In Mohan Matthen (ed.), The Oxford Handbook of the Philosophy of Perception (Oxford University Press):694–716.
Rey, Georges (1997). Contemporary Philosophy of Mind: A Contentiously Classical Approach (Wiley-Blackwell).
Ritchie, Katherine (2019). “Should We Use Racial and Gender Generics?” Thought 8(1):33–41.
Roberts, Debbie (2013). “Thick Concepts.” Philosophy Compass 8(8):677–88.
Robertson, C. and Kesselheim, A. (2016). Blinding as a Solution to Bias (Elsevier).
Robinson, R.J. et al. (1995). “Actual Versus Assumed Differences in Construal: ‘Naïve Realism’ in Intergroup Perception and Conflict.” Journal of Personality and Social Psychology 68:404–17.
Roese, N. and Vohs, K. (2012). “Hindsight Bias.” Perspectives on Psychological Science 7(5):411–26.
Rosen, Gideon (2022). “Accountability and Implicit Bias: A Study in Skepticism about Responsibility.” In Vargas and Doris (eds.), The Oxford Handbook of Moral Psychology (Oxford University Press):947–65.
Rosenbaum, Stephen E. (1989). “The Symmetry Argument: Lucretius against the Fear of Death.” Philosophy and Phenomenological Research 50(2):353–73.
Ross, L., Ehrlinger, J., and Gilovich, T. (2016). “The Bias Blind Spot and Its Implications.” In Kimberly Elsbach and Anna Kayes (eds.), Contemporary Organizational Behavior (Pearson Prentice Hall).
Ross, L. and Ward, A. (1996). “Naive Realism in Everyday Life: Implications for Social Conflict and Misunderstanding.” In E.S. Reed, E. Turiel, and T. Brown (eds.), Values and Knowledge (Lawrence Erlbaum Associates, Inc.):103–35.
Ross, M. and Sicoly, F. (1979). “Egocentric Biases in Availability and Attribution.” Journal of Personality and Social Psychology 37(3):322–36.
Russell, Bertrand (1945). A History of Western Philosophy (Simon and Schuster).
Sainsbury, R.M. (1997). “Easy Possibilities.” Philosophy and Phenomenological Research 57(4):907–19.
Samuelson, William and Zeckhauser, Richard (1988). “Status Quo Bias in Decision Making.” Journal of Risk and Uncertainty 1(1):7–59.
Saphire, Richard (1997). “Religion and Recusal.” Marquette Law Review 81(2):351–64.
Saul, Jennifer (2012). Lying, Misleading, and What is Said: An Exploration in Philosophy of Language and in Ethics (Oxford University Press).
Saul, Jennifer (2013). “Scepticism and Implicit Bias.” Disputatio 5(37):243–63.
Saul, Jennifer (2017). “Are Generics Especially Pernicious?” Inquiry:1–18.
Saunders, Ben (2008). “The Equality of Lotteries.” Philosophy 83:359–72.
Sayre-McCord, Geoffrey (draft). “A Moral Argument Against Moral Dilemmas.”
Scalia, Antonin (1997). A Matter of Interpretation: Federal Courts and the Law (Princeton University Press).
Scanlon, Thomas (1998). What We Owe To Each Other (Harvard University Press).
Scanlon, Thomas (2002). “Rawls on Justification.” In Samuel Freeman (ed.), The Cambridge Companion to Rawls (Cambridge University Press):139–67.
Scanlon, Thomas (2014). Being Realistic About Reasons (Oxford University Press).
Schieder, Jessica and Gould, Elise (2016). “‘Women’s Work’ and the Gender Pay Gap.” Economic Policy Institute, https://www.epi.org/publication/womens-work-and-the-gender-pay-gap-how-discrimination-societal-norms-and-other-forces-affect-womens-occupational-choices-and-their-pay/. Accessed 5 January 2020.
Schroeder, Mark (2012). “Stakes, Withholding, and Pragmatic Encroachment on Knowledge.” Philosophical Studies 160(2):265–85.
Schroeder, Mark (2018). “When Beliefs Wrong.” Philosophical Topics 46(1):115–27.
Schroeder, Mark (2021). Reasons First (Oxford University Press).
Schwitzgebel, Eric (2010). “Acting Contrary to Our Professed Beliefs or the Gulf Between Occurrent Judgment and Dispositional Belief.” Pacific Philosophical Quarterly 91(4):531–53.
Scopelliti, Irene et al. (2015). “Bias Blind Spot: Structure, Measurement, and Consequences.” Management Science 61(10):2468–86.
Setiya, Kieran (2012). Knowing Right From Wrong (Oxford University Press).
Sher, George (1980). “What Makes a Lottery Fair?” Noûs 14(2):203–16.
Siegel, Susanna (2017). The Rationality of Perception (Oxford University Press).
Siegler, Frederick A. (1966). “Lying.” American Philosophical Quarterly 3(2):128–36.
Singer, Peter (1974). “Sidgwick and Reflective Equilibrium.” Monist 58:490–517.
Sorensen, Roy (1988). Blindspots (Oxford University Press).
Sosa, Ernest (1999a). “How to Defeat Opposition to Moore.” Philosophical Perspectives 13:141–53.
Sosa, Ernest (1999b). “How Must Knowledge Be Modally Related to What Is Known?” Philosophical Topics 26(1–2):373–84.
Sosa, Ernest (2000). “Skepticism and Contextualism.” Philosophical Issues 10:1–18.
Sosa, Ernest (2010). “The Epistemology of Disagreement.” In Adrian Haddock, Alan Millar, and Duncan Pritchard (eds.), Social Epistemology (Oxford University Press).
Sowell, Thomas (1985). Marxism: Philosophy and Economics (William Morrow).
Sowell, Thomas (2011). Economic Facts and Fallacies, 2nd edition (Basic Books).
Sterling, T. (1959). “Publication Decisions and Their Possible Effects on Inferences Drawn from Tests of Significance—Or Vice Versa.” Journal of the American Statistical Association 54(285):30–4.
Stone, Peter (2011). The Luck of the Draw: The Role of Lotteries in Decision Making (Oxford University Press).
Stroud, Sarah (2006). “Epistemic Partiality in Friendship.” Ethics 116(3):498–524.
Strudler, Alan (2010). “The Distinctive Wrong in Lying.” Ethical Theory and Moral Practice 13(2):171–9.
Sullivan, Meghan (2018). Time Biases (Oxford University Press).
Sullivan-Bissett, Ema (2019). “Biased By Our Imaginings.” Mind and Language 34(5):627–47.
Swedroe, Larry (2019). “Recency Bias Erodes Discipline and Destroys Investor Returns: Reconsidering Reinsurance,” https://buckinghamadvisor.com/recency-bias-erodes-discipline-and-destroys-investor-returns-reconsidering-reinsurance/.
Tessman, Lisa (2015). Moral Failure: On the Impossible Demands of Morality (Oxford University Press).
Tessman, Lisa (2017). When Doing the Right Thing is Impossible (Oxford University Press).
Thaler, R.H. (1980). “Toward a Positive Theory of Consumer Choice.” Journal of Economic Behavior & Organization 1(1):39–60.
Thaler, R.H. and Sunstein, Cass (2009/2021). Nudge (Penguin).
Thibaut, John and Walker, Laurens (1975). Procedural Justice: A Psychological Analysis (Lawrence Erlbaum Associates).
Titelbaum, Michael (2022). Fundamentals of Bayesian Epistemology (Oxford University Press).
Toole, Briana (2021). “Recent Work in Standpoint Epistemology.” Analysis 81(2):338–50.
Ture, Kwame and Hamilton, Charles (1992 [1967]). Black Power: The Politics of Liberation (Vintage).
Tversky, A. and Kahneman, D. (1974). “Judgment under Uncertainty: Heuristics and Biases.” Science 185(4157):1124–31.
Tversky, A. and Kahneman, D. (1983). “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.” Psychological Review 90(4):293–315.
Unger, Peter (1975). Ignorance: A Case for Skepticism (Oxford University Press).
Uttich, Kevin and Lombrozo, Tania (2010). “Norms Inform Mental State Ascriptions: A Rational Explanation for the Side-Effect Effect.” Cognition 116(1):87–100.
Van Fraassen, Bas (1989). Laws and Symmetry (Clarendon).
Vargas, M. (2017). “Implicit Bias, Responsibility and Moral Ecology.” In D. Shoemaker (ed.), Oxford Studies in Agency and Responsibility (Oxford University Press).
Vavova, Katia (2014). “Confidence, Evidence, and Disagreement.” Erkenntnis 79(S1):173–83.
Väyrynen, Pekka (2019). “Thick Ethical Concepts.” In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2017 edition), https://plato.stanford.edu/archives/fall2017/entries/thick-ethical-concepts/.
Veber, Michael (2021). “The Epistemology of No Platforming: Defending the Defense of Stupid Ideas on University Campuses.” Journal of Controversial Ideas 1(1):1–13.
Vogel, Jonathan (1987). “Tracking, Closure, and Inductive Knowledge.” In Stephen Luper-Foy (ed.), The Possibility of Knowledge: Nozick and his Critics (Rowman and Littlefield):197–215.
Vogel, Jonathan (2004). “Skeptical Arguments.” Philosophical Issues 14(1):426–55.
Washington, N. and Kelly, D. (2016). “Who’s Responsible For This? Moral Responsibility, Externalism and Knowledge about Implicit Bias.” In Michael Brownstein and Jennifer Saul (eds.), Implicit Bias and Philosophy, volume 2 (Oxford University Press):11–36.
Wasserman, David (1996). “Let Them Eat Chances: Probability and Distributive Justice.” Economics and Philosophy 12(1):29–49.
Wedgwood, Ralph (2018). “The Unity of Normativity.” In Daniel Star (ed.), The Oxford Handbook of Reasons and Normativity (Oxford University Press):23–45.
Weinstein, N.D. (1987). “Unrealistic Optimism About Illness Susceptibility.” Journal of Behavioral Medicine 10(2):481–500.
Weisberg, Jonathan (2009). “Locating IBE in the Bayesian Framework.” Synthese 167(1):125–43.
Welpinghus, Anna (2020). “The Imagination Model of Implicit Bias.” Philosophical Studies 177(6):1611–33.
White, Roger (2009). “Evidential Symmetry and Mushy Credence.” In T. Szabo Gendler and J. Hawthorne (eds.), Oxford Studies in Epistemology (Oxford University Press):161–86.
Williams, Bernard (1973). “Deciding to Believe.” Reprinted in his Problems of the Self (Cambridge University Press).
Williams, Bernard (1985). Ethics and the Limits of Philosophy (Harvard University Press).
Williams, Bernard (2002). Truth and Truthfulness: An Essay in Genealogy (Princeton University Press).
Williams, E. and Gilovich, T. (2008). “Do People Really Believe They Are Above Average?” Journal of Experimental Social Psychology 44(4):1121–8.
Williamson, Jon (2018). “Justifying the Principle of Indifference.” European Journal for Philosophy of Science 8(3):559–86.
Williamson, Timothy (2000). Knowledge and its Limits (Oxford University Press).
Williamson, Timothy (2007). The Philosophy of Philosophy (Blackwell).
Wilson, T.D. and Brekke, N. (1994). “Mental Contamination and Mental Correction: Unwanted Influences on Judgments and Evaluations.” Psychological Bulletin 116:117–42.
Wodak, Daniel and Leslie, Sarah-Jane (2017). “The Mark of the Plural: Generic Generalizations and Race.” In Paul Taylor, Linda Alcoff, and Luvell Anderson (eds.), The Routledge Companion to the Philosophy of Race (Routledge):277–89.
Worsnip, Alex (2021). “Can Pragmatists Be Moderate?” Philosophy and Phenomenological Research 102(3):531–8.
Yamada, Masahiro (2011). “Getting It Right By Accident.” Philosophy and Phenomenological Research 83(1):72–105.
Zack, Naomi (ed.) (1997). Race/Sex: Their Sameness, Difference, and Interplay (Routledge).
Zheng, Robin (2016). “Attributability, Accountability, and Implicit Bias.” In Michael Brownstein and Jennifer Saul (eds.), Implicit Bias and Philosophy, volume 2 (Oxford University Press):62–89.
Zimmerman, Michael J. (1996). The Concept of Moral Obligation (Cambridge University Press).
Index
For the benefit of digital users, indexed terms that span two pages (e.g., 52–53) may, on occasion, appear on only one of those pages.
Above average effect 88n.2, 155n.8
Academic papers, refereeing of 128–129, 129n.8
Accuracy. See truth.
Affective biases 114
Affirmative action 24n.10, 131, 132n.12
Aggregation effects 107
Anchoring bias 109–110
Anderson, Elizabeth 39n.23, 131n.10, 206n.22
Anti-Defamation League (ADL) 27–28
Antony, Louise 19n.5, 164n.29, 177, 182n.17
Appiah, Kwame Anthony 18n.3, 79n.20, 101n.1
Aquinas, St. Thomas 136–137
Armstrong, David 197n.11, 203n.16
Arpaly, Nomy 187n.25
Arthur, John 112n.16
Attributions of bias 70–77, 145–167
  to experts 213–229
  no ‘presumption of innocence’ 217
  see also perspectival character of bias attributions; pejorative vs. non-pejorative attributions of bias
Audi, Robert 201n.13
Ballantyne, Nathan 88n.1, 220n.5, 221n.6, 226n.8
Banaji, Mahzarin 12n.9
Banks, Ralph Richard 39n.23
Barnett, Zach 214n.1
Baron, Jonathan 4n.4
Basu, Rima 79n.20
Bayesian perceptual psychology 182n.15
Bayesianism 178–179, 181–183, 211–212, 216n.3
Becker, Kelly 172n.2
Beddor, Bob 205n.18, 205n.20
Bedke, Matthew 209n.23
Beeghly, Erin 12n.9, 19n.6
Begby, Endre 12n.9, 79n.20, 205n.21, 206n.22
Behaviorism 179
Below average effect 155n.8
Better than average effect. See above average effect.
Bias about bias 26–30, 88
  inevitability of 100
  third-order biases 28–29
  see also bias blind spot, the
Bias blind spot, the 88–100, 154, 218, 220, 222–223, 228, 237
  inevitability of 100
Biased admissions processes 23–24, 50, 52–54, 153
Biased algorithms 18n.2, 19n.8, 55n.9
Biased beliefs vs. biased believers 175
Biased coins 22, 65, 106–107, 152–153, 165–166
Biased descriptions 32, 55–58
Biased estimates of individual contributions to joint projects 155n.8
Biased knowing 171–175, 190–195, 203
Biased outcomes 47–58
Biased processes 47–58
Biased representation 30–34
Biased samples 48–49
Biases of belief vs. biases of agency 111–115
Blanshard, Brand 141n.23
Blinding, norms of 126–131
Bodenhausen, Galen 105–106
Bolinger, Renee Jorgensen 79n.20
Bostock v. Clayton County, Georgia 24n.9
Bowls (lawn bowling), and origins of the term ‘bias’ 1–2
Brekke, Nancy 89n.5
Brennan, Geoffrey 6n.6
Brinkerhoff, Anna 187n.25
Brownstein, Michael 12n.9, 105–106, 105n.5
Buchak, Lara 103n.3
Burge, Tyler 19, 146, 164n.26
Buridan’s ass 160–163, 165
Byrd, Nick 105–106
Byrne, Alex 201–202, 225n.7
Carnap, Rudolf 151, 184n.21
Carr, E.H. 33n.18
Castro, Clinton 27n.14
Catholic League, The 27–28
Chomsky, Noam 179
Christensen, David 221, 226n.9
Clutter avoidance, norms of 187–188
Cohen, Stewart 226n.9
Collingwood, R.G. 33n.18
Comesana, Juan 176n.7
Common sense 195–203
Confirmation bias 154
Confucius 1
Constitutive norms of objectivity 133–136, 240
Context-sensitivity 106–107
Cook Wilson, John 13
Council on American Islamic Relations (CAIR) 27–28
Countercharges of bias 70–73
Crawford, Lindsay 187n.25
Danks, David 19n.8, 27n.14
Debiasing. See norms of objectivity
Degrees of bias 106
  see also severity of a bias
DeRose, Keith 176n.7, 202n.15, 225n.7
Devil’s advocates 131
Disagreement 68–77, 92–94, 110–111
  epistemology of 157–158, 213–229
  see also perspectival character of bias attributions
Dispositions, biases of people as 101–108, 238
Diversity of norms 63–68
“Divide and Choose” 127–128
Dogmatism 140, 143–144
Dretske, Fred 201–202, 203n.16
Dror, Lidal 192n.5
Du Bois, W.E.B. 192n.5
Easwaran, Kenny 179n.10
Ehrlinger, Joyce 88n.1, 89nn.3–5
Elga, Adam 219, 225, 226n.9
Emergent bias 39–41
End aversion bias 156–157
Endowment effect, the 156–157
Epistemic injustice 156–157
Epistemic norms 64
Explanation 34–36
  bottom-up vs. top-down explanations of bias 35–36
  causal explanations and bias 29–30
  causal vs. constitutive explanations of bias 46–47
  explanation of why robust pluralism about bias is true 59
  see also explanatory priority
Explanatory priority 42–44
Explication 151
Externalism about bias 96, 231–232
  metaphysical 237
  see also internalism about bias
False positives vs. false negatives 90–91, 96–97
Fantl, Jeremy 140–141
Faucher, Luc 105n.5
Fazelpour, Sina 27n.14
Feldman, Richard 158n.19
Following 138–139, 144
Following the argument wherever it leads 136–144, 240–241
Ford, Richard Thompson 39n.23
Frankish, Keith 105–106
Franklin, Benjamin 128
Frege, Gottlob 184n.21
Fricker, Miranda 156–157
Friendship, norms of 187–188
Fritz, James 79n.20
Gadamer, Hans-Georg 177n.8
Gallow, J. Dmitri 214n.1
Garcia, Jorge L.A. 35n.19, 39n.23, 112n.17
Gardiner, Georgi 79n.20
Gawronski, Bertram 105–106
Gendler, Tamar Szabo 12n.9, 79n.20, 105–106
Generic generalizations 206
Genesis, Book of 128
Gigerenzer, Gerd 164–165, 205n.19
Gilovich, Thomas 88nn.1–2, 89nn.3–5, 92n.7, 119n.23
Glasgow, Joshua 18n.3, 39n.23, 105n.5, 112n.16
Goldberg, Sanford 187n.25
Goldman, Alvin 203n.16, 205nn.18,20, 210n.25, 214n.1
Goodman, Nelson 178, 184n.21
Gorsuch, Neil 24n.9
Greenwald, Anthony 12n.9
Grice, H.P. 32
Griffiths, Tom 19n.4
“grue” 178, 184
Grundmann, Thomas 214n.1
Hahn, Ulrike 19n.7
Halo effect 156–157
Hamilton, Charles 39n.23
Harman, Gilbert 210n.24
Harris, Adam 19n.7
‘Harvard Plan, the’ 53
Haslanger, Sally 39n.23, 205n.21, 206n.22
Hate crimes 26–27
Hazlett, Allan 187n.25
Hedden, Brian 18n.2, 36–37, 156n.12
Henderson, David 205n.18
Heuristics and biases (Tversky and Kahneman) 64, 164
Higher-order biases. See bias about bias.
Hindsight bias 109–110, 156–157
Holier than thou effect 155n.8
Holroyd, Jules 12n.9, 105–106, 105n.5, 114n.18
Horgan, Terry 205n.18
Hostile attribution bias 156–157
Huebner, Bryce 39n.23
Huemer, Michael 183n.19, 193–194
Hume, David 210
Hunter, David 112n.16
Implicit bias 12–13, 105–106
Indifference, Principle of 179n.10, 182–183
Induction 178–179, 181–184, 210–211
Infanticide 194–195
Internalism (versus externalism) about bias 95–96, 212
Intrinsically biased beliefs 49
Introspection 88–98
  as unreliable method for detecting bias 90–91
  as biased method for detecting bias 90–91
Intuitions, philosophical 132, 193
Jefferson, Thomas 2, 116
Johnson, Gabbrielle 12n.9, 18n.2, 101n.1, 106n.6, 151n.5, 182n.17
Jussim, Lee 27
Kahneman, Daniel 5n.5, 29–30, 64, 116n.21, 128n.4, 164–165
Karlan, Brett 105–106
Kawall, Jason 187n.25
Keller, Simon 187n.25
Kelly, Daniel 12n.9, 105n.5
Knowing more by knowing less 130–131
Knowledge 171–188, 190–212, 241–243
Kornblith, Hilary 216n.2, 226n.9
Kripke, Saul 172n.2, 202n.15
Krugman, Paul 118n.22
Kuhn, Thomas 179–180, 184–186
Lackey, Jennifer 214n.1, 226n.9
Langton, Rae 206n.22
Language, knowledge of 179
Lavoisier, Antoine 128n.7
Leary, Stephanie 79n.20
Lemos, Noah 197n.11
Leslie, Sarah-Jane 206n.22
Levy, Neil 105–106, 105n.5
Lewis, David 106n.8, 197n.11, 201–202, 201n.13
List, Christian 40n.25
‘literacy requirements’ 54
Lord, Errol 226n.9
Loss aversion 103
Lucretius’ symmetry argument against the fear of death 157
lying vs. misleading 32–33
Machery, Edouard 101n.1, 106n.6
Madva, Alex 12n.9, 39n.23, 105–106, 105n.5
Make-up calls 119–122
Mandelbaum, Eric 12n.9, 105–106
Mason, Cathy 187n.25
Mason, Elinor 105n.5
Matheson, Jonathan 226n.9
McGrath, Matt 140n.22
McGrath, Sarah 193n.6, 201n.13, 203n.17, 232
Media bias 118
Media Matters for America 27n.14
Media Research Center 27n.14
Media watchdog groups 27–28
Metabias. See Bias about bias.
Mill, John Stuart 137
Modalized reasonableness 141
Moon, Andrew 226n.9
Moore, G.E. 197n.11
Moral dilemmas 80
Moral encroachment 79n.20
Morality, as sometimes requiring bias 83–84
Moss, Sarah 79n.20
Motivated irrationality 143–144
Multiple realizability of biases 103–106
Munton, Jessie 177n.9
Naïve realism (in the social psychologists’ sense) 98–100
Nebel, Jake 103n.3, 155n.9
Neta, Ram 176n.7
Nisbett, Richard 89n.5, 161n.23
Noise. See random error.
Norms 6–7, 63–70, 186–189
  conflict among 77–87
  see also epistemic norms; objectivity, norms of; norm-theoretic account of bias
norm-theoretic account of bias 3–9, 63–87, 94–97, 101–102, 119, 122–124, 145–149, 162, 230–231, 235–237
Novick, Peter 33n.18, 124
Nozick, Robert 140–141, 141n.24, 171n.1, 173n.4, 201–202
Nudges 128n.4
Objectivity, norms of 124–144, 239–241
  and bias and bias attributions 230–231
  see also blinding; constitutive norms of objectivity; Devil’s advocate; “Divide and Choose”; preemption, norms of; public reason; recusal; remediation, norms of; representation and inclusion
Omission bias 103
On Liberty 137
Optimistic bias 155n.8
Overcompensation 117–123, 136
Palmer, Stephen 177n.9
Parfit, Derek 157n.17
particularism 66–67
parts and wholes 34–41, 233
  unbiased wholes with biased parts 37–38
  biased wholes with unbiased parts 38–41, 107
pejorative vs. non-pejorative uses of bias 18–20, 146–148, 162–167
people as fundamental carriers of bias 44–47
perspectival character of bias attributions 70–77, 93–100, 230
Pettigrew, Richard 183n.19
Pettit, Philip 40n.25
Philosophical methodology 190–195
Plato 1, 137
Pluralism 44, 47, 58–59. See also robust pluralism about bias
Positive vs. negative biases 25
Practical (non-epistemic) norms on belief 187–188
Preemption, norms of (preventative norms) 126–131, 135
Prior probabilities 178–179, 182–183
Priority Problem, the 43
Pritchard, Duncan 176n.7
Probabilistic coherence, norms of 64, 164
Pronin, Emily 8n.8, 88nn.1–2, 89n.5, 92n.7
Pryor, James 225n.7
Publication bias 32n.17, 158n.20
Public reason, norms of 127
Putnam, Hilary 103, 184n.21
Quine, W.V. 181n.14
Racism and racial bias 79–80, 111–112
  institutional racism 39n.23
Random error 29–30, 63, 65, 116–117
Rationality 7, 63–64, 82–84
  as consistent with bias in the pejorative sense 76–77
  as requiring bias in the pejorative sense 78–80
Rationally arbitrary choices 160–162
  vs. morally arbitrary choices 162n.24
Rawls, John 191–193
Reasons-first program 67n.6
Recency bias 109–110, 155–156
Recusal, norms of 126–127, 130–131, 134–135
Reflective equilibrium, method of 191–193, 193n.6
Relativism and relativity of bias and bias attributions 3–12, 28, 74–75, 81, 86
Reliability and unreliability 115–117, 203–212
Remediation, norms of (ameliorative norms) 131–133, 135
Representation and inclusion, norms of 132–133
Representativeness heuristic 64
Republic, The 137
Rescorla, Michael 182n.15
“Rich Get Richer, Poor Get Poorer” bias effects 228
Ritchie, Katherine 206n.22
Robust pluralism about bias 44, 47, 58–59, 230, 234–235
Roedder, Erica 12n.9
Rosen, Gideon 105n.5
Ross, Lee 88nn.1–2, 89nn.3–5, 92n.7, 98
Russell, Bertrand 136–137, 159
Schroeder, Mark 67n.6, 79n.20, 140–141 Schwitzgebel, Eric 12n.9, 101n.1, 105–106, 114n.18 Scientific knowledge 179–180, 184–186 Self‐interest 191–193 Self‐other asymmetries 154–155 Self‐serving attributional bias 155 Sense perception 37–38, 177–178 as theory‐laden 180n.13 Sensitivity and insensitivity (of belief) 171–173, 176n.7, 200–201, 216–217 Setiya, Kieran 209n.23 Severity of a bias vs. entrenchment of a bias 36–37 vs. historical contingency 37 see also degrees of bias Shakespeare, William 2n.2 Sibony, Oliver 5n.5, 29–30, 116n.21, 128n.4 Siegel, Susanna 79n.20, 180n.13 Singer, Peter 193–194 Skepticism 180–181, 195–203, 218–229 Social roles and bias 87 Sorensen, Roy 88n.1 Sosa, Ernest 172, 172n.3, 176n.7, 226n.9 Sowell, Thomas 70n.10, 159 Speciesism 69 Stafford, Tom 12n.9 Standpoint epistemology 192n.5 Status quo bias 1, 63–64, 103, 154 Steadfast vs. Conciliatory views of disagreement 221–225 Stroud, Sarah 187n.25 Students for Fair Admissions v. President and Fellows of Harvard College 24n.10 Sullivan, Meghan 157n.17 Sullivan‐Bissett, Ema 105–106 Sunk cost fallacy 103 Sunstein, Cass 5n.5, 29–30, 116n.21, 128n.4 SURE (Series of Unsurprising Results in Economics) 32n.17 Sweetman, Joseph 105n.4, 114n.18 Symmetry 144, 152–160, 181–186 Systematic error 63, 65 Systematically misleading evidence 77 Tendencies, biases of people as. See dispositions. Testimony against interest 51 Thaler, Richard 128n.4, 156n.13 Thick evaluative concepts 108–111 Time biases 157 Toole, Briana 192n.5 Triple blinding 128–129, 129n.8 Truth 4–5, 30–34, 43–44, 57, 63, 72–73, 77–78, 82, 94–97, 171, 186, 188, 230–231 Ture, Kwame (Stokely Carmichael) 39n.23 Tversky, Amos 64, 164–165
Type 1 errors vs. Type 2 errors. See false positives vs. false negatives
Underdetermination 181–182, 195–196
Unger, Peter 106n.8
Unmanifested biases 102–103
Vagueness 106–107
Vargas, Manuel 105n.5
Vavova, Katia 226n.9
Väyrynen, Pekka 108n.10
Veber, Michael 142
Vogel, Jonathan 172, 172n.3, 196n.7, 200–201
Washington, George 2, 116
Washington, Natalia 105n.5
Wedgwood, Ralph 6n.6
Weisberg, Jonathan 183n.19
Welpinghus, Anna 101n.1, 105–106, 105n.4
White, Roger 183n.19
Williams, Bernard 140–141
Williamson, Timothy 72n.11, 150n.4, 172n.2, 174n.5, 176n.7, 198n.12, 203n.17, 225–226
Wilson, Timothy 89n.5
Wishful thinking 94–95
Wodak, Daniel 206n.22
Worsnip, Alex 140–141
Yamada, Masahiro 209n.23
Zheng, Robin 105n.5