Reasons Without Persons: Rationality, Identity, and Time [Hardcover ed.]
0198732597, 9780198732594
Oxford University Press, 2015

Brian Hedden defends a radical view about the relationship between rationality, personal identity, and time.

Reasons without Persons
Rationality, Identity, and Time

Brian Hedden


Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries

© Brian Hedden 2015

The moral rights of the author have been asserted

First Edition published in 2015

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above

You must not circulate this work in any other form and you must impose this same condition on any acquirer

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America

British Library Cataloguing in Publication Data
Data available

ISBN 978–0–19–873259–4

Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.


Acknowledgments

This book is a defense of a picture of rationality centered on the agent-at-a-time rather than on the agent-over-time. Rationality is concerned fundamentally with how you are at particular instants, rather than with how your attitudes and actions at different times fit together. It puts the time-slice at the center of theorizing about rationality. The title is inspired by Derek Parfit’s justly renowned Reasons and Persons (1984), which has deeply influenced my thinking on a wide range of issues relating to ethics, rationality, and metaphysics. He perfectly captures the essence of the time-slice-centric view that I will defend, writing that, “when we are considering both theoretical and practical rationality, the relation between a person now and himself at other times is relevantly similar to the relation between different people” (p. ).

I am deeply indebted to many colleagues and teachers who have discussed the ideas in this book and in many cases read substantial portions of the draft. I would like to thank Frank Arntzenius, Boris Babic, Michael Bratman, Rachael Briggs, John Broome, Jessica Brown, Cian Dorr, Tom Dougherty, Antony Eagle, Kenny Easwaran, Jane Friedman, David Gray, Hilary Greaves, Daniel Greco, Daniel Hagen, Alan Hájek, John Hawthorne, Richard Holton, Sophie Horowitz, Maria Lasonen-Aarnio, Harvey Lederman, Heather Logue, Anna Mahtani, Sarah Moss, Tyler Paytas, Douglas Portmore, Agustín Rayo, Jeff Russell, Joshua Schechter, Miriam Schoenfield, Paulina Sliwa, Matthew Noah Smith, Declan Smithies, Robert Stalnaker, Michael Titelbaum, Chris Tucker, Roger White, Timothy Williamson, and Steve Yablo. Thanks also to two referees for Oxford University Press, for their detailed and insightful comments.

I presented material from this book at a number of conferences and universities. Thanks to audiences at the Rocky Mountain Ethics Congress, the Bellingham Summer Philosophy Conference, the Formal Epistemology Festival, the University of St. Andrews/Arché, Oxford University, Cambridge University, MIT, the Australian National University, the University of Sydney, the University of Arizona, Arizona State University, Princeton University, and Yale University.

I am grateful for permission to use material originally published in various journals. Chapter 6 is adapted from “Options and the Subjective Ought,” Philosophical Studies, and Chapters 6 and 7 draw on “Options and Diachronic Tragedy,” forthcoming in Philosophy and Phenomenological Research.


The backbone of the argument of the book is presented in condensed form in “Time-Slice Rationality,” forthcoming in Mind.

This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation.

My greatest debt is to Caspar Hare, who served as my doctoral supervisor at MIT. He helped me to see connections between parts of my research that I had previously seen as separate and first encouraged me to develop my ideas into a book. His helpful suggestions and incisive criticisms have left their mark throughout.


Contents . Time-Slice Rationality . . . .

Rationality, Personhood, and Time Time-Slice Rationality The Roles of Rationality Looking Ahead

. General Motivations . Personal Identity . Internalism

. Against Diachronic Principles . Introduction . Against Conditionalization . Diachronic Principles for Preferences

. Against Reflection Principles . Reflection for Beliefs . Reflection for Preferences

. The Diachronic Tragedy Argument . . . . .

Conditionalization and Reflection Utility Conditionalization Preference Reflection Other Cases of Diachronic Tragedy Common Structure

. Options and Time-Slice Practical Rationality . . . . . . . .

Introduction Rationality and the Subjective Ought The Problem of Options Skirting the Issue: A Minimalist Proposal Desiderata for a Theory of Options Unsuccessful Theories of Options Options as Decisions Options and the Semantics of Ought

. Options and Diachronic Tragedy . Diachronic Tragedy and the Prisoner’s Dilemma . Depragmatization and the No Way Out Argument . Rationality and the Stability of Intentions

    

                            

i

i i

i

i

i

OUP CORRECTED PROOF – FINAL, //, SPi i

i

viii contents . Replacing Diachronic Principles . Replacing Conditionalization . Replacing Utility Conditionalization . Coda: Uniqueness, Coherence, and Kolodny

. Replacing Reflection Principles . Expert Deference . Preference Deference

. Doxastic Processes and Responsibility . Doxastic Justification . What about Reasoning? . Rational Evidence-Gathering

. Rationality and the Subject’s Point of View Bibliography Index

             


1
Time-Slice Rationality

This book is about what it takes to be rational. As a rough first pass, we can say that being rational is a matter of being sensible, given your perspective on the world. It is a matter of believing, desiring, and acting in sane, reasonable ways. Rationality does not require that you always get things right, but only that you make the best possible use of the limited information available to you. Of course, this is not to give a reductive analysis of rationality, one which avoids use of related normative terms like “sensible,” “sane,” and “reasonable,” but I doubt whether such a reductive analysis is possible. Nor is a reductive analysis needed in order to fix ideas. Examples will often suffice for that.

Suppose that your friend has a headache, and you have some pills that you justifiably believe to be pain relievers. But you’re wrong. They are really poison. Given that you want to help your friend, you rationally ought to give him the pills, even though they will in fact do him harm. You would be quite irrational if, despite your confidence that giving him the pills will relieve his headache and your desire to help him, you neglected to offer him the pills. Of course, when your friend winds up writhing around on the floor and foaming at the mouth, you will quite rightly regret offering the pills, but this does not mean that your initial decision was irrational. It just means that being rational is no guarantee of success in your endeavors.

So, what you rationally ought to believe is not just whatever is in fact true, and what you rationally ought to do is not just whatever will in fact satisfy your desires or make you happy. Rather, what you rationally ought to believe depends on the evidence available to you, and what you rationally ought to do depends on what your evidence suggests would best satisfy your desires. This is a subjective notion of rationality, on which how you rationally ought to be depends on your—the subject’s—perspective rather than on an objective, god’s-eye perspective on the world.1

1 See Nagel (1986) for discussion of objective vs. subjective points of view and the importance of this distinction for a number of philosophical problems.


1.1 Rationality, Personhood, and Time

An important foundational issue for how we should think about rationality concerns the relationship between rationality, personhood (or identity), and time, and this is the focus of this book. Is rationality only a matter of how you are at specific instants, independently of how you are at other times? Or does it also have to do with how you are over extended periods of time? Does it, for instance, put constraints on how your actions at different times should be related to each other? Does it put constraints on how your beliefs and desires are to evolve over time? Or does it just say what you ought to believe, desire, or do at particular times, irrespective of how you were beforehand or will be in the future?

It is extremely natural to think that rationality must put constraints on the relationship between one agent’s attitudes or actions at two different times that do not apply to the relationship between two different agents’ attitudes. There must be distinctively intrapersonal rational requirements, distinctive in the sense of not being reducible to interpersonal rational requirements. Agents are rationally required to coordinate with themselves in a way that they are not required to coordinate with others.

Take a synchronic case. If at one time I believe both that it is raining and that it’s not raining, I am irrational, but if I believe that it’s raining and you believe otherwise, neither of us need be irrational. Similarly, if I want to both bring my umbrella and not bring it, I am irrational, but if I want to bring an umbrella and you don’t, neither of us is thereby irrational.

Now turn to an example of the sort of diachronic case that will be our focus. In the epistemic realm, consider the contrast between the following two cases:

Fickle Frank
Frank is a physicist who changes his mind minute by minute. When he wakes, as he is having breakfast, he is pretty sure that the Everett multiple universe hypothesis is the right interpretation of quantum mechanics. By mid-morning, he abandons that belief in favor of the Copenhagen interpretation. By lunchtime, he switches camps once again, siding with the de Broglie-Bohm theory. But that doesn’t last, and he continues changing his mind throughout the day. Moreover, his shifting opinions are not the result of acquiring new evidence. Rather, he just . . . changes his mind.

The Frankfurt Physics Conference
A major conference on quantum mechanics is being held in Frankfurt. In attendance are proponents of a wide range of interpretations of quantum mechanics. There is a team of researchers from MIT who believe that the Everett multiple universe hypothesis is the best explanation of the available data. Seated next to them is an eminent professor from Cambridge who advocates the Copenhagen interpretation. Further down the row is a philosopher of physics who recently authored a book arguing that the de Broglie-Bohm theory is correct. In all, the lecture hall is filled by advocates of at least a dozen competing quantum mechanical views.


Fickle Frank is not just fickle, but irrational. His beliefs, confidently held and then promptly abandoned, are unjustified. By contrast, nothing about the description of the Frankfurt conference requires that any of the attendees be irrational. Indeed, on a natural spelling-out of the case, they seem like paragons of rationality, carefully evaluating the evidence and debating their views with colleagues.

An obvious explanation of the difference between these two cases is that there are rational requirements that apply within agents that do not apply across agents. If we take snapshots of a single agent at successive moments, the agent’s beliefs in each snapshot should cohere in a certain way with her beliefs in preceding and succeeding snapshots. Of course, that’s not to say that an agent’s beliefs cannot change over time, but only that they should do so in a smooth manner, free from the sort of wild, inexplicable fluctuations that characterize Frank’s evolving view of the world. By contrast, if we line up all the attendees at the physics conference, one physicist’s beliefs do not need to cohere in any particular way with the beliefs of the physicist to her left or the physicist to her right. This suggests that there are irreducibly intrapersonal requirements of rationality that apply to belief.

Something similar seems plausible in the case of desires or preferences. Consider:

Career Decisions
Julie is in college trying to decide what career to pursue. Early in her sophomore year, she wants to study medicine and become a surgeon. But by the next week, this desire fades and she desires nothing more than to become a journalist. Shortly thereafter, she is fully committed to becoming a biologist. And she continues changing her mind throughout college. Moreover, it is not as though these shifts are the result of her learning that she is repulsed by blood, or that she objects to the superficial media coverage so commonplace nowadays, or that she feels claustrophobic spending all day in a lab. She just . . . changes her mind.

The Career Counselor’s Office
The college career counselor’s office is filled with students seeking advice about how to choose their studies to match their career goals. The counselor is currently talking to a sophomore who wants to become a doctor and needs to know which courses to take to get into med school. Next in line is a prospective journalist who wants to know if working on the university newspaper will help her chances. After her is a student who wants to be a biologist but wants more details about job prospects.

Once again, Julie seems irrational, whereas the students in the career office do not (nor does the group as a whole). Rational agents do not have their desires fluctuate inexplicably over time (though of course their desires can change in subtler ways), whereas groups of agents can consist of members with very different


desires without anyone being irrational. This again suggests that there are intrapersonal requirements of rationality applying to desire that are not reducible to interpersonal requirements of rationality.

There is more than just intuition to support intrapersonal requirements of rationality. First, it is a commonly held thought that rationality is important in part because being rational generally helps us achieve our ends. But Briggs () argues that intrapersonal norms are needed if rationality is to be useful in this regard:

But some sort of intrapersonal coherence is necessary for inference and planning; an agent who conducts his or her epistemic life correctly will have earlier and later selves that cohere better than a pair of strangers. The sort of diachronic coherence in question should not be so strong as to demand that agents never change their beliefs. But it should be strong enough to bar agents from adopting belief revision policies which lead to changes that are senseless or insupportable by their current lights.

Similarly, Broome () argues,

You could not manage your life if your beliefs and intentions were liable to vanish incontinently. This is most obviously true of intentions. To bring some intertemporal coherence to our lives, we regularly decide at one time to do something at a later time. Making decisions will not actually achieve coherence unless we generally do as we decide. To decide is to form an intention, and to be effective, that intention must generally persist until we put it into effect. Only a little less obviously, our beliefs must also persist if our lives are to be coherent. In getting about the town, you have to rely on the persistence of your beliefs about where the various streets and shops are.

Indeed, this seems to capture an important part of why Frank’s fluctuating beliefs and Julie’s fluctuating desires seem irrational. If they keep changing their minds about what to achieve and how best to achieve it, they will wind up achieving nothing. Instead, they will start many projects but finish none, because by the time they have embarked on one, either they will no longer think it is a promising way of achieving their goals or they will have adopted different goals altogether. Frank will apply for research grants, but by the time he wins such a grant, he will have abandoned the background theory on which the research proposal was based. And Julie will change her major over and over as she changes career goals. She will rack up fees for course changes and consistently be behind the other students in her major who chose one thing and stuck with it. Finally, she will need extra years and extra loans to finish her studies, if she ever does. Frank and Julie’s vacillating attitudes cannot count as rational, in part because they are so debilitating.

A similar point can be made in the epistemic case, where one might be tempted to think that being rational is supposed to help one arrive at true beliefs. Rational


beliefs are supposed to be more likely2 to be true than false ones, else what is the point of being rational? Indeed, many epistemologists, who agree on little else, hold that this link with truth is at least partly constitutive of epistemic rationality.3 Even if requirements of epistemic rationality do not obtain that status because following them tends to lead to true beliefs, it seems at the very least that being rational should, ceteris paribus, improve one’s chances of reaching the truth. But if you change your beliefs willy-nilly, without having gained some relevant evidence that supports a change in view, whether you arrive at the truth will be a matter of mere luck. And likewise, even if you do at some point arrive at a true belief on some topic, you will be unlikely to maintain a true belief on the matter. Frank may wind up with a true belief about quantum mechanics by mid-afternoon, but he will be back to believing a falsehood by dinner once he changes his mind again. Ruling out this vacillation, which is not truth-conducive, as irrational might seem to require distinctively intrapersonal requirements of rationality.

2 It is not entirely clear how one should cash out this notion of rational beliefs being more likely to be true. Is it that the objective chance of a rational belief being true is higher than that of an irrational belief being true? Of course, given how the world actually is—namely, that it’s an orderly, induction-friendly world—it seems true on virtually all conceptions of rational belief that rational beliefs have a higher objective chance of being true, regardless of whether you are a reliabilist or an internalist, and regardless of whether you buy into diachronic norms or not. Or is the notion of likelihood supposed to be a matter of rational credence—that it’s rational to be more confident that rational beliefs will be true than that irrational beliefs will be true? On many views, this will follow trivially, e.g. if you are rationally required to think that most of your beliefs are rational and that most of them are true. I will not delve into details here. Instead, I just want to note that it isn’t always clear what epistemologists mean when they say that rational beliefs must be more likely to be true.

3 For instance, Goldman (), an externalist, writes that “true belief is the ultimate value in the epistemic sphere, and various belief-forming processes, faculties, or mechanisms are licensed as [epistemically] virtuous because they are conducive to true belief. Beliefs are justified when they are produced by these very truth-conducive processes.” In the same vein, BonJour (), an internalist, holds that “What makes us cognitive beings at all is our capacity for belief, and the goal of our distinctively cognitive endeavors is truth: we want our beliefs to correctly and accurately depict the world . . . The basic role of justification is that of a means to truth . . .” Finally, Quine (), a naturalist, writes that “For me normative epistemology is a branch of engineering. It is the technology of truth-seeking, or, in a more cautiously epistemological term, prediction . . . There is no question here of ultimate value, as in morals; it is a matter of efficacy for an ulterior end, truth or prediction.” Berker () argues against this view that epistemic rationality is a teleological enterprise with true belief as its goal.
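The two readings distinguished in footnote 2 can be given a rough formal gloss; the regimentation here is mine, not the book’s. Where Ch is objective chance, Cr is rational credence, and B ranges over beliefs:

\[ \textit{Chance reading:} \quad Ch(B \text{ is true} \mid B \text{ is rational}) > Ch(B \text{ is true} \mid B \text{ is irrational}) \]

\[ \textit{Credence reading:} \quad Cr(B \text{ is true} \mid B \text{ is rational}) > Cr(B \text{ is true} \mid B \text{ is irrational}) \]

The first is a claim about worldly statistics; the second is a claim about what it is rational to expect, and, as the footnote observes, it can hold trivially on views where you are rationally required to regard most of your beliefs as both rational and true.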


Finally, rationality seems closely related to inference and reasoning. This is reflected in our language. “Rational” seems to mean much the same thing as “reasonable,” “ratiocination” is an antiquated term meaning the same as “reasoning,” and the noun “rationality” is arguably synonymous with “reason” (read as a mass noun, as in Hume’s famous dictum, “Reason is, and ought to be, the slave of the passions”).4 One might even think that the goal, or at least an important goal, of theorizing about rationality is to characterize good reasoning and to distinguish it from bad reasoning.5 But reasoning is something that, in paradigmatic cases, is undertaken by a single agent over a stretch of time. (Groups of agents can perhaps also be said to reason, for instance in a public debate, but arguably the sense in which groups reason is derivative. Groups reason in virtue of the fact that their members reason and periodically share their views in an attempt to influence the reasoning of others.) If rationality is partly about reasoning, and reasoning is something that an agent does over time, then there must be intrapersonal requirements of rationality, such as requirements stating what an agent ought to believe at one time, given that she has gone through a certain pattern of reasoning leading up to that time.

4 Hume (1739–40, Book 2, Part 3, Section 3).

5 In Chapter 10 I argue against the view that rationality is largely concerned with reasoning.

1.2 Time-Slice Rationality

What I have said so far seems obvious, almost a truism. But I am convinced that it is all wrong. This way of thinking about rationality, on which there is an important distinction between the intrapersonal case and the interpersonal case, is misguided. There are no requirements of rationality that apply within agents that do not also apply across agents. As far as rationality is concerned, there is no difference in kind between the relationship an agent at one time bears to herself at another time and the relationship one agent bears to another. The relationship between time-slices of the same agent is not fundamentally different, for purposes of rational evaluation, from the relationship between time-slices of distinct agents. The mere fact that me-now and me-later are related by the relation of personal identity over time, whereas me-now and you-now are not, plays no role in the theory of rationality.

Moreover, we don’t need requirements of rationality that govern how agents are over time in order to account for the contrast between Fickle Frank and the Frankfurt Physicists, or between Julie and the other students in the career counselor’s office. Frank and Julie are irrational not because of how they are over time, but because of how they are at particular times. Frank changes his beliefs without any changes in his evidence, so at most times during the day, his beliefs are not supported by the evidence he has at that moment, whereas the Frankfurt Physicists presumably have different bodies of evidence, a fact which permits them to each have different beliefs. (Note that since what you ought to believe


depends on your total evidence, the Frankfurt physicists will likely differ in their total evidence even if they have read roughly the same articles and studies. Of course, top physicists often disagree even when they share approximately the same evidence, but in my view this may be because they are not all perfectly rational, even if they are extremely smart and talented.) And Julie changes her goals without any changes in the tastes she has or the evidence she possesses, which arguably means that at most times during the day, her goals don’t make sense given the rest of her mental states at that time. In this way, we can account for why Frank and Julie are irrational without invoking intrapersonal requirements governing how one’s views should change over time.

This book is an extended defense of an alternative picture of rationality on which the locus of rationality, as it were, is not the temporally extended agent, but rather the time-slice. Aggregations of time-slices are like aggregations of agents, in the sense that there are no requirements of rationality which state how the attitudes of one member of the aggregation should be related to the attitudes of another member of the aggregation. I call this picture “Time-Slice Rationality.” It is the view that rationality is concerned entirely with what attitudes you have at particular times. Whether you are rational at a time depends solely on the attitudes you have at that particular time, independently of their relation to attitudes you have, or believe you have, at other times.6

The preceding is rather vague, so let me spell out in more detail the theses to which Time-Slice Rationality is committed. It involves, in the first instance, two sweeping claims. The first is that there are no diachronic requirements (or principles or norms; I use these terms interchangeably) of rationality. Diachronic requirements say how you rationally ought to be as a function of how you are at other times. Time-Slice Rationality holds, instead, that all principles of rationality are synchronic. How you rationally ought to be at a time directly depends only on your mental states at that time, not on how you (or time-slices psychologically continuous with you) were in the past or will be in the future.7

6 Time-Slice Rationality is related to a view which Moss (forthcoming) has called “Time-Slice Epistemology.” While my view is quite similar in spirit to Moss’s, mine extends beyond epistemology to apply also to the rationality of preferences and actions. My view also goes beyond Moss in a further respect. Moss focuses on the claim that all epistemic norms should be synchronic. But my view also says that your beliefs about the attitudes you will have in the future play no special role in determining what attitudes you ought to have now. As we shall see in Chapter 4, this rules out Reflection principles, which give a special role to your beliefs about your future attitudes but are nonetheless synchronic principles.

7 Note that I am here appealing to the notion of propositional justification. This is a forward-looking, ex ante notion of rationality or justification concerned with what beliefs, desires, and actions you ought to have or perform. But there is also the notion of doxastic justification (and analogs for desires and actions), which is concerned with the basis on which you formed or hold some belief (or the basis on which you formed some desire or decided on some course of action). This is a backward-looking, ex post notion of justification. Time-Slice Rationality is primarily concerned with propositional justification, but I will briefly discuss doxastic justification in Chapter 10.


Synchronicity
All requirements of rationality are synchronic.

Importantly, Synchronicity, and hence Time-Slice Rationality, allows that your past attitudes can affect what you ought to believe, desire, or do now, but only by affecting your present mental state. So Time-Slice Rationality is compatible with the obvious datum that your past beliefs and desires affect which actions you performed in the past, which in turn affect what evidence you have now. But it is your present evidence that directly determines what you ought to believe right now (and more generally, it is your evidence at a time that determines what you ought to believe at that time).

I want to flag here that there are two different ways in which a principle might be diachronic and hence incompatible with Time-Slice Rationality. Diachronic principles can be either narrow-scope or wide-scope. Narrow-scope diachronic principles say how you ought to be at a time as a function of how you are at other times, and so if you violate a narrow-scope diachronic principle, there is a particular time at which you are irrational (by the lights of that principle). Wide-scope diachronic principles, by contrast, concern a notion of “global rationality” which goes beyond rationality-at-a-time. If you violate a wide-scope diachronic principle, there may be no particular time at which you are irrational (by the lights of that principle). Each of your time-slices, so to speak, is rational, but you—the temporally extended agent—are not. I will discuss this distinction in Chapter , Section , footnote , but for now I just want to note that Time-Slice Rationality denies the existence of both types of diachronic principles.

Even if the attitudes you have at other times play no non-derivative role in determining what you ought to believe, desire, or do now, there is still a second way for there to be irreducibly intrapersonal requirements of rationality. For it might be that even though the attitudes you in fact have at other times do not affect how you ought to be now, nevertheless the attitudes you believe you had in the past or will have in the future affect how you ought to be in some special way. It might be that your beliefs about the attitudes you (or psychological continuants of you) will have in the future differ importantly from your beliefs about the attitudes that other people have in terms of their impact on what attitudes you ought to have now. Such norms would not, strictly speaking, be diachronic. They would not say how your attitudes must in fact be related over time; instead, they would say how your attitudes now should be related to certain other attitudes you have now,


namely your beliefs about what attitudes you will have in the future. Time-Slice Rationality rejects this second type of irreducibly intrapersonal requirements of rationality. It espouses Impartiality:

Impartiality
In determining how you rationally ought to be at a time, your beliefs about what attitudes you have at other times play the same role as your beliefs about what attitudes other people have.

According to Time-Slice Rationality, at the most fundamental level, your beliefs about what attitudes you have at other times play exactly the same role in determining how you ought to be as your beliefs about what attitudes other people have. Your belief that you will be optimistic about the Democrats’ chances in  plays the same role in determining what you ought to believe as your belief that someone else is or will be optimistic about the Democrats’ chances in . There is nothing special about the fact that in the former case, your belief is about your future beliefs, while in the latter case it is about the beliefs of someone else.

These two commitments give us a precise characterization of Time-Slice Rationality:

According to Time-Slice Rationality, at the most fundamental level, your beliefs about what attitudes you have at other times play exactly the same role in determining how you ought to be as your beliefs about what attitudes other people have. Your belief that you will be optimistic about the Democrats’ chances in  plays the same role in determining what you ought to believe as your belief that someone else is or will be optimistic about the Democrats’ chances in . There is nothing special about the fact that in the former case, your belief is about your future beliefs, while in the latter case it is about the beliefs of someone else. These two commitments give us a precise characterization of Time-Slice Rationality: Time-Slice Rationality

• Synchronicity: All requirements of rationality are synchronic.
• Impartiality: In determining how you rationally ought to be at a time, your beliefs about what attitudes you have at other times play the same role as your beliefs about what attitudes other people have.
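For concreteness, here are the standard Bayesian formulations of the two best-known principles that these commitments rule out; the book’s official targets are formulated in Chapters 3 and 4, so take these as placeholders. Conditionalization is a diachronic principle and so violates Synchronicity. Where Cr at t1 and t2 are your earlier and later credence functions, and E is the total evidence you acquire in between:

\[ Cr_{t_2}(H) = Cr_{t_1}(H \mid E) = \frac{Cr_{t_1}(H \wedge E)}{Cr_{t_1}(E)} \]

Reflection, by contrast, is a synchronic principle, but it violates Impartiality, since it tells you to defer to your own anticipated future credences in a way that you need not defer to anyone else’s:

\[ Cr_{t_1}(H \mid Cr_{t_2}(H) = x) = x \quad \text{for } t_2 > t_1 \]

Impartiality would have you treat the supposition “my credence in H at t2 will be x” no differently, at the fundamental level, from the supposition “so-and-so’s credence in H is x.”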

1.3 The Roles of Rationality

I am advocating a particular conception of rationality. But before plunging into a defense of that conception of rationality, it will be helpful to say a bit in more general terms about how I am thinking about rationality. Now it may be that the terms “rational” and “irrational” are employed in many different and sometimes conflicting ways, so that ordinary usage of the terms fails to align perfectly with any specific theory of rationality, and I do not want my defense of Time-Slice Rationality to amount simply to a flatfooted insistence on my preferred way of talking. Instead, I want to start by focusing on the work that the notion of rationality is supposed to do in a broader theory of agency and normativity. By focusing on some interesting and important theoretical roles, we may be able to home in on a concept of substantial philosophical interest which is deserving of the name “rationality.” There may be other concepts which are also more or less compatible with our usage of the term “rational.” I don’t want to quibble


over language. Instead, I just want to identify some theoretical roles which are of particular interest in philosophy and to focus on a concept which satisfies them.

In ordinary speech, we evaluate a vast array of different things as rational or irrational: people, dispositions, habits, emotions, and even laws, city layouts, voting systems, arguments, and conversations. Rather than attempt to devise a theory of rationality on which there are rational norms governing all of these multitudinous things—a strategy which would no doubt result in a gerrymandered and unenlightening theory—I want to focus on a more narrow set of things as the objects of rational evaluation. In particular, I want to focus on the rationality of beliefs, desires, and actions. Then, the hope will be that insofar as we can talk about the rationality of laws, emotions, dispositions, city plans, arguments, and the like, this will be derivative on the rationality of beliefs, desires, and actions, as things that either result from or cause irrational beliefs, desires, or actions (though perhaps city plans and laws are only irrational insofar as they are inefficient or arbitrary).

So let’s focus on beliefs, desires, and actions in theorizing about rationality. Or rather, let’s focus on doxastic (belief-like) attitudes, whether fine-grained credences (aka degrees of belief or subjective probabilities) or coarse-grained beliefs; conative (desire-like) attitudes, whether fine-grained utilities or more coarse-grained preferences or desires; and actions.

So much for delineating the objects of rational evaluation. What more can we say about rationality? I think that we can get a good grip on the notion of rationality by highlighting three central theoretical roles which we want the notion of rationality to play.

The first role is an evaluative one. If you violate principles of rationality, there is a distinctive sort of criticism that can be leveled against you. While it is exceedingly difficult to say precisely what kind of criticism this is, a few rough comments will suffice. If you are irrational, you are doing something that even by your own lights is a mistake. You are making a mistake that you are in some sense specially placed to recognize and correct. Being irrational is a matter of making a mistake that is in some sense internal to you and not just the result of the world being uncooperative. That’s why having a false belief needn’t be irrational, but having a belief which is unsupported by your own evidence is irrational.

The second role for the notion of rationality is a predictive and explanatory one. We predict that, ceteris paribus, you will do more or less whatever it is that you rationally ought to do, have more or less whatever beliefs you rationally ought to have, and have more or less whatever preferences you rationally ought to have. Of course, the ceteris paribus clause is crucial. If we have evidence that you are irrational in some specific way, for instance the evidence coming from recent


developments in behavioral economics, then we will not predict that you will be as you rationally ought. So it is only against a general, defeasible, background assumption of your rationality that we use the theory of rationality to predict what you will believe, desire, and ultimately do. Nevertheless, the predictive role of the notion of rationality is an important one.

The prospective predictive role of the notion of rationality is paired with a retrospective explanatory role. We can explain why you believe such-and-such by pointing out that you were rational and had evidence which supported belief in such-and-such. And we can explain why you acted as you did by pointing out that you were rational and had beliefs and desires which recommended the action that you wound up performing.

The third and final theoretical role of the notion of rationality is a guidance-providing one. The theory of practical rationality is supposed to be action-guiding, and the theory of epistemic rationality is supposed to be belief-guiding. Now, it is tempting to cash out this guidance-providing role by saying that you should always be in a position to know what rationality requires of you. While I have some sympathy with this formulation, it is also potentially problematic. For Williamson (2000, ch. 4) has argued that no states are luminous, in the sense that whenever they obtain, you are in a position to know that they obtain.8 Hence even the state of being such that you ought to believe such-and-such, or ought to do such-and-such, will fail to be luminous; there will be cases in which you ought to have some belief or perform some action without being in a position to know that you ought to do so. Now, it is worth emphasizing that Williamson’s argument relies on controversial assumptions about knowledge and can be resisted.9 But I propose to remain neutral on the soundness of Williamson’s anti-luminosity argument and refrain from cashing out action- and belief-guidingness in terms of always being in a position to know what you rationally ought to do or believe.

Instead, we should demand that the rational ought be guidance-providing in the sense that what you rationally ought to do or believe is sensitive to your evidence and your uncertainty about the world, so that our theory of the rational ought can be of some aid to you. What you rationally ought to do or believe should depend on what information you have available, rather than simply on how the world in fact is. Insofar as some fact affects what you ought to do or believe, it must do so by way of affecting your mental state. If some fact obtains but does not impact on your mental state in any way, then it cannot affect how you rationally ought to be.

8 A caveat: As Williamson notes, his argument does not apply to trivial cases of states which always obtain or always fail to obtain, and so these states may in fact be luminous. But this does not threaten the import of his conclusion.

9 See especially Berker (2008) for a rebuttal of Williamson’s argument.


In sum, the theory of rationality is supposed to play an evaluative role, a predictive/explanatory role, and a guidance-providing role. There is, of course, much more to be said about these roles and the relations between them, but for now what I have said is enough to clarify somewhat the notion of rationality and the work it is supposed to do. Being able to appeal to the roles meant to be played by the notion of rationality will be crucial in what follows, allowing us to evaluate competing claims about rationality without relying solely on intuitions about cases and without simply arguing about words and flatfootedly insisting on one’s own preferred way of talking.

1.4 Looking Ahead

The remainder of this book is dedicated to defending Time-Slice Rationality. In the next chapter, I begin with two general motivations for this picture of rationality. The first appeals to puzzle cases for personal identity over time,10 while the second is based on a moderate form of internalism about rationality.

Having motivated Time-Slice Rationality as a general thesis, I turn my attention to the details. After all, even if Time-Slice Rationality seems initially attractive, one might doubt whether it is actually workable. What exactly would such a time-slice-centric picture of rationality look like? Which principles would it reject, and which would it espouse? Chapters 3–9 are dedicated to answering these questions.

Chapters 3 and 4 are negative. They are devoted to arguing against two sorts of potential requirements of rationality which conflict with Time-Slice Rationality. In Chapter 3, I argue against diachronic principles, which violate Synchronicity, and in Chapter 4 I argue against reflection principles, which violate Impartiality. Importantly, while diachronic principles and reflection principles for beliefs have been widely debated, I also investigate analogous principles governing preferences.

But there is a problem. In Chapter 5, we shall see that diachronic principles and reflection principles are supported by considerations of intrapersonal coherence. If you violate one of these principles, then you are predictably exploitable over time. On this basis, the Diachronic Tragedy Argument (as I will call it) concludes that it is irrational to have beliefs or preferences which violate the principles in question. Defending Time-Slice Rationality will thus require rebutting the Diachronic Tragedy Argument. Chapter 6 develops a time-slice-centric approach to the rationality of actions, and Chapter 7 shows how this approach can be used to rebut the Diachronic Tragedy Argument.
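To see the shape of the exploitability claim, here is the classic diachronic Dutch book against a Conditionalization violator (due to David Lewis, as reported by Teller), sketched with illustrative numbers of my own choosing; the book’s own Diachronic Tragedy cases come in Chapter 5. Suppose at t1 you have Cr(E) = 0.5 and Cr(H | E) = 0.8, but your update rule will give you credence 0.6 in H if you learn E. A bookie who knows your rule offers you at t1: (i) a conditional bet on H given E at your fair price of 0.8, with the price refunded if E fails; and (ii) the sale of a bet on not-E with stake 0.2, for which you receive the fair premium 0.1. At t2, if E has occurred, the bookie buys the first bet back at your new fair price of 0.6. Every transaction is fair by your own lights at the time it is made, yet:

\[
\begin{array}{l|ccc|c}
 & \text{(i)} & \text{(ii)} & \text{buyback} & \text{net} \\
\hline
\neg E & 0 & 0.1 - 0.2 & & -0.1 \\
E & -0.8 & +0.1 & +0.6 & -0.1 \\
\end{array}
\]

You lose 0.1 however the world turns out. In general, the guaranteed loss is (p − q) · Cr(E) whenever your post-E credence q in H differs from p = Cr(H | E), with the bets reversed if q exceeds p. Chapter 7 is where the book resists the inference from this sort of predictable loss to irrationality.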

10 I briefly discuss the relation of personal identity at a time in Chapter .


In Chapters 8 and 9, I replace the jettisoned principles with improved principles which are time-slice-centric and which do much of the work we expected from the old principles. That is, where diachronic principles and reflection principles yielded the right results, so do my replacement principles. But where the old principles go awry, my new principles give the right results. This is important. It shows that these time-slice-centric principles yield coherence over time as a byproduct. That is, if you obey these new principles at every time and are thus rational by the lights of Time-Slice Rationality, then in most cases you will also be rational by the lights of traditional, non-time-slice-centric views of rationality. For this reason, my view is less revisionary than one might have suspected. Instead, it both gives intuitively plausible results about particular cases and puts principles of rationality on a firmer footing than traditional non-time-slice-centric theories.

In Chapter 10, I explore what Time-Slice Rationality can say about doxastic justification, reasoning, and evidence-gathering. Chapter 11 highlights themes from preceding chapters and raises further issues for future research.

So much for stage-setting. Let’s get started!


2
General Motivations

Time-Slice Rationality is motivated by two general considerations. The first is puzzle cases about personal identity over time, and the second is internalism about rationality. I take these up in turn.

But before delving in, let me mention a third motivation for Time-Slice Rationality that will emerge only over the course of the remainder of the book, and in particular in Chapters 8–9. This motivation is the overall simplicity and unity of the resulting time-slice-centric picture of rationality. I believe that the time-slice-centric norms which I propose are superior to the more orthodox norms which they are intended to replace. The considerations of internalism and puzzle cases for personal identity over time, to be discussed in this chapter, serve as an initial motivation for exploring the possibility of a fully time-slice-centric theory of rationality, but ultimately, Time-Slice Rationality must also be assessed on the theoretical virtues of the resulting package view itself, compared to those of alternative non-time-slice-centric packages.

2.1 Personal Identity

If what you rationally ought to believe, desire, or do depends in a special way on the attitudes that you have, or believe you have, at other times, then what you ought to believe, desire, or do crucially depends on facts about personal identity over time. That is, it depends on facts about what makes it the case that a later person is or is not identical to (i.e. is or is not the same person as) an earlier person. But the relation of personal identity over time is problematic in ways that make it doubtful whether it should play such an important role in the theory of rationality. Consideration of puzzle cases for personal identity—ones where it is unclear whether a later person is identical to an earlier person—suggests that what you ought to believe or do at a time does not depend on facts about identity. If this is right, then requirements of rationality should avoid reference to the relation of personal identity over time; they should be impersonal.1 This constraint supports both parts of Time-Slice Rationality—Synchronicity and Impartiality.

1 I will shortly discuss the option of replacing reference to personal identity over time with reference to surrogate notions like psychological continuity in formulating requirements of rationality.


I will not be attempting to give a theory of personal identity over time here. Quite the contrary, my goal is to show that personal identity is so messy and problematic that it should play no important role in the theory of rationality. To see this, we must go into some detail about the metaphysics of personal identity over time. I want to go into just enough detail to give a sense of what this debate involves and to give you the feeling that this debate could not possibly need to be settled in order to come up with a theory of rationality.

Consider some standard cases where the identity facts are unclear and controversial. Start with Teletransportation:

Teletransportation
You enter the teletransporter. The machine scans you, identifying and recording the exact molecular structure of your body. Then, the information is sent to Australia, and your body is destroyed just as a molecule-for-molecule copy of your body is created in Sydney.

Do you survive teletransportation? Is it like air travel, only quicker and without the jetlag? Or do you die in the teletransporter when it destroys your body? In other words, is the person who exits the machine in Sydney the same person as you, or merely a physical duplicate of you?

The physical criterion of personal identity holds that a later person is identical to an earlier person just in case they have the same body. You and the person who exits the machine in Sydney do not have the same body. Your body was destroyed, and a molecule-for-molecule copy was created in Sydney. So according to the physical criterion of personal identity, you do not survive teletransportation. By contrast, the psychological criterion of personal identity over time holds that it is psychological facts which determine the identity facts. What matters is that the later person and the earlier person have a certain amount of psychological continuity.2 If psychological continuity rather than physical continuity is what determines identity, then you do survive teletransportation, albeit with a new body.

2 The relevant psychological connections include that which holds between a later memory and the experience of which it is a memory, that which holds between an earlier intention and the carrying out of that intention, and that which holds between beliefs and desires which are had at both the earlier and the later times. Note that the relation between a memory and the remembered experience, and the relation between an intention and the later carrying out of the intention, presuppose the relation of personal identity. You can only remember an experience that you had, and you can only later on carry out an intention that you made earlier. This raises the worry that the psychological criterion of personal identity will be circular. Following Parfit (1984), we can avoid this circularity worry by replacing memory with quasi-memory, which is just like memory except that it doesn’t presuppose identity (you can quasi-remember experiences that someone else had), and by similarly replacing talk of carrying out an earlier intention with talk of quasi-carrying out an earlier intention (where you can quasi-carry out an earlier intention that was formed by someone else).


Teletransportation is already a tricky case. But things get worse. Consider Parfit’s (1984) Combined Spectrum:

The Combined Spectrum
At the near end of this spectrum is the normal case in which a future person would be fully continuous with me as I am now, both physically and psychologically. This person would be me in just the way that, in my actual life, it will be me who wakes up tomorrow. At the far end of this spectrum the resulting person would have no continuity with me as I am now, either physically or psychologically. In this case the scientists would destroy my brain and body, and then create, out of new organic matter, a perfect Replica of someone else. Let us suppose this person to be . . . Greta Garbo. We can suppose that, when Garbo was , a group of scientists recorded the states of all the cells in her brain and body. In the first case in this spectrum, at the near end, nothing would be done. In the second case, a few of the cells in my brain and body would be replaced. The new cells would not be exact duplicates. As a result, there would be somewhat less psychological connectedness between me and the person who wakes up . . . Further along the spectrum, a larger percentage of my cells would be replaced, again with dissimilar cells. The resulting person would be in fewer ways psychologically connected with me, and in more ways connected with Garbo, as she was at the age of  . . . Near the far end, most of my cells would be replaced with dissimilar cells. The person who wakes up would have only a few of the cells in my original brain and body, and between her and me there would be only a few psychological connections.

In the first case in the spectrum, the future person is clearly Parfit. In the last case, the person is clearly not Parfit. But in the middle cases, the identity facts are unclear. There is some degree of physical and psychological continuity with earlier Parfit, but it is not clear whether these connections are strong enough for the relation of personal identity to hold between them. Exactly how much physical and/or psychological continuity is required for identity? Is the phrase “is the same person as” vague (just as the predicate “bald” is vague)? Could the relation of personal identity over time come in degrees, or must identity be an all-or-nothing affair?

Yet further problems arise when we turn to branching cases. Consider Double Teletransportation:

Double Teletransportation3
One person (call her “Pre”) enters the teletransporter. Her body is scanned. Then, at the instant her body is vaporized, the information about her molecular state is beamed to two locations, Los Angeles and San Francisco. In each city, a molecule-for-molecule duplicate of Pre is created. Call the one in Los Angeles “Lefty” and the one in San Francisco “Righty.” Lefty and Righty are each qualitatively just like Pre is before her body is vaporized.

3 Double Teletransportation is closely related to Fission, in which one person undergoes a surgery in which each half of her brain is transplanted into a body which also contains a physical duplicate of the other half of her brain. The two cases raise much the same issues.


If only one of Lefty and Righty existed, then it is plausible that Pre would survive the teletransportation. This is especially clear for defenders of the psychological criterion (defenders of the physical criterion will likely say that Pre doesn’t survive, for the same reasons they do not think that you can survive regular, non-branching teletransportation). Lefty and Righty are each psychologically continuous with Pre, so if psychology is what matters for identity, each of them individually fits the bill. The trouble is that in Double Teletransportation, Pre’s psychology branches. Given this, is Pre identical to Lefty, to Righty, to both, or to neither?

There are no good grounds for saying that Pre is identical to only one of Lefty and Righty but not the other, for the cases are perfectly symmetric. And Pre seemingly cannot be identical to both Lefty and Righty, since the identity relation is transitive and Lefty and Righty are not identical to each other. If Lefty is identical to Pre, and Pre is identical to Righty, and if identity is transitive, then Lefty is identical to Righty, but this is plainly false. They are different people. Finally, we could say that Pre is identical to neither Lefty nor Righty. Double Teletransportation results in Pre’s death and the creation of two new people. But this is also problematic. For if only one of Lefty and Righty were created, then Pre would survive and hence be identical to the resultant person. In the words of Parfit (1984), “How could a double success be a failure?” Williams () also seems to make the plausible claim that whether a later person is identical to an earlier person should depend only on the intrinsic features of the relation between them, and not on what happens with other people. If so, it is hard to see how Pre could fail to be identical to Lefty, since if we ignore Righty, Lefty has everything it takes to count as the same person as Pre. And similarly, mutatis mutandis, for Righty.

But we have not yet canvassed all the possibilities. Perhaps Pre is still around after Double Teletransportation but has a divided mind (see Parfit (1984) for criticism of this proposal). She has Lefty and Righty as parts without being identical to either, just as I have my right hand and my left hand as parts without being identical to either. There are cases where one person may have a somewhat divided mind, such as epilepsy sufferers who undergo an operation partially separating the two hemispheres of the brain involving the cutting of the corpus callosum.4 But could a single person have such a radically divided mind as this? The degree to which Pre would have to have a divided mind in order to have Righty and Lefty as parts is much, much greater than the degree to which epilepsy sufferers with a cut corpus callosum have a divided mind.

This case will be discussed further in Chapter .

And lastly, Lewis () argues that prior to Double Teletransportation, there are really two people who are temporarily co-located. There is one person who, after Double Teletransportation, survives as Lefty, and another who survives as Righty. But prior to Double Teletransportation, they occupy the same body.

These puzzle cases clearly show that it is no easy task to come up with a satisfactory criterion of personal identity over time. But in my view, if we consider an agent in one of these puzzle cases, such as Lefty in Double Teletransportation, once we have specified what her total evidence is, we have settled all that needs to be settled in order to determine what she ought to believe (and once we have also specified what her preferences are, we have settled all that needs to be settled in order to determine how she ought to act). We do not also need to settle the controversial facts about personal identity over time. Similarly, in Teletransportation, we do not have to determine whether you survive teletransportation in order to determine what you, prior to entering the machine, or the person who exits the machine in Sydney (who may or may not be you), ought to believe. And in the middle cases of the Combined Spectrum, we do not have to determine whether there is enough physical and/or psychological continuity for the post-operation person to be Parfit in order to determine what he or she ought to believe.

In the puzzle cases, the facts about rationality can be crystal clear even when the facts about personal identity over time are murky. This suggests that what rationality requires of someone does not depend on these identity facts. One major reason for this is that facts about personal identity over time are (typically) not evidentially relevant (nor are facts about surrogate notions like psychological continuity, for that matter), whereas what you ought to believe depends only on evidential considerations. Of course, without yet considering any specific potential norms of rationality, you may not already have the intuition that in these puzzle cases we can settle facts about rationality without settling facts about personal identity. But I hope that once we start looking at particular norms, starting with Conditionalization in Chapter , you will share this judgment.

If I am right, and such metaphysical facts about identity are irrelevant to the question of how one rationally ought to be, then the theory of rationality must avoid being held hostage to such problematic metaphysical questions. It should countenance only principles that are impersonal. Time-Slice Rationality does just this. By focusing on the time-slice, rather than the temporally extended person, as the locus of rationality, Time-Slice Rationality entails that we do not have to settle the metaphysical facts about identity in order to settle the normative facts about rationality. This is an appealing feature.5

5 I should acknowledge that there is a trivial sense in which facts about the right theory of personal identity over time might affect what you rationally ought to believe. Facts about the right theory of identity might affect what you ought to believe about identity. It is a metaphysical question what the correct theory of personal identity is, and if metaphysics is an a priori discipline, then the correct theory of personal identity should be necessary and a priori knowable. But then there is arguably some sense in which you ought to believe the correct theory of personal identity, whatever it is. After all, it is a priori that it is entailed by your evidence (since it is a priori that it is a necessary truth) while its negation is incompatible with your evidence. But this is a trivial sense in which what you ought to believe might depend on facts about the right theory of identity; it does not threaten the claim that the metaphysics of personal identity over time is irrelevant to what you ought to believe about propositions not about identity.

Note that I have not denied that there is a fact of the matter about who is identical to whom in the various puzzle cases, but only that neither the agents themselves, nor we the theorists, need to settle these identity facts in order to determine what the agent ought to believe, desire, or do. But the case for Time-Slice Rationality would be bolstered still further if we could draw a stronger conclusion from the puzzle cases, namely that our concept of personal identity is in some sense inconsistent. It carries commitments that cannot be satisfied all at once. In Lewisian () terms, it may be that there is nothing that can satisfy, or even come close enough to satisfying, the theoretical role of personal identity, and so there simply isn't any relation in the world that could be the relation of personal identity over time. On this view, talk of personal identity over time is akin to talk of phlogiston. Scientists thought that they were really succeeding in referring to something when they used the term "phlogiston," but in fact there was nothing in the world to answer to their concept.6 Clearly, if the concept of personal identity is inconsistent, then requirements of rationality should avoid reference to it. What you ought to do or believe should not depend on facts about identity over time, since there are no such facts. (This would support but not quite entail Time-Slice Rationality, since one might attempt to replace reference to personal identity over time with some surrogate notion such as psychological continuity. I delay discussion of this sort of gambit until Chapter , Section ., when we can test its viability in the context of a particular purported requirement of rationality, namely Conditionalization.) But again, let me emphasize that this stronger conclusion, while congenial to my view, is not necessary for it.

6 Parfit (, –) suggests such a view, arguing that there are extremely plausible requirements that a criterion of identity over time must meet but which cannot be satisfied together. I will not rehearse his argument here, but will only note that such an argument might support the conclusion that our concept of personal identity is inconsistent.

It should be more than clear from the foregoing that Time-Slice Rationality is not a metaphysical theory. Instead, it is compatible with any criterion of personal identity over time out there.7 Nor is it a theory about what we are; it does not say that you are your present time-slice, for example. Likewise, it is not a semantical theory. It does not say, for instance, that in sentences of the form "S ought to φ," "S" refers to S's present time-slice. Instead, Time-Slice Rationality is only a normative theory about the structure of requirements of rationality. The title of the book, Reasons without Persons, is meant only to deny that personal identity over time plays an important role in the theory of rationality, not that there are no such things as persons.

7 I have heard it objected that Time-Slice Rationality must somehow be incompatible with the aforementioned psychological criterion of personal identity over time (Antony Eagle and Hilary Greaves suggested this objection, without endorsing it). But it is unclear what exactly the worry is. Perhaps the thought is that in order for psychological continuity to be constitutive of personal identity over time, we need norms requiring your attitudes to cohere with each other in a certain way over time. But this worry is misguided. The psychological criterion says that a later person is identical to an earlier person just in case they are psychologically continuous. The psychological criterion thus makes no mention of rational requirements. It might be that a person's attitudes are in fact related in a certain way to her earlier attitudes without there being rational requirements which say that her attitudes ought to be so related to her earlier attitudes.

I close this section with a brief aside about other normative domains. Parfit () famously argued that personal identity is not "what matters." Parfit was specifically concerned with what matters in survival. How should you feel when confronted with the prospect of Double Teletransportation, say? Should you regard it as about as good as ordinary survival, or as about as bad as death, or something in between? Parfit argues that how you should feel about such prospects does not depend on whether there will later be someone to whom you bear the relation of identity, but rather on whether there will be someone with whom you are psychologically continuous.

I am arguing, in effect, for a more sweeping claim. Neither personal identity nor surrogate notions such as psychological continuity are "what matter" in determining what rationality demands of you. What you ought to believe, desire, or do at a given time does not depend on facts about personal identity or R-relatedness except insofar as these facts affect your present evidence or your present desires.

Time-Slice Rationality is a theory about one normative domain: rationality. It does not say anything directly about other normative domains like morality. But it might be objected that a time-slice-centric picture is not at all plausible in the case of morality and that this constitutes an objection to my theory of rationality. Why think that morality must rely on the relation of personal identity over time? Consider a norm against breaking promises. You only ought to fulfill promises that were made by you in the past; you are under no obligation to fulfill promises made by third parties. Consider also the doctrine of the separateness of persons.8 It is permissible to hurt a child by giving her a vaccination, since it is she who will later benefit. The fact that she will later benefit compensates her for her earlier pain. But according to the doctrine of the separateness of persons, it is not always permissible to harm one person for the sake of another's benefit, even if the benefit is greater than the harm. I cannot trade your interests off against another's, since the other's benefit will not be compensation for your harm.

8 See e.g. Rawls () and Nozick ().

Do considerations about morality cast doubt on Time-Slice Rationality? No. First, and most importantly, we must be exceedingly careful about inferring anything about rationality from claims about other normative domains. Morality is different from rationality in many other respects, so it may simply be that a time-slice-centric picture of rationality is plausible while a time-slice-centric picture of morality is not. Of course, one might attempt to argue that the theories of rationality and morality should be structurally similar in certain ways and so those sympathetic to time-slice-centric theories of rationality should also be sympathetic to time-slice-centric theories of morality, and vice versa. But the inference from time-slice-centrism about rationality to time-slice-centrism about morality is far from immediate and may plausibly be resisted. This is itself sufficient to stave off the objection from morality.

But second, it is far from clear that a time-slice-centric picture of morality really is implausible. It is not clear that moral norms must make reference to the relation of personal identity over time. Some moral theories, most notably Utilitarianism, place little or no importance on personal identity over time. For Utilitarians, all that matters is the total amount of happiness in the world (or, more accurately, the total amount of utility, usually understood as happiness minus suffering). It does not matter how that happiness is distributed across people or across times. So Utilitarians reject the claim that it is impermissible to harm one person to give a greater benefit to another, just as most non-Utilitarians reject the claim that it is impermissible to harm an earlier person to give a greater benefit to her later self.9

9 Another consistent position which places no importance on personal identity over time would prohibit both interpersonal trade-offs and intrapersonal trade-offs. It would prohibit harming one person in order to give a greater benefit to another person, but it would also prohibit harming a person at one time in order to give a greater benefit to her in the future. Such a view would, for instance, prohibit vaccinating a child, causing her pain now in order to allow her to be healthy in the future. Such a position may be implausible, but it is consistent.

Indeed, Utilitarianism can be motivated by appeal to puzzle cases for personal identity and the concomitant suspicion that personal identity cannot be of great normative import. Utilitarians are often accused of falsely believing that mankind is some kind of "super-person," to quote Gauthier (, ). If mankind were a super-person, then one person's well-being could be traded off against another's well-being just as one person's well-being can be traded off against her own future well-being. But Utilitarianism does not rely on this (obviously false) assumption. Instead, it can be motivated by a rejection of the claim that facts about personal identity over time are deep and important. As Parfit (, –) writes, "the Utilitarian View may be supported by, not the conflation of persons, but their partial disintegration. Utilitarians may be treating benefits and burdens, not as if they all came within the same life, but as if it made no moral difference where they came."

I want to reiterate, however, that my view, which is exclusively about rationality, does not depend on endorsing Utilitarianism or any other theory of morality. Utilitarianism is a time-slice-centric theory of morality (though not the only one), but Time-Slice Rationality is, as the name suggests, only a theory about rationality and hence can be paired with any theory of morality whatsoever. So non-Utilitarians need have no quarrel with my theory of rationality. Considerations about the relevance of personal identity to other normative domains simply do not bear directly on the plausibility of Time-Slice Rationality.

. Internalism

The considerations marshalled in the previous section regarding personal identity over time are by themselves sufficient to motivate Time-Slice Rationality, at least assuming that we cannot replace personal identity over time with a surrogate notion like psychological continuity, a possibility I address in Chapter . If we agree that requirements of rationality should not make reference to the relation of personal identity over time (or surrogate notions)—that they should be impersonal—then we should likewise agree that what attitudes you ought to have at a time does not depend on your attitudes at other times (Synchronicity) and that your beliefs about what attitudes you have at other times play the same role as your beliefs about what attitudes other people have (Impartiality). But the case for Time-Slice Rationality can be bolstered further, in particular by an appeal to a very moderate form of internalism about rationality. This internalism supports the case for Synchronicity, but it does not have any bearing on Impartiality. So internalism supports one half of Time-Slice Rationality.

Before going on, however, let me emphasize that Time-Slice Rationality is, strictly speaking, compatible with both externalism and internalism. I think that internalism is a powerful motivation for Time-Slice Rationality, but externalists can also buy into Time-Slice Rationality just by holding that facts about your past are not among the external factors that affect how you ought to be now. Externalists could be drawn to Time-Slice Rationality by the considerations about puzzle cases for personal identity over time, for instance. It is just that internalists have an additional reason for adopting Time-Slice Rationality. Onward then.

Time-Slice Rationality can be motivated by a moderate and plausible form of internalism which states that what rationality requires of you supervenes on your mental states.

To distinguish this view from other, more extreme forms of internalism (which I will discuss shortly), I will give it a new name:

Mentalist Internalism
What you rationally ought to believe, desire, or do supervenes on your mental states.10

Mentalist Internalism does a good job of capturing my initial gloss on rationality from Chapter , namely that being rational is a matter of believing and behaving in ways that are sensible, given your perspective on the world. This is because your perspective on the world is constituted by your mental states. (Importantly, these mental states can include beliefs about your past and future attitudes.11) Moreover, I think it may even be part of the theoretical role of our concept of a mental state, part of the reason we have a concept of mental states in the first place, that they and only they are the sorts of things that can affect what you rationally ought to believe, desire, or do. Physical states, such as the makeup of your ocular system, can affect whether your beliefs are true or reliably formed, but they do not directly affect whether your beliefs are rational. Only mental states can affect whether your beliefs and actions count as rational or irrational.

I submit that Mentalist Internalists should be sympathetic to Time-Slice Rationality, since your perspective on the world is the perspective of your present self. As Williams (, ) wrote in another context, "The correct perspective on one's life is from now." And your perspective on the world at any particular time is constituted by your mental states at that time. Your past beliefs, say, partly constitute your previous perspective on the world, but they do not partly constitute your current perspective on the world. For this reason, I think that if we really take seriously the idea that what rationality demands of you depends only on your perspective on the world, then we should think that what you rationally ought to believe, desire, or do supervenes on your present mental states.

In the epistemic case, Mentalist Internalism would result from combining Evidentialism (the view that what you rationally ought to believe depends only on your evidence) with the view that what your evidence is supervenes on your mental states.12

10 Compare Broome's (, ) claim that "rationality supervenes on the mind." Broome states that "If your mind has the same intrinsic properties (apart from the property of rationality) in one situation as it has in another, then you are rational in one to the same degree as you are rational in the other."
11 Bratman () talks of an agent's practical standpoint, and argues that this standpoint will include facts about an agent's temporally extended plans. I am sympathetic to this thought, provided we think of the standpoint as including not plans that she has at other times, but only plans the agent presently has (these plans may be plans to do things in the future), as well as beliefs about what plans she has had in the past or will have in the future.
12 See Feldman and Conee () for a seminal defense of Evidentialism. Feldman (, ) writes that "Evidentialism is best seen as a theory about synchronic rationality."

In the Evidentialist slogan that your beliefs ought to be proportioned to your evidence, "your evidence" is most naturally interpreted as referring to your present evidence. After all, why should your beliefs be proportioned to evidence that you had in the past but have since lost, or to evidence that you have not yet encountered? Then, it is a synchronic matter whether your beliefs at a time are proportioned to the evidence you have at that time. Indeed, the synchronic principle I propose in Chapter  (as a replacement for the diachronic principle of Conditionalization) can be seen as an attempt to make precise this claim that your beliefs ought to be proportioned to your evidence. Of course, Evidentialism is only a view about the rationality of beliefs, while Time-Slice Rationality is a theory of the rationality of preferences and actions as well. This is why I put the emphasis not on Evidentialism so much as on Mentalist Internalism, which is a theory of rationality in general, not just epistemic rationality.

Mentalist Internalism suggests that facts about your past attitudes and actions can affect what you ought to believe, desire, or do now, but only by way of affecting your present mental states. In determining how you rationally ought to be now, the relevance of facts about your past selves is screened off by facts about your present mental states. For instance, the fact that you spilled juice on the rug this morning does make it the case that you now ought to go to the store to purchase cleaning supplies. But it does so in virtue of the fact that it caused you to now have attitudes—the desire that the rug be clean and the belief that the best means of getting the rug clean involves buying cleaning supplies—which make it rational for you to go to the store now. More generally, facts about the past are relevant to how you rationally ought to be now only in virtue of and insofar as those facts affect your present mental states.

This thesis can be bolstered by the claim that internalists should treat memory and perception in parallel ways. Just as facts about the external world and about the reliability (or unreliability) of your perception should affect what you ought to believe about the external world only insofar as they affect your mental states, so facts about the past and about the reliability (or unreliability) of your memory should affect what you ought to believe now only insofar as they affect your present mental states.

Take the New Evil Demon Problem (Cohen ()). Consider a brain in a vat (BIV) who has exactly the same perceptual experiences as you do, and who is otherwise exactly the same as you with respect to how things seem "from the inside." If you and the BIV are in the same mental state, then you ought to have exactly the same beliefs.13

Both you and the BIV, upon having a perceptual experience as of your having hands, ought to believe that you have hands, even though your perceptual experiences are veridical while the BIV's are not. If this is correct, what you ought to believe depends on your mental states, not on whether your perceptions are in fact veridical.

A case analogous to that of the BIV can be used to support the claim that what you ought to believe does not depend on what attitudes you in fact have at other times except insofar as they affect your present mental states. For what goes for perception also goes for memory. Consider Davidson's () Swampman, created when a lightning bolt causes a bunch of molecules to spontaneously arrange themselves into a human form. Let us modify his original case so that Swampman comes to have the same apparent memories as you have, including memories as of having had certain beliefs in the past. For instance, just as you remember once believing that you visited Disneyland as a child, so the creature seems to remember having once believed it had been to Disneyland as a child (though it in fact did not previously have such a belief). Now, if you and this creature count as being in the same mental states, I think that the two of you ought now to have the same beliefs, even though your apparent memories are veridical whereas those of the creature are not.

This judgment supports Synchronicity. It suggests that what you ought to believe, desire, or do does not depend on how you were in the past, except of course insofar as your past causally affects what mental state you are in now.

Now, some epistemologists deny that, at least in standard ways of setting up these cases, the BIV or Swampman really are in the same mental states as you. For instance, Putnam () has argued that a BIV could not be in the same mental states as you, since it could not possess the same concepts as you. Being able to have beliefs about hands, say, requires having had some past causal interaction with hands, but since the BIV lacks hands, and has never interacted with other language users who themselves have hands, the BIV cannot have beliefs about hands. Similarly, Davidson argues that Swampman would actually have few if any of your actual beliefs, since Swampman's words and thoughts would not refer to anything at all. As Davidson argues, "It can't mean what I do by the word 'house', for example, since the sound 'house' it makes was not learned in a context that would give it the right meaning—or any meaning at all. Indeed, I don't see how my replica can be said to mean anything by the sounds it makes, nor to have any thoughts" ().

13 At least, you and the BIV ought to have the same perceptual beliefs; whether you ought to have the same beliefs about e.g. a priori matters is a separate question.

Even if we set up the case so that the BIV and Swampman possess all the requisite concepts,14 some epistemologists (see esp. Williamson ()) will still deny that they are in the same mental states, because factive states like knowledge and memory are genuine mental states, and you know and remember things that the BIV and the Swampman only seem to know and seem to remember. I have no quarrel with these epistemologists, and indeed I am sympathetic to their views. First, let me emphasize that my aim in discussing these two cases—of the BIV and Swampman—is simply to draw out a parallel between perception and memory, and to say that they should be treated analogously. In particular, in both cases Mentalist Internalists should say that whether your perception is reliable, and whether your memory is veridical, should affect what you ought to believe now only in virtue of affecting your present mental states. This is compatible with the claim that you and the BIV, or you and Swampman, are not in fact in the same mental states after all.

Second, now is a good time to emphasize that Mentalist Internalism is a very moderate form of internalism, and one that is quite compatible with the verdict that you and the BIV, or you and the Swampman, ought to have different beliefs. Indeed, we shall see that it is so moderate that even Williamson, often considered a paradigmatic externalist, nonetheless also counts as a Mentalist Internalist.

Pace Barry Goldwater, extremism is a vice, in philosophy as elsewhere. Extreme forms of internalism are extremely problematic. Consider an internalism which says that what you ought to believe (or desire or do) supervenes on your intrinsic physical properties. Physical duplicates ought to have the same beliefs. But this internalism about rationality is incompatible with externalism about content, the widely held view that the contents of your mental states—what they are about—depend not only on your intrinsic physical properties, but also on facts about your environment (e.g. facts about your causal interactions with the world and usage facts in your linguistic community).15 If what your beliefs are about, and indeed what it is even possible for you to have beliefs about, does not supervene on your intrinsic physical properties, then obviously what you ought to believe likewise does not supervene on your intrinsic physical properties.16

14 To do so, we can imagine a recently envatted brain, who possesses the requisite concepts in virtue of having had prior causal contact with the relevant objects in the world. Similarly, we can imagine a Swampman who has been allowed to wander around for a time in order to pick up the requisite concepts.
15 This was the point behind Putnam's take on the BIV case and Davidson's take on Swampman, considered above. See also Kripke (), Putnam (), and Burge () for the main motivations for content externalism.
16 It might be thought that there must be some common core which unites the beliefs of all physical duplicates, even if their differing environments make the contents of their beliefs different. But much work in philosophy of mind suggests that it is doubtful whether this common core is a belief (or a mental state at all), as opposed to just your shared brain state. See Fisher (), among many others, for discussion.

A different extreme form of internalism would hold that what you ought to believe (or desire or do) supervenes on some special class of mental states to which you have perfect access. There is a class of mental states such that whenever you are in them, you are in a position to know that you are in them. Perhaps experiential states, such as the state of seeming to see something red, qualify. But as noted in Chapter , Williamson (, ch. ) argues that there are no states that are luminous, in the sense that whenever you are in them, you are in a position to know that you are. Even the state of seeming to see something red is like this. You might seem to see something red without being in a position to know that you seem to see something red.17

Williamson's argument is controversial. Many philosophers reject the assumptions about knowledge on which Williamson relies. And even if Williamson is right, his argument is concerned specifically with knowledge. It does not show that we lack exceptionally keen powers of introspection which would lead to your being perfectly reliable in forming true beliefs about whether you seem to be seeing something red. But even though Williamson does not even purport to show that we lack such powers of introspection, why think that we have them, or even that it is possible for a being to have them? Any form of internalism which is committed to such introspective capabilities is, at the very least, out on a limb.

But Mentalist Internalism avoids any such extreme commitments. It does not depend on the assumption that you have some infallible introspective access to your mental states, or that those mental states supervene on your intrinsic physical properties, or that mental states must be non-factive. If mental states fail to supervene on intrinsic physical properties, then Mentalist Internalism says that what you rationally ought to believe or do may likewise fail to supervene on intrinsic physical properties. And if you lack infallible access to your mental states, then Mentalist Internalism says that what you ought to believe may depend on facts to which you do not have infallible access.

Because of its moderate character, while Mentalist Internalism will conflict with certain forms of externalism (e.g. Goldman's () Reliabilism), it is compatible with other views which would not ordinarily be considered internalist.

17 I will not rehearse Williamson's Anti-Luminosity argument here, but a central point concerns borderline cases. If the color you seem to be seeing is red, but only just, so that it is very close to being orange, then a large portion of your confidence that you seem to be seeing something red is unreliably based, and so your belief is unsafely held. Because you are close to the borderline, it is impossible to place a high degree of confidence in the claim that you seem to be seeing something red without a large portion of that confidence being misplaced, and this prevents you from being in a position to know that you seem to be seeing something red.

Williamson () argues that your evidence consists of all and only the things that you know. If this is right, and if what you ought to believe depends on your evidence, then what you ought to believe also depends on what you know.18 Williamson also argues for the claim that knowledge (along with some other factive attitudes) is a mental state.19 If he is correct, then his knowledge-first approach qualifies as a version of Mentalist Internalism, even though few would class him as anything like an orthodox internalist. I myself am neutral about whether knowledge is a mental state. My commitment is only to Mentalist Internalism. Only mental states affect what you rationally ought to believe or do.

And Mentalist Internalists, in my view, should be sympathetic to Time-Slice Rationality. Facts about your past attitudes are on a par with facts about the external world or the reliability of your perceptual faculties; they affect how you ought to be now only insofar as they affect your present mental states. This is because rationality is a matter of believing and behaving sensibly given your perspective on the world. If we then hold, as I think we should, that your perspective on the world is the perspective of your present self, and that your perspective on the world is constituted by your mental states, then we get the result that rationality is a matter of believing and behaving sensibly, given your present mental states. In this way, Mentalist Internalism supports Time-Slice Rationality, which holds that all requirements of rationality are synchronic. Importantly, however, if you are repelled by even the moderate form of internalism I have outlined, the considerations about personal identity are by themselves sufficient to motivate Time-Slice Rationality.

18 Williamson avoids talk of rationality and of what you ought to believe, so it is not clear that he would endorse what I am saying here.
19 Williamson's argument that knowledge is a mental state is primarily negative, based on the claim that there are no good reasons for excluding knowledge from the class of mental states. First, one might think that knowledge cannot be a mental state because whether you know something does not supervene on your physical properties. But content externalism shows that this is true of all contentful mental states. Second, one might think that knowledge is not a mental state since you are not always in a position to know whether or not you know some proposition. But if Williamson's Anti-Luminosity argument is sound, this is true of all mental states. I think that Mentalist Internalism properly frames the debate over whether knowledge is a mental state. If knowledge is like paradigmatic mental states in playing a fundamental role in the theory of rationality, then there is no good reason not to grant it full membership in the society of mental states. But if knowledge plays no role beyond that played by the beliefs which constitute that knowledge, then this seems like a good reason to deny that it is a mental state.

Against Diachronic Principles

. Introduction

Diachronic principles concern how you should change your attitudes over time. They say how your attitudes now should fit with the attitudes you had in the past (or will have in the future). They contrast with synchronic principles, which say how your attitudes at a single time ought to be, without reference to the attitudes you have at other times. Most theorists of rationality accept both synchronic and diachronic principles. Other theorists, e.g. Lam (), think that there are only diachronic principles; rationality is only concerned with how you proceed and change your beliefs and desires over time, and not with how you are at particular times. Time-Slice Rationality, by contrast, espouses Synchronicity—the claim that there are only synchronic principles of rationality. In this chapter, I consider and reject diachronic principles both for credences and for desires or preferences. I begin with the former, epistemic case.

. Against Conditionalization

The most widely endorsed diachronic principle is Conditionalization, which is a principle about how to change your credences over time. It states that when you learn some proposition, your new credences should equal your old credences conditional on the proposition you just learned.1 More formally, where P₀ is your credence function before becoming certain of evidence proposition E and P₁ is your credence function after becoming certain of E (and nothing stronger), we have:

Conditionalization
It is a requirement of rationality that, for all H, P₁(H) = P₀(H | E)

Conditionalization amounts to the claim that upon becoming certain of E, your conditional credences on E (in symbols, P(· | E)) must stay the same.
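To make the rule concrete, here is a minimal executable sketch of Conditionalization over a finite set of "worlds" (an invented illustration only; the worlds, numbers, and function names are not from the text):

# Propositions are modeled as sets of worlds; a credence function is a
# dict mapping each world to its probability.

def credence(P, A):
    """Unconditional credence in proposition A."""
    return sum(p for w, p in P.items() if w in A)

def conditional_credence(P, A, E):
    """Credence in A conditional on E, i.e. P(A and E)/P(E)."""
    return credence(P, A & E) / credence(P, E)

def conditionalize(P, E):
    """New credence function after becoming certain of E (and nothing
    stronger): worlds outside E drop to 0; worlds in E are renormalized."""
    pE = credence(P, E)
    return {w: (p / pE if w in E else 0.0) for w, p in P.items()}

P0 = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}   # prior credences over four worlds
H, E = {1, 2}, {1, 3}                   # a hypothesis and an evidence proposition
P1 = conditionalize(P0, E)

# The new unconditional credence in H equals the old credence in H
# conditional on E, as the principle requires: 0.25 on both sides.
assert abs(credence(P1, H) - conditional_credence(P0, H, E)) < 1e-9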

1 There are two subtly different ways of cashing out Conditionalization and other diachronic principles—a narrow-scope way and a wide-scope way. See Broome () and Kolodny (b) for discussion of this important distinction. Suppose it is irrational to be in state S₁ at t₁ but to fail to be in state S₂ at t₂. So, if you are rational at all times, you will satisfy the conditional, "If you are in S₁ at t₁, then you are in S₂ at t₂." But this conditional is not yet a norm, for it has no ought claim. But there are two possible places to stick the ought. We could have our ought take scope just over the consequent of the conditional (narrow-scope) or over the conditional as a whole (wide-scope). Narrow-scope and wide-scope norms differ in cases where you are in S₁ at t₁, but it's not the case that you ought to be in S₁ at t₁, whether because you actually ought not be in S₁ or simply because it's permissible not to be in S₁. In those cases, the narrow-scope diachronic norm will say that you ought to be in S₂ at t₂, since the mere fact that you are in S₁ at t₁ makes it the case that you ought to go on to be in S₂. The wide-scope norm has no such implication. It merely says that you ought to satisfy the conditional "If S₁ at t₁, then S₂ at t₂." But it is possible to satisfy the conditional in multiple ways—you could make the antecedent and consequent both true, or you could make the antecedent false. The two sorts of norms also differ in a case where you ought to be in S₁ but aren't. Assume orthodox deontic logic, which is the modal logic D, whose semantics are given by a possible worlds semantics with an accessibility relation that is serial, so that for every possible world, there is at least one world accessible from it. In this deontic logic, we get Oφ ∧ O(φ ⊃ ψ) |= Oψ, but also Oφ ∧ (φ ⊃ Oψ) ⊭ Oψ. The wide-scope principle says that if you ought to satisfy the antecedent, then you ought to satisfy the consequent even if you fail to satisfy the antecedent, whereas the narrow-scope principle only says that you ought to satisfy the consequent if you in fact satisfy the antecedent. Importantly, the considerations I discussed in Chapter  and raise in more detail below—namely personal identity puzzle cases and internalism about rationality—apply to both narrow-scope and wide-scope diachronic principles. Whether narrow-scope or wide-scope, diachronic principles require us to settle irrelevant facts about personal identity over time in order to determine whether they apply in a given case. If they are narrow-scope, then whether someone at t₂ ought to satisfy the consequent of the conditional depends on whether she is the same person as an earlier person who was in S₁ at t₁. If so, then the person at t₂ ought to satisfy the consequent of the conditional; if not, not. And if diachronic principles are wide-scope, then whether an earlier person at t₁ and a later person at t₂ ought to make true the conditional depends on whether they are the same person. Either way, diachronic principles depend on problematic facts about personal identity over time. Now take internalism. If diachronic principles are narrow-scope, then whether a later person at t₂ ought to make true the consequent of the conditional depends on a fact about her t₁ state to which she has no access; facts about her past state are not part of her perspective on the world. What if diachronic principles are wide-scope? Well, consider things from the perspective of an agent at t₂. If she ought to have been in S₁ at t₁, then, as we have seen, she ought (narrow-scope) to be in state S₂ at t₂. But whether it was the case earlier that she ought to have been in S₁ is not something to which she has access now. For whether at t₁ she ought to have been in state S₁ depended, presumably, on what evidence she possessed at t₁ (perhaps among other things), and this may be something of which the agent is justifiably ignorant at t₂. So again, if diachronic principles are wide-scope, then there will still be cases in which what state an agent ought to be in at a later time depends on facts about her earlier self to which she has no access at that later time. More generally, whether an agent satisfies something like our conditional above is a matter of how she is over extended periods of time. Satisfying such a conditional requires the cooperation of her various selves at different instants during that extended period of time. But these various selves may not have access to facts about what the others have done or will do. So at different instants during that extended period of time, an agent's various time-slices may not have access to what is required in order to satisfy the conditional as a whole. Wide-scope diachronic principles concern how you ought to be over time, but your perspective on the world is always grounded in a particular time. So whether narrow-scope or wide-scope, diachronic principles run into trouble with internalism. For this reason, I am sloppy with the distinction in the main text.

Conditionalization preserves conditional credences on the evidence. In the terminology of Weisberg (a), Conditionalization is rigid. Proof:

P₁(H | E) = P₁(H ∧ E)/P₁(E)
= P₀(H ∧ E | E)/P₀(E | E)    [since P₁(·) = P₀(· | E)]
= P₀(H ∧ E ∧ E)/P₀(E)    [since P₀(E | E) = 1]
= P₀(H ∧ E)/P₀(E)
= P₀(H | E). Q.E.D.
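The algebra can also be checked by brute force, continuing the toy Python sketch given after the statement of Conditionalization above (again an invented illustration, reusing P0, P1, E, and conditional_credence from that sketch):

from itertools import chain, combinations

# Every non-empty proposition A over the four worlds.
worlds = sorted(P0)
propositions = chain.from_iterable(
    combinations(worlds, r) for r in range(1, len(worlds) + 1))

# Rigidity: P1(A | E) = P0(A | E) for every proposition A.
for A in map(set, propositions):
    assert abs(conditional_credence(P1, A, E)
               - conditional_credence(P0, A, E)) < 1e-9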

If your conditional credences on E stay the same once you become certain of E, your new unconditional credences must equal your old conditional credences on E. So interestingly, Conditionalization says what must change by saying what must stay the same. Some of your credences—your conditional credences on the evidence—must stay the same, and this entails how the rest of your credences must change when you learn something.

Conditionalization is now a standard part of Bayesian epistemology. For instance, Greaves and Wallace (, ) write that, "According to Bayesian epistemology, the epistemically rational agent updates her beliefs by conditionalization." And in the Stanford Encyclopedia of Philosophy entry on Bayesian Epistemology, Talbott (), we find:

So, Conditionalization is widely regarded as one of the main components of the Bayesian view of epistemic rationality. But despite its initial plausibility and widespread acceptance, I argue that Conditionalization must ultimately be rejected.

.. Conditionalization and Personal Identity Conditionalization is not an impersonal principle. It makes reference to the relation of personal identity over time. It says how your credences after learning some proposition should be related to your credences before learning that proposition. Because of this, Conditionalization runs into trouble in cases where the facts about personal identity are unclear. Consider the case of Double Teletransporation from Chapter , in which Pre enters the teletransporter and, at the instant her body is vaporized, moleculefor-molecule duplicates—Lefty and Righty—are created in Los Angeles and San

i

i i

i

i

i

OUP CORRECTED PROOF – FINAL, //, SPi i

i

 against diachronic principles Francisco, respectively. Suppose also that Pre knows that when Lefty and Righty appear, each will have her eyes open and will see that the walls around her are a particular color. And suppose that Pre thinks that the colors Lefty and Righty see provide evidence about what the chief mad scientist’s alma mater was. Pre’s credence that the mad scientist attended Harvard, conditional on at least one of Lefty and Righty seeing crimson, is very high. And similarly for a number of other prominent medical schools: Pre’s credence that the mad scientist went to school X, conditional on one of Lefty and Righty seeing school X’s colors, is high. Now suppose Pre enters the machine, her body is vaporized, and the next instant Lefty appears in Los Angeles and sees crimson. Ought Lefty come to have high credence that the mad scientist is a Harvard alum? If Conditionalization is right, then it depends. If Lefty and Pre are the same person, then Conditionalization says that Lefty indeed ought to have high credence that the mad scientist is a Harvardian. But if not, Conditionalization is silent, for it is as if Lefty just suddenly came into existence. So, for instance, Subjective Bayesians say that you can start off life (prior to gaining any evidence) with any credence function whatsoever, provided that it satisfies the axioms of the probability calculus. If this view is right, then if Lefty and Pre are not the same person, then Lefty can just pick any credence function whatsoever, update it on whatever her present evidence happens to be and move on. Her credences are not required to be related in any special way to those of Pre.2 This strikes me as the wrong result. What Lefty’s credence ought to be is entirely independent of these facts about identity. How confident she ought to be that the mad scientist went to Harvard does not depend on whether she is the same person as Pre. Instead, if Lefty herself has evidence that the room color she would see would be correlated with the mad scientist’s alma mater, then Lefty indeed ought to become very confident that the mad scientist went to Harvard. But if Lefty has no such evidence (e.g. if she has reason to think that Pre’s conditional credence was based on a whimsical, capricious notion about school spirit), then Lefty’s credence needn’t be constrained by what Pre used to believe. In either case, we need only look at Lefty’s own evidence to determine what credences she ought to have; we do not also need to settle facts about identity. Of course, it might be that the operation in fact leaves Lefty with the same credences as Pre. But I am asking the normative question of whether Lefty’s credences ought to be the same as Pre’s. And the answer to this normative question is not settled by the descriptive fact that their credences happen to be the same. 2 Note that the same point applies even on views less extreme than Subjective Bayesians, on which not all prior probability functions which satisfy the axioms are permissible. See Ch.  for discussion.

We can still ask: if the machine left Lefty with credences slightly different from those of Pre, would this mean that Lefty was in any way rationally less than ideal (even if blameless for this failure)?3 And of course the same point can be made with any of the other puzzle cases for personal identity over time.

In Chapter , I mentioned the possibility of modifying non-impersonal principles by replacing reference to personal identity with reference to some surrogate notion such as psychological continuity or Parfit's R-relatedness. Two time-slices are said to be psychologically connected if they meet some threshold level of psychological similarity. Two time-slices are said to be psychologically continuous if there is a chain of intermediate time-slices, each of which is psychologically connected to the adjacent ones in the sequence. And Parfitian R-relatedness is psychological continuity, where each time-slice in the chain is a cause of its successor's psychological states. Could we, then, modify Conditionalization to replace reference to personal identity with reference to R-relatedness, say? I am pessimistic.

First, there is a technical worry. Psychological continuity, and hence R-relatedness, comes in degrees. But it is difficult to see how we could modify Conditionalization so that it is sensitive to degrees of R-relatedness. Intuitively, it seems like the degree to which your current credences should be constrained by facts about some past time-slice's credences should be proportional to the degree to which you are R-related to that past time-slice. But it is difficult to see how this intuitive notion could be worked into a precise mathematical formula.4

But there is a deeper explanatory worry for this strategy of modifying Conditionalization. Just as Conditionalization faced the problem of explaining why facts about personal identity over time should affect what you ought to believe at a particular time, so such a modified principle will face the explanatory problem of saying why facts about who is R-related to whom are relevant to what you ought to believe.5 After all, just as facts about identity generally do not constitute evidence about the matters at hand (e.g. about the alma mater of the mad scientist), neither will facts about R-relatedness.

3 Note that there is also a further tension with internalism if you think that neither Lefty nor Righty is the same person as Pre. Suppose that Lefty does not know whether or not Righty was also created. If Righty was created, then Lefty is not the same person as Pre, but if not, then she is the same person as Pre (by many theorists' lights). So, if Conditionalization is correct, then what Lefty's credence ought to be depends on a fact to which she has absolutely no access, namely whether or not the scientists decided to create Righty in addition to Lefty. Obviously this conflicts with the internalist view that what you ought to believe depends only on facts to which you have some kind of access. I discuss Conditionalization and internalism in full shortly.
4 A possible further technical problem: Personal identity is supposed to be a one-one relation, but R-relatedness can be a many-one or one-many relation. And in a case of fusion, in which two people undergo operations so that they combine to form one person, later time-slices are R-related to multiple pre-fusion time-slices which are not R-related to each other. And it is difficult to see how one could modify Conditionalization so that instead of making reference to a one-one relation like personal identity, it instead made reference to a potentially many-one and one-many relation. Now, fusion cases are problematic. It is not clear how two persons could be "combined" to create a single person—which beliefs and desires from each input person would go into the output person? How could such a surgery go? But if they are possible, they pose a further problem for defenders of Conditionalization-like principles.

This explanatory challenge for defenders of Conditionalization (and modifications thereof) can be made vivid by considering a case from Christensen (). Christensen notes that Conditionalization embodies a kind of conservatism, saying that you have reason to stick with your old credences (in particular, your old conditional credences on the evidence) simply because they are yours. And this kind of conservatism is unmotivated. Whether you have reason to stick with your old beliefs depends only on truth-related matters such as whether you have evidence for them, whether they were reliably formed, etc. But whether the beliefs are yours is not a truth-related matter. Here is Christensen's case ():

The Ichthyologist
Suppose that I have a serious lay interest in fish, and have a fairly extensive body of beliefs about them. At a party, I meet a professional ichthyologist. Although I of course believe that she shares the vast majority of my beliefs about fish, I know that she can probably set me straight about some ichthyological matters. However, I don't want to trouble her by asking a lot of work-related questions. Fortunately, I have a belief-downloader, which works as follows: If I turn it on, it scans both of our brains, until it finds some ichthyological proposition about which we disagree. It then replaces my belief with that of the ichthyologist, and turns itself off.

If you have epistemic reason to stick with your old beliefs simply because they are yours, then you have some epistemic reason to decline to use the downloader—even if this reason to decline is outweighed by reasons to use it, such as the consideration that the ichthyologist is very reliable about matters involving fish. But Christensen (, ) argues that this is not right:

But the ichthyologist represents one end of a spectrum of cases. We can consider other agents whose fish-beliefs I have reason to think are a bit better informed, just as well informed, a bit less well informed, or much less well informed than mine are. And it seems to me that in any case in which I have reason to think the other agent is better informed, it will be epistemically rational to use the belief-downloader. When I have reason to think the other agent is less well informed, it will be epistemically irrational. And when my evidence indicates that the other agent is just as well informed as I am, it will be a matter of epistemic indifference whether I use the belief-downloader or not.

5 Of course, one might think that facts about R-relatedness, since they have to do with the etiology of your beliefs, are relevant to whether they are doxastically justified. I address doxastic justification in Chapter .

Now, I have some quibbles with Christensen's case. For instance, I am skeptical about whether actions (such as the act of using the belief-downloader) can be evaluated for epistemic rationality. But regardless, Christensen's case is illuminating. The basic point is that Conditionalization and other diachronic principles, including modifications of Conditionalization employing R-relatedness or the like, make what you ought to believe depend on factors that are entirely unrelated to truth. But what it is rational for you to believe depends only on factors having to do with evidence and truth.

.. Conditionalization and Internalism

Conditionalization conflicts with internalism about epistemic rationality. It entails that what rationality requires of you does not supervene on your present mental states. This is, of course, an instance of a more general conflict between internalism and diachronic principles. But there is also a more specific conflict between internalism and Conditionalization having to do with evidence loss. The remainder of this section is dedicated to exploring this issue and thereby further bolstering the case against Conditionalization. Consider the following case from Arntzenius ():6

Two Roads to Shangri-La
There are two paths to Shangri-La, the Path by the Mountains, and the Path by the Sea. A fair coin will be tossed by the guardians to determine which path you will take: if heads you go by the Mountains, if tails you go by the Sea. If you go by the Mountains, nothing strange will happen: while traveling you will see the glorious Mountains, and even after you enter Shangri-La, you will forever retain your memories of that Magnificent Journey. If you go by the Sea, you will revel in the Beauty of the Misty Ocean. But, just as you enter Shangri-La, your memory of this Beauteous Journey will be erased and be replaced by a memory of the Journey by the Mountains.
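Before assessing the verdicts on this case, it may help to see the intuitive answer as a simple Bayesian computation over the traveler's evidence upon arrival (an invented illustration; the numbers just encode the fair coin and the fact that both routes end with the very same apparent memory):

# Upon entering Shangri-La, your total relevant evidence is an apparent
# memory as of the Mountains, which the setup guarantees on EITHER route.
prior = {"Mountains": 0.5, "Sea": 0.5}        # fair coin toss
likelihood = {"Mountains": 1.0, "Sea": 1.0}   # P(apparent memory | route)

total = sum(prior[r] * likelihood[r] for r in prior)
posterior = {r: prior[r] * likelihood[r] / total for r in prior}
print(posterior)   # {'Mountains': 0.5, 'Sea': 0.5}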

Suppose you in fact travel by the Mountains. Intuitively, while en route you ought to be certain that you are going by the Mountains, but upon entering Shangri-La, your credence that you went by the Mountains should drop to 0.5, since (i) you have no evidence that suggests that your apparent memory of mountains is real rather than illusory, and (ii) whether your apparent memory would be real or illusory was determined by the toss of a fair coin. Note the internalist intuition here: that what you ought to believe depends on what your evidence is, and your evidence supervenes on your present mental states, which are the same no matter which route you took.7

6 Meacham () also uses this case to illustrate the conflict between Conditionalization and internalism.
Note the internalist intuition here: that what you ought to believe depends on what your evidence is, and your evidence supervenes on your present mental states, which are the same no matter which route you took.7 So you ought to be 0.5 confident that you traveled by the Mountains.

But Conditionalization says otherwise. Clearly, you ought to start off with credence 0.5 that you will travel by the Mountains. Then you enter the Mountains. Conditionalizing on this new evidence (the experience as of mountains), you come to have credence 1 (or nearly 1) that you are traveling by the Mountains. But upon entering Shangri-La, you do not gain any new evidence that bears on whether you traveled by the Mountains, and hence Conditionalization does not kick in. So, according to Conditionalization, you ought to just retain your credence 1 that you traveled by the Mountains. The problem is that you do not learn anything new that is evidentially relevant to the question of which route you took.

To bolster the intuition that this is the wrong result and that in fact you ought to wind up 0.5 confident that you went by the Mountains, look at things from the perspective of the three theoretical roles for the rational ought introduced in Chapter . First is the evaluative role. If you continued to have very high credence that you went by the Mountains, we would judge you harshly, even though you wound up with the right answer (as you in fact did go by the Mountains). We would think that you got lucky but were essentially guessing. Second is the predictive role. I think that we would predict that you would wind up with 0.5 credence that you went by the Mountains. Or at least, we would certainly not predict that you would wind up with credence 1 that you went by the Mountains in all the cases where you in fact went by the Mountains and credence 0 that you went by the Mountains in all the cases where you in fact went by the Sea. We would not predict that you would get things exactly right, given the setup of the case.

7 Note that disjunctivists such as McDowell () will object to my characterization of this case. It is compatible with (though not entailed by) disjunctivism that your mental state upon entering Shangri-La depends on whether you in fact traveled by the Mountains as opposed to the Sea. For instance, they might hold that in the former case, you remember going by the Mountains, whereas in the latter case, you merely seem to remember going by the Mountains, and these are importantly different mental states. Similarly, Williamson would hold that your mental state differs in the two cases, if in the former you know that you went by the Mountains while in the latter you merely believe that you went by the Mountains. However, it seems more natural for Williamson to hold that in the case where you went by the Mountains, your previous knowledge of having gone by the Mountains (knowledge you possessed during your journey) is lost upon your entering Shangri-La. If so, then even on a Williamsonian view, your mental state in the two cases can be the same. By the same token, if remembering that P entails knowing that P, and if your knowledge of having gone by the Mountains is lost upon entering Shangri-La, then even McDowell should hold that your mental state is the same whether you went by the Mountains or by the Sea, for in neither case do you remember (and hence know) that you traveled by the Mountains. Nevertheless, I acknowledge that the claim that your mental state upon entering Shangri-La is the same regardless of the route you took to get there is a substantive assumption.
against diachronic principles  which recommends having credence  that you went by the Mountains when you in fact did go by the Mountains, and credence  otherwise, is insufficiently beliefguiding. It gives advice that you have no chance of being able to follow, except by guessing.8

credence  is not the issue Now one might think that the problem is that having gained credence  that you were in the mountains, it is impossible to lower your credence by Conditionalization. Here I argue that this is not the source of the problem; issues with credence  are a red herring. This section can be skipped without loss by readers who are impatient with discussion of the intricacies of the Bayesian formalism.

8 The case of Shangri-La bears some resemblance to classic cases involving de se or self-locating beliefs, and so one might think that what we say about de se cases will have some bearing on Shangri-La. Consider the Sleeping Beauty Problem (Elga ()). Beauty will be put to sleep on Sunday. A coin will be tossed on Monday. If it lands Heads, she will be awakened on Monday but not on Tuesday. If it lands Tails, she will be awakened on both Monday and Tuesday. But after her waking on Monday, she will be given a drug that makes her forget that waking. When Beauty first wakes up, what should be her credence in Heads? Halfers say 1/2, on the grounds that Beauty knows it's a fair coin and doesn't gain any new de dicto evidence upon waking up that should make her change her view about Heads. Thirders say 1/3, on the grounds that her present evidence does not discriminate between three cases she might be in now. It could be that it's Monday and the coin landed Heads, or it could be that it's Monday and the coin landed Tails, or it could be that it's Tuesday and the coin landed Tails. I will not aim for a complete analysis of Sleeping Beauty and of the de se. I just want to make two points. First, Shangri-La is unlike Sleeping Beauty in that in Sleeping Beauty, there is de se uncertainty that arguably does not reduce to de dicto uncertainty. Representing Beauty's doxastic state seems to require bringing in centered possible worlds, or something of the sort. But in Shangri-La, there are just two epistemic possibilities located in distinct possible worlds. And so we can represent your doxastic state without having to bring in centered possible worlds. This suggests that in Shangri-La, there is no irreducibly de se ignorance. Second, cases of de se ignorance like Sleeping Beauty arguably further bolster my case for moving to a fully synchronic framework. I take the right answer in Sleeping Beauty to be that she ought to have credence 1/3 that the coin landed Heads, and most philosophers seem to agree. This supports adopting a synchronic framework. First, the main argument for the 1/2 answer appeals to diachronic considerations—that before being put to sleep she assigned credence 1/2 to Heads, and she gained no new evidence upon waking, so she should continue assigning credence 1/2 to Heads. By contrast, the main argument for the 1/3 answer appeals to synchronic considerations—that her present total evidence is compatible with three cases (Monday+Heads, Monday+Tails, and Tuesday+Tails), and so she ought to assign credence 1/3 to each case. (Ross () objects to the principle of indifference implicitly being appealed to here on the grounds that in infinite cases it will yield violations of countable additivity, but I think there are independently good reasons to reject countable additivity.) Second, if we go for the 1/3 answer, it is notoriously difficult to do so in a diachronic framework. Theories of de se updating are famously problematic, and even if they get the right results (see e.g. Titelbaum ()), they are rather complex. By contrast, synchronic theories of de se epistemology are able to yield the 1/3 answer, and get other cases right, in simple, elegant, and straightforward ways. See e.g. Moss () for a synchronic approach to the de se. In sum, the intuitively correct 1/3 answer is both motivated by synchronic considerations and best handled in a synchronic framework, thus bolstering my case for Time-Slice Rationality.
The alleged problem with Conditionalization and credence 1 is that conditional probabilities are standardly defined as ratios of unconditional probabilities in the following way:

Ratio Analysis
P(A | B) =df P(A ∧ B)/P(B)

If P(A) = , then P(A ∧ B) = P(B), and so P(A ∧ B)/P(B) = P(B)/P(B) = . So, if a proposition is assigned credence , your conditional credence in that proposition, conditional on any other proposition, must likewise be . Hence you cannot conditionalize on some other proposition and thereby drop your credence in the first proposition from  to something less than . This is a common complaint about Conditionalization. Here is Williamson (, ): Once a proposition has been evidenced [read: gains probability ], its status is as good as evidence ever after; probability  is a lifetime’s commitment. On this model of updating, when a proposition becomes evidence it acquires an epistemically privileged feature which it cannot subsequently lose. How can that be? Surely any proposition learnt from experience can in principle be epistemically undermined by further experience.

Many Bayesians have responded to this worry by saying that your credence in a contingent proposition should never go all the way to 1. But since Conditionalization applies only when you become certain of an evidence proposition, some other update rule is needed. Jeffrey () devised a generalization of Conditionalization which applies even when you do not become certain of any evidence proposition. Jeffrey Conditionalization (also called Probability Kinematics) works as follows. Let {E1, . . . , En} be a partition, such that prior to the learning experience in question, you assigned positive credence to each member of the partition. Call this partition the input partition. Then, when an experience at t leads you to change your credences in the members of the partition from Pold(E1), . . . , Pold(En) to Pnew(E1), . . . , Pnew(En) (call this the input distribution), Jeffrey Conditionalization says:

Jeffrey Conditionalization
It is a requirement of rationality that, for all H,
Pnew(H) = Pnew(E1)Pold(H | E1) + . . . + Pnew(En)Pold(H | En)
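To fix ideas, here is a minimal sketch of Jeffrey Conditionalization in Python, on the simplifying assumption of a finite set of worlds with credences given by a dictionary (the representation is mine, not part of the text):

```python
def jeffrey_update(p_old, partition, input_distribution):
    """Jeffrey Conditionalization over a finite space of worlds.

    p_old: dict mapping worlds to old credences.
    partition: list of cells (sets of worlds), each with positive old credence.
    input_distribution: new credences for the cells, summing to 1.
    """
    p_new = {}
    for cell, q_new in zip(partition, input_distribution):
        q_old = sum(p_old[w] for w in cell)  # old credence in the cell
        for w in cell:
            # Within each cell, relative credences are preserved:
            # P_new(w) = P_new(cell) * P_old(w | cell)
            p_new[w] = q_new * (p_old[w] / q_old)
    return p_new

# Standard Conditionalization as the special case {E, not-E}, new credence 1 in E:
p_old = {'mountains': 0.5, 'sea': 0.5}
print(jeffrey_update(p_old, [{'mountains'}, {'sea'}], [1.0, 0.0]))
# -> {'mountains': 1.0, 'sea': 0.0}
```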

Standard Conditionalization falls out of Jeffrey Conditionalization as a special case in which the input partition is {E, ¬E} and your new credence in E is 1. So one might think that in the case of Shangri-La, what went wrong was that you came to have credence 1 in the contingent proposition that you were in the Mountains, and so you could not drop your credence from 1 to something less than 1 upon entering Shangri-La. Instead, while in the Mountains you ought to
against diachronic principles  have Jeffrey Conditionalized and increased your credence that you were in the Mountains to something very high but less than . But this appeal to Jeffrey Conditionalization as a solution to Shangri-La misses the point. For the problem for Conditionalization in the case of Shangri-La was just that there was nothing that you learned upon entering Shangri-La that was evidence against your having traveled by the Mountains. After all, we can imagine that you knew in advance precisely how things would seem to you upon entering that glorious realm. This problem—that you gained no new evidence that speaks against your having come by the Mountains—remains even if we move to Jeffrey Conditionalization. The problem would remain, for instance, if we assumed that, due to your uncertainty about the reliability of your vision, you only raised your credence that you were traveling by the Mountains to . while en route. What is it that you learned that is evidence that you didn’t come by the Mountains? I can think of no such proposition, regardless of whether while in the Mountains you upped your credence that you were traveling by the Mountains all the way to  or only to .. You might object: Why not say that the experience directly leads you to update with a new credence distribution over the members of the input distribution consisting of the proposition that you went by the Mountains and its negation? The problem with this suggestion is simply that it threatens to trivialize Jeffrey Conditionalization. For if there are no constraints on what can count as the relevant input partition, then provably any change in your credences can be modeled as a update which accords with Jeffrey Conditionalization.9 Suppose you want to model the change from an arbitrary credence function P to an arbitrary credence function P using Jeffrey Conditionalization. Just let the input partition be the maximally fine-grained partition {{w }, . . . {wn }} consisting of all singleton sets of possible worlds and update using the input distribution P ({w }), . . . P ({wn }).10 And there we have it—you moved from P to P by Jeffrey Conditionalization. Clearly, this is too easy. It threatens to completely trivialize Jeffrey Conditionalization. In order for Jeffrey Conditionalization to be a substantive constraint on how you ought to modify your credences, we need to supplement the mathematical framework devised by Jeffrey with intuitive claims about what you count as learning from a given experience. In the case of Shangri-La, what you intuitively learn upon entering Shangri-La is that you are in Shangri-La (or that you are having an experience as of Shangri-La). And importantly, you did not antecedently regard

9 This point is made by Weisberg (a).
10 Actually, this requires primitive conditional probabilities, in case some of the singleton sets of worlds were assigned credence 0 by Pold. Primitive conditional probabilities will be discussed shortly.
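The triviality worry just discussed can be made vivid computationally. A minimal sketch, reusing the jeffrey_update function from the earlier sketch (the particular numbers are arbitrary assumptions): any old and new credence functions whatsoever are related by a Jeffrey update on the partition of singletons.

```python
# Reusing jeffrey_update from the sketch above. Take any two credence
# functions whatsoever over three worlds (the numbers are arbitrary):
p_old = {'w1': 0.7, 'w2': 0.2, 'w3': 0.1}
p_target = {'w1': 0.1, 'w2': 0.3, 'w3': 0.6}

# Input partition: all singletons; input distribution: the target credences.
singletons = [{'w1'}, {'w2'}, {'w3'}]
input_distribution = [p_target[w] for w in ['w1', 'w2', 'w3']]

p_new = jeffrey_update(p_old, singletons, input_distribution)

# The "update" reproduces the arbitrary target, so without constraints on
# admissible input partitions the norm rules out no change whatsoever.
assert all(abs(p_new[w] - p_target[w]) < 1e-12 for w in p_target)
```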
As a closing remark, I want to add that I think the claim that Conditionalization will not allow you to lower your credence from 1 to less than 1 is simply false. Williamson was mistaken in claiming that probability 1 is a lifetime commitment if Conditionalization is right. The argument that Conditionalization runs into trouble with credence 1 relies on the Ratio Analysis of conditional probabilities. But as Hájek () argues, there are good reasons for rejecting the Ratio Analysis. If there are uncountable probability spaces, the Ratio Analysis leaves undefined many conditional probabilities that should clearly be defined. Suppose that you have an infinitely thin dart which will be thrown at a circle. It will certainly hit some point in the circle, and it is as likely to hit any given point as any other. But because there are uncountably many points in the circle (i.e. as many points as there are real numbers), each point has probability 0 of being hit.11 Now, what is the probability that the dart will hit point A, given that it hits either point A or point B? Clearly, the answer is 1/2. But if the Ratio Analysis is correct, that conditional probability is undefined, for P(A ∨ B) = 0 and dividing by 0 is undefined.

Hájek argues that we should take conditional probabilities as primitive. On this way of thinking, unconditional probabilities are defined in terms of conditional probabilities. In particular, unconditional probabilities are defined as probabilities conditional on tautologies, so that P(A) =df P(A | T), where T is some tautology. Taking conditional probabilities as primitive would allow conditional probabilities to be defined even when the conditioned proposition has probability 0. (Note that defenders of primitive conditional probabilities hold that P(A | B) = P(A ∧ B)/P(B) whenever P(B) > 0. They simply deny that this equality is definitional of conditional probability and that it holds even when P(B) = 0.)12

11 See Hájek () for further examples and for discussion of the possibility of allowing the probability that the dart will hit some given point to be infinitesimal rather than . 12 There are a number of alternative ways to get roughly the same results as employing primitive conditional probabilities. The main two are (i) using infinitesimals (see especially McGee ()) and (ii) lexicographic probabilities (see Halpern ()).
against diachronic principles  Taking conditional probabilities as primitive, it is possible to lower credences from  to something less than  by means of Conditionalization, provided that you are conditionalizing on a proposition previously assigned credence . Let P(A) = , P(B) = , and P(A | B) = .. By conditionalizing on B, your credence in A drops from  to .. Problem solved. One might think that if the only way to drop your credence from  to something less than  is to conditionalize on a credence  proposition, that’s not good enough. For how often do you really come to have as evidence a proposition which previously was assigned credence ? Plausibly, quite often. Gaining as evidence propositions which were assigned credence  beforehand may be the rule, not the exception. Consider all the possible perceptual experiences you could have had at any particular instant. There seem to be uncountably many of them, differing in ways both big and small (e.g. the exact position of the mug, the precise angle at which you are viewing it, etc). Since there are uncountably many such perceptual experiences, each of these uncountably many possible experiences presumably had to be assigned credence . So when you have such an experience, you gain as evidence a proposition which used to be assigned credence .13 So ordinary life is probably more like Hájek’s case of the infinitely thin dart than one might have thought.14 Primitive conditional probabilities are independently well-motivated and free Conditionalization from its problem with credence . But the problem with Shangri-La remains. Credence  was a sideshow. The real problem is that upon entering Shangri-La, you don’t learn anything (whether updating goes by Conditionalization or Jeffrey Conditionalization) which is evidence against your having traveled by the Mountains. So, no matter whether Conditionalization or Jeffrey Conditionalization is correct, these diachronic rules will not yield the intuitively correct result that your credence that you went by the Mountains ought to be / upon entering Shangri-La.

13 Note that this point is independent of whether you think that your evidence consists of propositions about your experiences, the contents of those experiences, or of propositions that you know. For instance, assuming there are uncountably many different propositions that you could come to know when you turn around and look out the window (e.g. different propositions about the shape and position of all the leaves, etc.), then even on a Williamsonian E=K view, you will almost always be gaining as evidence some proposition to which you previously assigned credence 0. The general point is simply that if the possible bits of evidence that you might gain are cut up so as to be extremely fine-grained, as I think they should be, then it is difficult to see why the strongest evidence you gain from any experience would antecedently have been assigned positive credence.
14 Of course, this claim relies on assumptions about human psychology that might be questioned. Perhaps there are only countably many propositions that the human mind can entertain. I will not discuss this issue here.
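Here is a toy sketch of the closing example above, with conditional probabilities taken as primitive; the table-based representation is an illustrative assumption of mine, not a piece of the formalism Hájek himself specifies.

```python
# Unconditional credences: P(A) = 1, P(B) = 0.
unconditional = {'A': 1.0, 'B': 0.0}

# Primitive conditional credences given B, stipulated rather than computed
# as ratios (which would be undefined, since P(B) = 0):
given_B = {'A': 0.5, 'B': 1.0}

# Conditionalizing on B: the new unconditional credences are simply the old
# credences conditional on B, so credence in A drops from 1 to 0.5.
after_learning_B = dict(given_B)

assert unconditional['A'] == 1.0 and after_learning_B['A'] == 0.5
```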

conditionalization and forgetting

The problem facing Conditionalization is not unique to sci-fi cases like Two Roads to Shangri-La. Conditionalization also yields implausible results in cases of forgetting, since Conditionalization only allows you to change your credences when you learn something new, and forgetting involves no such new learning, at least on an ordinary understanding of learning. Suppose you are now certain that you had cereal for breakfast. At some point in the future, you will no longer remember having had cereal today, but since you will not have learned anything new that bears on what you had for breakfast today, Conditionalization says that you ought to retain your certainty that you had cereal. But this is crazy! Surely once you no longer remember having eaten cereal, you ought to drop your confidence that you had cereal. Williamson (, ) accuses Conditionalization of deeming forgetting to be irrational:

Bayesians have forgotten forgetting. I toss a coin, see it land heads, put it back in my pocket and fall asleep; once I wake up I have forgotten how it landed . . . No sequence of Bayesian or Jeffrey conditionalizations produced this change in [my credences]. Yet I have not been irrational. I did my best to memorize the result of the toss, and even tried to write it down, but I could not find a pen, and the drowsiness was overwhelming. Forgetting is not irrational; it is just unfortunate.

The issue is that he didn’t learn anything new that constitutes evidence against his belief about how the coin landed. Forgetting is a loss of information, not a gain of new information that bears against old beliefs. The defender of Conditionalization might argue that we are doing ideal epistemology here, and that ideally rational agents would have perfect memories. This position is defended by Broome (, –) and David Chalmers (p.c.), for instance. I am not convinced. I grant that having an imperfect memory is a way of being epistemically suboptimal, but it is important to distinguish irrationality from epistemic suboptimality. Just as forgetting is epistemically suboptimal, so is failing to be a clairvoyant. I think that perfect memory, just like clairvoyance, is an epistemically beneficial power to have, but lacking it is not constitutive of irrationality. But perhaps Conditionalization could be defended on the grounds that assuming perfect memory is a useful idealization when we are modeling rational doxastic states. Two points here: First, the case of Shangri-La isn’t really a case of forgetting. After all, upon entering Shangri-La, you retained your vivid memories of having traveled by the Mountains. The reason you ought to drop your credence to / is that you are aware of the truth of the subjunctive conditional that had you gone by the Sea, you would also have had vivid memory impressions as of

against diachronic principles  having traveled by the Mountains. So even if the defender of Conditionalization is willing to bite the bullet on forgetting and say that ideally rational agents have perfect memories, this would not solve the problem with Shangri-La. For ideal rationality does not require that you assign credence  to the proposition that your brain was manipulated (even if it requires that you in fact suffer no such brain manipulation), and this is all that is required for the Shangri-La case to be a problem for Conditionalization. Second, idealizations are useful when they allow for a simpler, cleaner, and more tractable theory. There is no sense in idealizing just for idealization’s sake. And, as I will show in Chapter , we can replace Conditionalization with a synchronic norm which is just as simple and tractable as Conditionalization but gets the right results in Shangri-La and ordinary cases of forgetting. Alternatively, the defender of Conditionalization might attempt to respond by saying that Conditionalization is not incompatible with forgetting, but rather is simply silent about it.15 That is, Conditionalization says only that when you gain evidence E, your new credences ought to equal your old credences, conditional on E; it says nothing about what to do when you either gain no new evidence or when you forget something or otherwise lose evidence. But if this is right, then Conditionalization is not the whole story when it comes to rational belief change. We want a theory that gives the right result about Shangri-La and cases of forgetting, not one that is silent about them.16 And again, as I show in Chapter , we can come up with a theory that deals all at once with gaining and losing 15

15 Brad Armendt (p.c.) has pressed me on this point.
16 One might try to avoid the problem of forgetting while retaining a diachronic framework by saying that an agent should at all times have the same prior probability function (interpreted as representing her standards for evaluating evidence), and that at any time, her credences should be the result of taking that prior probability function and conditionalizing it on her present total evidence. This would be structurally similar to the synchronic norm I propose in Chapter , except that while I hold that all agents at all times should employ the same prior probability function, the diachronic norm under consideration would hold that all time-slices of a single agent should employ the same priors, but time-slices of distinct agents can employ different priors. But even if this move can deal with certain sorts of evidence loss, it will still be in tension with Mentalist Internalism, for which priors you employed at an earlier time will not supervene on your present mental states. In a case where you have forgotten what priors you used to have (i.e. what your previous epistemic standards were), this diachronic norm will say that nonetheless you ought to employ those past priors now. This diachronic norm will also face the same worries about personal identity over time that I raise for orthodox Conditionalization below. See also Titelbaum () for a related and thorough proposal for modeling rational degrees of belief in a way that allows for forgetting and other evidence loss using diachronic requirements of rationality. I will not attempt a detailed evaluation of Titelbaum's theory here, but I would note that it is considerably more complex than the synchronic framework I develop in Chapter . Titelbaum's Certainty-Loss Framework may be the best way of attempting to deal with evidence loss in a dynamic framework, but we can achieve the right results more simply within a purely synchronic framework, and thus we should prefer my synchronic approach on grounds of simplicity.
And again, as I show in Chapter , we can come up with a theory that deals all at once with gaining and losing evidence. This new principle simply says what doxastic state you ought to be in, given your present total evidence; it does not care whether your present total evidence was arrived at through learning, forgetting, or some combination of the two. This principle is a synchronic one and is thus right in line with Time-Slice Rationality. I conclude, therefore, that Conditionalization should be rejected.

. Diachronic Principles for Preferences

For philosophers sympathetic to diachronic principles for beliefs or credences, are there analogous diachronic principles for desires or preferences that one might want to endorse? It is a curious fact that amid all the discussion of different update rules for doxastic states—Conditionalization, Jeffrey Conditionalization, Imaging,17 the AGM model of belief revision,18 etc.—there has been little if any discussion of update rules or other diachronic principles for preferences. First, insofar as it seems irrational to have beliefs that fluctuate wildly from moment to moment, the same seems to hold of preferences that shift around erratically. Second, one might expect the case of beliefs and the case of preferences to have some deep parallelisms. This is especially the case if, as many philosophers think, there are conceptual connections between doxastic states (beliefs or credences) and conative states (desires, preferences, or utilities).19 For instance, many hold, with Davidson (), Lewis (), and Stalnaker (), that what it is to have a given set of beliefs and preferences is in large part for those beliefs and preferences to best rationalize and explain your behavior. On this way of thinking we cannot identify your beliefs in isolation from your preferences or vice versa; instead, attribution of beliefs and attribution of preferences are inextricably linked parts of a broader project of making sense of agents and their behavior. Now, it does not logically follow from this theory in the philosophy of mind that norms for beliefs should have structural analogs in the case of preferences, but I think it nevertheless does motivate exploring the possibility of diachronic norms for preferences.20

17 Gärdenfors ().
18 Alchourrón, Gärdenfors, and Makinson ().
19 Thanks to Rachael Briggs for suggesting this.
20 Perhaps the most precise theories on which there are conceptual connections between doxastic and conative states are Representation Theorem-based approaches to credences and utilities. Representation Theorems state that if your preferences—over both gambles and maximally specific possibilities—satisfy certain axioms, then you can be represented as an agent who maximizes expected utility relative to a certain credence function and utility function. In Savage's () theorem, satisfying his axioms suffices for you to be representable as maximizing expected utility relative to a unique credence function and a utility function which is unique up to positive linear transformations, that is, multiplication by a positive number and addition of a (possibly negative) constant. In other Representation Theorems, such as that of Jeffrey (), the uniqueness condition—the degree to which satisfaction of the axioms narrows down the range of credence and utility functions relative to which you can be seen as maximizing expected utility—is more complicated and need not concern us here. Some philosophers go further and say (if you satisfy the relevant axioms) that not only can you be represented as though you had such-and-such credences and utilities, but that those are in fact your credences and utilities. If so, then there are tight conceptual connections between credences and utilities, which give rise to—but do not entail—the suggestive thought that there might be structural similarities between norms for credences and norms for utilities. See Meacham and Weisberg () for arguments against using Representation Theorems to ground credences and utilities.
against diachronic principles  Are there any principles, then, that mandate some sort of stability over time in your preferences? (I will henceforth set desires to one side and talk only of preferences.) It is uncontroversial that it is sometimes rationally permissible, and even rationally obligatory, to change your preferences. At the end of the day, you want to go to sleep. By the next morning, you no longer have this preference; by then you are well rested. But this sort of change is a change in your de se preferences which does not entail a change in your de dicto preferences. Your de dicto attitudes are attitudes concerning which world is actual, whereas your de se attitudes are attitudes not just about which world is actual, but also about who, where, and when you are in such worlds. In the case of belief, the distinction matters because you might be completely certain about which world is the actual world while remaining uncertain about who you are within that world (Lewis ()). We can model the contents of de dicto with sets of possible worlds, whereas de se attitudes have as their contents sets of centered worlds (i.e. sets of world-individual pairs). In the sleeping case, your de dicto preferences remain unchanged, while only your de se attitudes change. At all times, both in the morning and in the previous evening, we may assume, you prefer that the proposition that you go to sleep at pm be true, and you prefer that the proposition that you not go to sleep at am be true. The fact that in the evening, but not in the morning, you wanted to go to sleep, is an artefact of the fact that your position in the world changed. It can also be rational to change your de dicto preferences, as in cases where your evidence changes. A shift from preferring to bet on the Democrats in  to preferring to bet on the Republicans is rational in a case where you gain evidence that favors the Republicans. This observation suggests a diachronic principle for preferences, one that mandates a certain stability in your preferences while also allowing, and indeed requiring, the sort of preference change involved in the case of betting on the election. In that case, you have a more fundamental preference which is stable, and another, less fundamental preference which changes in response to information. The less fundamental preference is over possible means to the end specified in your more fundamental preference. Throughout the whole election season, to which satisfaction of the axioms fixes narrows down the range of credence and utility functions, relative to which you can be seen as maximizing expected utility—is more complicated and need not concern us here. Some philosophers go further and say (if you satisfy the relevant axioms) that not only can you be represented as though you had such-and-such credences and utilities, but that those are in fact your credences and utilities. If so, then there are tight conceptual connections between credences and utilities, which give rise to—but do not entail—the suggestive thought that there might be structural similarities between norms for credences and norms for utilities. See Meacham and Weisberg () for arguments against using Representation Theorems to ground credences and utilities.
you prefer that you have more money rather than less—your more fundamental preference for more money remains constant. But you also have preferences—less fundamental preferences—over possible means to the end of wealth, in this case preferences over bets, and it is these preferences which change as you gain information bearing on the result of the election.

We might then take this insight about a particular case and turn it into a precise diachronic principle for preferences by saying that while your preferences over bets (or, more generally, over non-maximally specific possibilities) should change in response to new information, your preferences over maximally specific possibilities—your ultimate preferences, let us say—should never change.21 That is, there is a bedrock layer to your preferences which, if you are rational, will never change. It is only the superficial layers—preferences over means to your ultimate ends—which should change, and these should change only in response to (rational) changes in your credences.22

This insight can be formalized in terms of a diachronic constraint on your utility function. Let us say that an ultimate utility function is a utility function restricted to maximally specific possibilities, thereby representing your ultimate preferences. And your overall utility function (for both maximally and non-maximally specific possibilities) is derived from your credences and ultimate utility function in the familiar expectational way.23 As a first pass, then, we can formalize our diachronic principle by saying that your ultimate utility function should remain constant over time, with your overall utility function changing only in response to changes in your credences.

But this first pass won't quite do, since there is no such thing as the utility function you have (and likewise there is no such thing as the ultimate utility function you have). The reason is that your preferences, even if they satisfy the relevant decision-theoretic axioms, do not single out a unique utility function for you (and therefore they do not single out a unique ultimate utility function, since these are just utility functions restricted to maximally specific possibilities). Rather, they fix your utility function at most up to positive affine transformations; that is, up to a choice of zero point and scale.24

21 Note that in the decision theory of Jeffrey () there are no atoms—no one set of maximally specific possibilities—so this move may be unworkable in that framework.
22 Jeffrey () considers this sort of preference change.
23 In particular, if we think of propositions as sets of possible worlds, your utility function should be such that U(A) = Σi u({wi}) × P({wi} | A) (this being the formula for evidential decision theorists, with related formulae being used in other decision theories).
24 As mentioned in footnote , the Representation Theorem of Jeffrey () has a weaker, and more difficult-to-state, uniqueness condition. Still, it remains true in his system that if your preferences can be represented by a utility function U, then they can also be represented by any positive affine transformation of U (and perhaps by other utility functions besides). Jeffrey does propose setting the zero point as the utility of the tautology but recognizes that this would be an arbitrary convention. He does not, for instance, think that there are reasons for thinking that all agents must value the tautology equally. As such, he does not suggest that this proposed convention provides part of a principled solution for fixing on unique utility functions (even if we had some other way of settling on a unique scale).
against diachronic principles  be represented by utility function U, they are also represented by utility function U  = aU + b (a > ).25 This fact will play an important role in the next chapter. In the present context, the upshot is that in formalizing our diachronic principle for preferences, we cannot say that your ultimate utility function must remain constant over time, for there is no such thing as the ulitimate utility function that you have. But we can say that your ultimate preferences should always be representable by the same family of ultimate utility functions, each of which is a positive affine transformation of any other one. This gives us: Utility Conditionalization It is a requirement of rationality that your utilimate preferences—preferences over maximally specific possibilities—do not change over time. Formally, if U (and any U  = aU + b (a > )) is an ultimate utility function representing your ultimate preferences at t then U (and any U  = aU + b (a > )) is an ultimate utility function representing your ultimate preferences at t .

The name “Utility Conditionalization” obviously suggests that the rational belief changes that drive changes in preferences conform to Conditionalization. However, strictly speaking, the idea that your ultimate preferences must stay constant while your non-ultimate preferences change only as your beliefs change could be combined with a variety of views about rational belief changes. Note the structural analogy between Conditionalization and Utility Conditionalization. Each principle says how some of your attitudes should change by saying what other attitudes of yours must stay the same. In the case of Conditionalization, the principle says that your conditional credences on the evidence must stay the same when you learn something, and this entails how your other credences should change. In the case of Utility Conditionalization, the principle says that your ultimate preferences must stay the same, and this (together with a principle about how your credences should change) entails how your non-ultimate preferences should change. preferences can be represented by a utility function U, then they can also be represented by any positive affine transformation of U (and perhaps by other utility functions besides). Jeffrey does propose setting the  point as the utility of the tautology but recognizes that this would be an arbitrary convention. He does not, for instance, think that there are reasons for thinking that all agents must value the tautology equally. As such, he does not suggest that this proposed convention provides part of a principled solution for fixing on unique utility functions (even if we had some other way for settling on a unique scale). 25 What is special about utility functions which are positive linear transformations of each other is that they agree about ratios of utility differences. Suppose that you have preferences representable by utility function U, and suppose that prefer {w } to {w } to {w }. Then, where U  is a positive linear transformation of U, [U({w }) − U({w })]/[U({w }) − U({w })] = [U  ({w }) − U  ({w })]/[U  ({w }) − U  ({w })].
It seems to me that those sympathetic to Conditionalization should likewise be sympathetic to Utility Conditionalization.26 Moreover, it is difficult to think of any alternative to Utility Conditionalization if we are attempting to formulate a diachronic principle governing preferences, other than the vague injunction not to change your preferences too dramatically too often.27

Now, at the end of the day, I think that Utility Conditionalization is untenable and should be rejected, but let me begin by playing devil's advocate and defending it. You might think that Utility Conditionalization is in tension with the obvious fact that your tastes change over time. You might start off in your youth preferring rock to classical but then switch once you reach middle age. This seems like a paradigmatically rational change in your preferences. But plausibly, this change in your preferences is not forbidden by Utility Conditionalization, for it is not a change in ultimate preference. Presumably, your youthful preference for listening to rock rather than classical is grounded in the fact that you enjoy listening to rock more than you enjoy listening to classical. You prefer listening to rock now because you get more pleasure from rock than from classical. But you likely prefer that, were you to get more pleasure from classical than from rock, you listen to classical. If this is right, then once we specify the possibilities in question in a sufficiently fine-grained way, your preferences over these specific possibilities do not change when you reach middle age and come to prefer listening to classical over rock. At all times, you prefer listening to rock at times in your life when you enjoy it over listening to classical at times when you don't enjoy it, and you prefer listening to classical at times when you enjoy it over listening to rock at times when you don't enjoy it. Perhaps all of our rational taste-based preferences, at least, are like this. These taste-based preferences change when our tastes change, but they are grounded in ultimate (non-taste-based) preferences for enjoyment over misery which remain constant throughout.

What about non-taste-based preferences which are grounded in deeply held values such as political ideals? I will discuss this case later in the present chapter and especially in Chapter .

26 Lewis (, ) endorses Utility Conditionalization, on the assumption that maximally specific possibilities are maximally specific in all descriptive and normative respects. This would hold if the normative supervenes on the descriptive, or if maximally specific possibilities are understood as pairs consisting of a maximally specific descriptive possibility and a normative hypothesis.
27 One possible alternative diachronic principle would say that if at t1 you prefer A to B, then at t2 you prefer A to B unless you have reconsidered your preference in the meantime. In effect, it is rationally permissible to change your preferences as a result of thinking about them, but it is irrational to have preferences just drop out of your mental state for no reason. This principle would parallel Broome's () proposed diachronic principle for intentions, which I consider in Chapter . But why require that preferences change only in response to some kind of explicit considering or reasoning? Many, if not most, of our preferences are not initially formed as a result of reasoning, and often our preferences—even deep, value-based preferences about what ultimate goals to pursue—change gradually as we grow older and gain experience, without these gradual changes being the result of reasoning. The defender of this sort of principle for preferences owes us an explanation of why reasoning or reconsidering should play this central role in fixing rational preferences.
against diachronic principles  and especially in Chapter . But for now I would suggest that they are unlike taste-based preferences in that it is far less obvious that in a case where you change your value-based preferences over time, we would be happy to regard you as being rational throughout. Indeed, I think that were I to change my value-based preferences in virtue of, say, changing my views about the value of equality, I would regard my later self as irrational or perhaps that my present attitudes are irrational (though of course whether I would be correct in this assessment is another matter). Of course, this judgment may be grounded not in an endorsement of some diachronic principle for preferences (which prohibits changes in value-based preferences, if those value-based preferences are ultimate preferences), but rather in an endorsement of the view that there are few, or even just one, rationally permissible sets of value-based preferences. If the latter, then in a case where you change your value-based preferences, any irrationality on your part could be grounded not in the fact that your preferences changed (resulting in a violation of some diachronic principle for preferences such as Utility Conditionalization), but instead in the fact that at some point in time, you had value-based preferences that were intrinsically irrational. This would make Utility Conditionalization superfluous. The issue of the rational permissibility of different sets of ultimate preferences will be discussed further in Chapter . But for now, I just want to establish that a defender of Utility Conditionalization should say (i) that changes in your taste-based preferences may be rationally permissible, but this is no problem for Utility Conditionalization, since taste-based preferences may be grounded in unchanging preferences for e.g. more pleasure rather than less, and (ii) that value-based preferences may not be grounded in more fundamental preferences, but it is not clear that you can change your value-based preferences and still count as rational throughout the process. Now let me turn from defending Utility Conditionalization to attacking it. It will already be obvious that it faces the same problems with internalism about rationality28 and personal identity puzzle cases that Conditionalization ran into, so I will not reiterate the details here. 28 Note that internalism is, if anything, even more plausible for the case of preferences than for the case of beliefs. This is because the main motivations for adopting externalism about epistemic rationality have no analogs in the case of preferences. First, there is the purported link between justification and truth. If having justified beliefs is to be good in any way, it must be that justified beliefs are in some sense more likely to be true, or so the thought goes. This purported link between justification and truth is used to motivate some forms of externalism, such as reliabilism. But it is questionable whether there is any parallel motivation for externalism about rational preferences. It might be that preferences and desires in some sense aim at what is objectively good (if there is such a thing as objective goodness). But if so, it does not seem that this would motivate externalism, for it may be an a priori matter which things are good, whereas it is not a priori which propositions are true. So a link between preferences and the good, if there is any such link, can be vindicated without resorting to externalism. 
Second, externalism about epistemic rationality is motivated by the skeptical challenge. The thought is that if what you are justified in believing depended only on factors accessible to you, you
In addition to those two problems, however, there is a potential further problem for Utility Conditionalization. Utility Conditionalization is incompatible with the plausible claim that it is rationally permissible to be time-biased (or, more exactly, it rules out time-bias unless that time-bias takes the form of exponential discounting, to be discussed shortly). Most of us are biased toward the future. We prefer, ceteris paribus, that our pains be in the past and our pleasures in the future.29 We often prefer that our pains be in the past, even if this means a greater total amount of pain over the course of our lifetimes, and, similarly, we prefer that our pleasures be in the future, even if this means a lesser total amount of pleasure over our lifetimes.

But if you are biased toward the future, your preferences over maximally specific possibilities will sometimes shift, in violation of Utility Conditionalization. This is because your preferences over maximally specific possibilities depend on what time it is, and so as time passes your ultimate preferences will change. Consider the following case from Dougherty (), whose exploitability argument against bias toward the future will be discussed in Chapter . There are two courses of painful surgery you might have to undergo:

The Early Course
You will have  hours of painful surgery on Tuesday and 1 hour of painful surgery on Thursday.

The Late Course
You will have no surgery on Tuesday and  hours of painful surgery on Thursday.

On Monday, you prefer the Late Course to the Early Course, for it involves the lesser amount of future pain. But on Wednesday, you prefer the Early Course to the Late Course, for relative to Wednesday, the Early Course involves less future pain than the Late Course.30

29 Interestingly, time-bias may not extend to things like embarrassment. While I prefer that any sensations of embarrassment be in my past rather than my future, I do not care whether embarrassing events themselves are in the past or the future. I owe this observation to Caspar Hare.
30 One might object that this implausibly presupposes that you do not care at all about how much pain you have in your past, so that you might on Wednesday still prefer the Late Course to the Early Course. Nonetheless, given that you care more about future pain than past pain, the exact numbers in the Early Course and the Late Course could be adjusted so that your preferences between them switch between Monday and Wednesday.
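The reversal can be exhibited numerically. Since the exact hours are elided in the text above, the figures in this sketch (10 hours plus 1 hour for the Early Course, 4 hours for the Late Course) are illustrative assumptions; the evaluator is fully future-biased, counting only pain that lies ahead.

```python
early = {'Tuesday': 10, 'Thursday': 1}  # hours of surgery (assumed figures)
late = {'Tuesday': 0, 'Thursday': 4}

DAYS = ['Monday', 'Tuesday', 'Wednesday', 'Thursday']

def future_pain(course, today):
    # A fully future-biased evaluator counts only pain after the present day.
    return sum(h for day, h in course.items()
               if DAYS.index(day) > DAYS.index(today))

# On Monday the Late Course involves less future pain (4 vs. 11 hours)...
assert future_pain(late, 'Monday') < future_pain(early, 'Monday')
# ...but on Wednesday the Early Course does (1 vs. 4 hours): a preference shift.
assert future_pain(early, 'Wednesday') < future_pain(late, 'Wednesday')
```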
against diachronic principles  In addition to being biased toward the future, many of us are biased toward the near, to use Parfit’s () phrase. Not only do we care more about the future than the past, but most of us also care more about the nearer future than the farther future. We prefer that pains be farther down the road, even if it means a somewhat greater total amount of pain, and likewise we prefer that pleasures come sooner, even if it means a somewhat lesser total amount of pleasure. Whether your bias toward the near will result in shifts in your ultimate preferences depends on the structure of your bias toward the near. (But doesn’t bias toward the near entail bias toward the future, which we have already seen yields shifts in ultimate preferences? No. Bizarrely, we will shortly see that if you want to be biased toward the near while avoiding ultimate preference shifts, you actually have to always care more about earlier events than about later ones, even if those earlier ones are in the past and the later ones in the future.) The upshot of the remainder of this section is that Utility Conditionalization is compatible with the claim that time-bias is rationally permissible just in case your time-bias takes the form of exponential discounting (to be explained shortly), but exponential discounting of well-being is problematic. So, if you think that it can be rationally permissible to be time-biased, you should reject Utility Conditionalization.31 Let us turn, then, to discounting. Discounting is a matter of the importance you assign to different times. Let me flag at the outset that I am concerned here only with discounting well-being. We can also discount goods, so that future goods are treated as less valuable, for purposes of policy-making, than present goods. Economists advocate discounting goods as a way of taking into account things like anticipated economic growth, inflation, and uncertainty about the future. But my concern is with discounting well-being, that is, with the question of whether a given amount of well-being differs in value depending on when it is had. This question is concerned with determining what is sometimes called the “rate of pure time preference.”

31 Utility Conditionalization does, rightly in my view, rule out as irrational a sort of time-bias known as hyperbolic discounting. If you discount your future well-being, say, then the degree to which you currently value some amount of well-being W to be had t time units in the future will be the degree to which you value having well-being W right now times a discount factor of 1/(1 + kt), where k is a parameter measuring the intensity of your discounting. Hyperbolic discounting is one sort of time-bias which results in changes in your ultimate preferences, and it has been proposed by Ainslie () as a way of modeling addiction. This is because hyperbolic discounting curves can account for the phenomenon whereby one prefers not smoking over smoking when the time to decide is sometime in the future, but this preference switches when the time to decide grows near.
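Here is a minimal numerical sketch of the reversal the footnote describes, using the discount factor 1/(1 + kt); the reward sizes, delays, and k = 1 are illustrative assumptions only.

```python
def hyperbolic_value(reward, delay, k=1.0):
    return reward / (1.0 + k * delay)

# A smaller-sooner reward (smoking) vs. a larger-later reward (health),
# the larger one arriving two time units after the decision point.
small, large = 5.0, 10.0
large_delay = 2.0

# Viewed ten time units before the decision point, the larger reward wins...
assert hyperbolic_value(large, 10.0 + large_delay) > hyperbolic_value(small, 10.0)
# ...but at the decision point itself the smaller reward wins: a reversal
# that a constant (exponential) discount rate cannot produce.
assert hyperbolic_value(small, 0.0) > hyperbolic_value(large, large_delay)
```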
Suppose you think that a sacrifice of k units of current well-being would exactly compensate for an increase of 1 unit of well-being at some time t. Then, your discount factor for time t, denoted R(t), is equal to k. And, as Greaves (forthcoming) explains, your discount rate r is related to your discount factors by the equation:

r = −(1/R)(dR/dt)
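As a sanity check on this relation, here is a short finite-difference sketch verifying that the exponential schedule R(t) = exp(−rt), discussed below, has a constant discount rate; the particular rate r = 0.05 is an arbitrary assumption.

```python
import math

r = 0.05  # an arbitrary illustrative rate

def R(t):
    return math.exp(-r * t)  # exponential schedule of discount factors

def implied_rate(t, h=1e-6):
    dR_dt = (R(t + h) - R(t - h)) / (2 * h)  # central-difference derivative
    return -dR_dt / R(t)                     # r = -(1/R)(dR/dt)

# The implied rate is (up to numerical error) the same at every time:
for t in [0.0, 1.0, 10.0, 100.0]:
    assert abs(implied_rate(t) - r) < 1e-6
```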

What must your discounting be like in order for you to avoid shifts in your ultimate preferences? Distinguish the time being evaluated from the time at which the evaluation is made. Again following Greaves, let Rij denote the discount factor used by an evaluator at time ti to evaluate amounts of well-being had at time tj. Avoiding shifts in your ultimate preferences requires that your schedule of discount factors is time-neutral, in the sense that for all i, i′, j, and j′:

Rij/Rij′ = Ri′j/Ri′j′
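A short sketch can confirm that exponential discounters satisfy this condition. The assumption that an evaluator at ti applies the factor Rij = exp(−r(tj − ti)) to well-being at tj, with r = 0.1, is an illustration anticipating the exponential schedule discussed just below.

```python
import math
from itertools import product

r = 0.1  # an arbitrary illustrative rate

def R(t_i, t_j):
    # Discount factor applied by an evaluator at t_i to well-being at t_j.
    return math.exp(-r * (t_j - t_i))

times = [0.0, 1.0, 5.0, 12.0]
for t_i, t_i2, t_j, t_j2 in product(times, repeat=4):
    lhs = R(t_i, t_j) / R(t_i, t_j2)    # ratio from one evaluation time
    rhs = R(t_i2, t_j) / R(t_i2, t_j2)  # the same ratio from another
    assert abs(lhs - rhs) < 1e-9        # time-neutrality holds
```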

In an important result, Strotz (–) shows that your schedule of discount factors is time-neutral, thereby ensuring no shifts in ultimate preferences, if you discount exponentially. (Note: if, but not quite only if. There will be various gerrymandered ways of discounting that yield a time-neutral schedule of discount factors, but exponential discounting is the only non-gerrymandered type of discounting that does so.) Being an exponential discounter is a matter of your discount factors being determined exponentially from your discount rate, such that R(t) = exp(−rt). And discounting exponentially is equivalent to having a constant discount rate. As Parfit (, ) explains, a person discounts exponentially if she discounts "at a constant rate of n per cent per month. There will always be the same proportionate difference in how much this person cares about two future events." If you discount exponentially, then the proportionate difference between how much you care about one time and how much you care about some other time depends only on how far apart those points in time are. So, for instance, if you care twice as much about your well-being tomorrow as your well-being the day after tomorrow, then if you discount exponentially, you must also care twice as much about your well-being ten years from now as your well-being ten years and one day from now.

Now, whether you discount exponentially, and what your discount rate is at a given time, is a synchronic matter. The structure and rate of your discounting is a matter of your preferences at a time. Therefore, strictly speaking, it is compatible with Time-Slice Rationality to impose on you the synchronic requirement that at each time, you discount exponentially. And, strictly speaking, it is not sufficient for satisfying Utility Conditionalization that at each time you discount exponentially. What Strotz has shown is that your ultimate preferences will remain constant if you always discount exponentially and you don't change your discount rate.


If at t1 you discount your well-being exponentially with a discount rate of n per cent per month, and at t2 you discount your well-being exponentially with a discount rate of m per cent per month, then your ultimate preferences will still have shifted between t1 and t2 unless m = n. (Note also that being time-neutral is an instance of exponential discounting, with a discount rate of 0 per cent per month. So it is compatible with a requirement that you discount exponentially that you are time-neutral with respect to others' well-being but exponentially time-biased with respect to your own well-being.32)

32 See Hare () for discussion of whether we are required, permitted, or forbidden from being time-biased with respect to the well-being of others.

Exponential discounting is the only sort of time-bias compatible with Utility Conditionalization, but it nevertheless has some counterintuitive consequences. I mention just two here. First, exponential discounting with some positive discount rate requires not only that you care more about the near future than about the far future, but also that you care more about the recent past than about the near future, and also that you care more about the distant past than about the recent past. Recall that exponential discounting amounts to being such that the proportionate difference between how much you care about one time vs. another depends only on how far apart those points in time are, and hence it does not depend on whether those points in time are in the future or the past, or on how far in the past or the future those points are. Of course, we could say that you must only discount the future and that you must do so exponentially, but this would result in shifts in your ultimate preferences and hence in violations of Utility Conditionalization. One might think that this sort of shift of ultimate preferences is innocuous and that Utility Conditionalization should be modified so as to permit such shifts, since preferences regarding past times aren't actionable; we cannot change the past. But in Chapter  we will see an argument due to Dougherty () that this thought is mistaken; there are cases where being biased toward the future makes a difference to how you will act, and in a way that makes you exploitable over time. Thus, the sort of "time-inconsistency" generated by exponentially discounting only the future and not the past is not as innocuous as it initially may seem.

Second, even fairly moderate discount rates lead to extreme differences in how much you care about two particular times, when those two times are far enough apart. Suppose you have an annual discount rate of just 1 per cent. Applied to money, this means that you are indifferent between $100 at one time and $101 a year later. Applied to lives, this discount rate would mean that you regard the 192 Athenian deaths in the Battle of Marathon as far, far worse than all the deaths in World War II. Each Athenian life in 490 B.C. would be worth as much as 64 billion lives in 2010 (since 1.01^2500 ≈ 6.4 × 10^10).33
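The compounding behind these figures can be verified directly (a sketch assuming roughly 2,500 years between 490 B.C. and the present, per the reconstruction above):

```python
# A 1 per cent annual discount rate compounds over 2,500 years to an enormous factor:
# one unit of well-being 2,500 years ago is weighted as heavily as 1.01**2500 units today.
factor = 1.01 ** 2500
print(f"{factor:.3g}")   # ~6.4e+10
print(f"about {factor / 1e9:.0f} billion present lives per ancient life")
```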


For the same reason, discounting exponentially means caring much, much more about the far future than about the very far future. Of course, these extreme effects over time may be less worrying if you only discount your own well-being, since our lives are of limited duration, although they would reappear if we imagined creatures much longer-lived than ourselves.

A final worry is that exponential discounting may be undermotivated. Aside from considerations of exploitability over time (to be considered in depth in Chapters  and ), it is unclear what might motivate a principle requiring that you discount exponentially. For instance, one possible motivation for thinking that bias toward the near is rationally permissible is the idea that it is permissible to proportion your concern for your past and future selves to the degree to which they are psychologically connected (e.g. share the same or similar beliefs, desires, and intentions) with your present self, coupled with the observation that your near future selves are more psychologically connected with your present self than are your farther future selves. But even if this thought is on the right track, it would not motivate exponential discounting, for there is no reason to think that you undergo psychological change at a constant rate of n per cent per month (not to mention that exponential discounting requires caring more about your distant past selves than about your present self, even though your present self is obviously maximally psychologically similar to itself). Indeed, if you are tempted by the thought that bias toward the near must be grounded in facts about degrees of psychological connectedness, this would motivate thinking that your discounting should not be exponential.

Summing up this section, we began with the thought that those philosophers attracted to diachronic principles for credences might also be sympathetic toward diachronic principles for preferences. I suggested that Utility Conditionalization—a principle requiring that your ultimate preferences remain constant while other preferences change only in response to changes in your credences—was the most natural (indeed, probably the only natural) possible diachronic principle for preferences. Of course, any diachronic principle for preferences will face the same problems with respect to internalism about rationality and personal identity puzzle cases that face diachronic principles for credences like Conditionalization. But Utility Conditionalization faces other problems as well, most notably ruling out as irrational bias toward the future and bias toward the near, except for the special case where your time-bias involves exponential discounting.34

33 I owe this point to John Broome.


But the sort of time-bias given by exponential discounting has some highly counterintuitive consequences and is undermotivated (the only motivation being the appeal to exploitability considerations, which will be addressed in later chapters). I conclude that Utility Conditionalization is not a requirement of rationality. Moreover, in the absence of some improved principle (again, other than the vague injunction not to change your preferences too much or too often), I conclude that there are no diachronic principles for preferences. As with credences, all the principles for preferences are synchronic.

34 One might attempt to devise a diachronic principle for preferences that is compatible with time-bias by appealing to preferences over centered possible worlds, perhaps by saying that it is your preferences over maximally specific centered possibilities that should not change over time. But even if it is workable from a formal perspective and avoids proscribing time-bias, it will still face objections based on internalism and personal identity puzzle cases. Moreover, such a principle cannot be motivated by a Diachronic Tragedy Argument (Ch. ) in the way that Utility Conditionalization can, for conforming with this new norm will not prevent one from being predictably exploitable over time.


Against Reflection Principles

Time-Slice Rationality says not only that what attitudes you ought to have at a time doesn't depend on what attitudes you in fact have at other times, but also that what attitudes you ought to have at a time doesn't depend in any special way on what attitudes you believe you have at other times. That is, it endorses not only Synchronicity, but also Impartiality. Your beliefs about what attitudes you have at other times play the same role in determining what attitudes you ought to have now as your beliefs about what attitudes other people have. For this reason, Time-Slice Rationality conflicts with reflection principles. I will start with the epistemic case, which has been more widely discussed, before turning to a reflection principle for preferences.

Reflection for Beliefs

Van Fraassen's (1984) Reflection principle enjoins you to defer to the beliefs you anticipate having in the future. It says that if you believe that you will later have some belief, then you ought to now have that belief. In probabilistic terms, where P0 is your credence function at t0 and P1(H) = n is the proposition that at t1 you will have credence n in H, the principle states:

Reflection
It is a requirement of rationality that, for all H, P0(H | P1(H) = n) = n.
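A toy numerical model of the principle (the two possible future credences and their probabilities are arbitrary choices of mine): if the joint distribution over H and your anticipated t1 credence satisfies Reflection, then your current credence in H is the expectation of your anticipated future credence.

```python
# P(your t1 credence in H is n), for two possible values of n.
future = {0.2: 0.5, 0.8: 0.5}

# A joint distribution satisfying Reflection: P(H and credence-is-n) = n * P(credence-is-n).
for n, p in future.items():
    joint = n * p
    print(f"P0(H | P1(H) = {n}) = {joint / p}")   # recovers n, as Reflection demands

p_H = sum(n * p for n, p in future.items())
print("P0(H) =", p_H)   # 0.5: the expectation of the anticipated credence
```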

There is something right about Reflection. Suppose that there is an envelope on your desk with the results of your blood tests. But you are told that no matter what the paper inside of the envelope says, you will upon opening it be optimistic about your health. It seems that you ought now to be optimistic about your health, even though you haven’t seen the evidence contained in the envelope. Why wait to form a view if you know that, no matter the specifics of the results in the envelope, you will be optimistic about your health? Reflection, unlike Conditionalization, is a synchronic principle. It is a principle about how your beliefs at a time ought to be related to each other—in particular


how your first-order beliefs ought to be related to your higher-order beliefs, in this case beliefs about your future beliefs. Because Reflection is a synchronic principle about how your beliefs at a time ought to be related, it does not conflict with internalism about rationality any more than a principle that says that your beliefs at a time ought to be consistent. It is therefore not Synchronicity, but Impartiality, which makes it the case that Time-Slice Rationality must reject Reflection.

Reflection faces devastating counterexamples and problematically makes reference to personal identity over time. Start with the counterexamples. First are cases of anticipated future irrationality. Suppose you believe that you will go out drinking tonight, and you believe that while drunk you tend to overestimate your ability to drive safely. Then, it is rational for you to believe that tonight you will believe that you can drive home safely, but this does not mean that you should now believe that you will be able to drive home safely.1

The second type of counterexample to Reflection involves anticipated evidence loss. While sitting at breakfast eating cereal, you believe that ten years from now, you will be quite uncertain what you had for breakfast today. But you shouldn't now be uncertain about what you are having for breakfast, with the cereal bowl right in front of you!2

One might be tempted to object that these cases of anticipated irrationality and anticipated evidence loss are not counterexamples to Reflection, since Reflection is a principle about ideal rationality. And ideally rational agents would not suffer from future irrationality or lose evidence.3

1 Some suggested counterexamples to Reflection are similar to the case of anticipated drunkenness but do not involve any chemically induced mental impairment. Maher () imagines Persi, a very convincing fellow. You think that he "really has no idea how the coin will land, but has such a golden tongue that if you talked to him you would come to believe him". In Maher's case, you violate Reflection. And plausibly, if you think you will be susceptible to Persi's charm and wit in this way, you regard your future self as irrational, since rational people would not be hoodwinked by Persi. Relatedly, Briggs () suggests that changes in epistemic standards are a source of rational violations of Reflection. She imagines that you are deciding whether to enroll in the doctoral program of William James University, whose professors are all voluntarists about belief. At present you are agnostic about whether God exists, but you have some credence that if you go to WJU you will come to be certain of the existence of God. So, Reflection says that it is a requirement of rationality that Pnow(G | PWJU(G) = 1) = 1 (where G is the proposition that God exists). But, as Briggs argues, "you shouldn't treat your enrollment in William James University as evidence for God's existence". I think that this is likely also a case of anticipated future irrationality, although this could depend on the exact details of the case.
2 Note also that the problem with Reflection does not have to do only with forgetting, but also with evidence loss more generally. For instance, the Shangri-La case involved loss of evidence but may not have involved forgetting, at least on an intuitive understanding of forgetting.
3 Note that this is extremely implausible for the case of Shangri-La, where the evidence loss isn't the result of an imperfect memory, but rather the result of recognizing the counterfactual possibility of having your memory tampered with. It is also implausible that ideally rational agents would be


But we do not need actual future irrationality or actual evidence loss in order to generate counterexamples to Reflection. All that is needed to get violations of Reflection is that you believe that you will later be irrational or will have lost evidence. So a defense of Reflection requires that ideally rational agents be certain that they have perfect memories and will never be irrational. And this claim is overly strong. An ideally rational agent might be given powerful (albeit misleading) evidence that she will forget or become less than ideally rational. Being ideally rational is largely a matter of responding appropriately to evidence, so in such a case the ideally rational agent should be less than certain that she has a perfect memory and will always be ideally rational.4 And such an agent will in a wide range of cases rationally violate Reflection. These counterexamples show that Reflection must at least be modified to say that you ought to defer to the beliefs you believe you will later have unless you believe (i) that your future self will be irrational or (ii) that you will have lost evidence.

Modified Reflection
It is a requirement of rationality that, for all H, P0(H | P1(H) = n) = n, unless you believe that at t1 you will be irrational or will have lost evidence.

Whereas Reflection is now regarded as false by most, if not all, epistemologists, something in the vicinity of Modified Reflection is widely endorsed, and philosophers have objected to various epistemological theories on the grounds that they yield violations of Modified Reflection (or some closely related principle).5 I agree that Modified Reflection avoids the counterexamples facing Reflection, but it remains problematic in virtue of being insufficiently general. My claim, then, is that Modified Reflection is true, but it is not the whole truth about rational deference. We should look for a more fundamental, more general deference principle which subsumes Modified Reflection as a special case. Modified Reflection is insufficiently general on two counts. First, it is insufficiently general in virtue of being future-directed. Modified Reflection only applies to the beliefs you expect to have in the future. But there is nothing special about the future in this regard. immune from losing evidence as a result of factors beyond their control. Thanks to Douglas Portmore for raising this point. 4 Christensen () argues that even an agent who is ideally rational at a time probably ought to be less than certain of her own ideal rationality at that very time. 5 For instance, White () argues against Jim Pryor’s () dogmatist view on the grounds that it yields Reflection violations which do not involve anticipated future irrationality or lost evidence. And in White (), he argues on similar grounds against the rational permissibility of so-called “imprecise” or “mushy” credences (see Chapters  and  for discussion of mushy credences). Kotzen () argues against the objective Bayesian theory of Williamson () on the grounds that it yields violations of something like Modified Reflection.

i


Just as there are cases where you ought to defer to the beliefs you expect to have in the future, so there are cases where you ought to defer to the beliefs you think you had in the past. If you believe that ten years ago, you believed you were eating cereal, then you ought to believe now that you were eating cereal ten years ago. Modified Reflection, which is future-directed, should follow from a more general, and more fundamental, norm about deference which is time-symmetric.

Second, it is insufficiently general by being about the beliefs you believe you will later have. Just as sometimes you ought to defer to your anticipated future beliefs, you also often ought to defer to the beliefs that you think others have. If you think that the weatherman, who is better informed about meteorological evidence and more skilled than you at evaluating that evidence, believes it will rain, then you yourself ought to believe it will rain.

The arbitrariness on the part of Modified Reflection becomes especially clear in puzzle cases about personal identity over time. Whether Modified Reflection kicks in and instructs you to defer to the beliefs you believe some later person to have depends on whether this later person is your later self (or perhaps on whether you believe that this later person is your later self). But this means that when the identity facts become murky, it also becomes murky whether Modified Reflection applies. But once we fix your beliefs about how rational and well-informed this later person is, we have said all we need to in order to fix whether you ought to defer to her. Neither the deliberating agent nor we, the theorists, need to settle these facts about personal identity in order to determine how the agent ought to respond to evidence about a future agent's opinions.

In sum, Modified Reflection is true, but it is insufficiently general, since it is an essentially intrapersonal principle. It should follow from a more general, and more fundamental, deference principle which applies equally to the interpersonal case. As an analogy, Modified Reflection has the same status as a norm against killing innocent people dressed in jeans. It is true that you ought not kill innocent people who are wearing jeans, but not because they are wearing jeans. Instead, the fact that you ought not to kill jeans-wearing innocents falls out of a more general moral norm that makes no reference to jeans. What is wanted, therefore, is a general principle about whether and how to defer to the beliefs that you think someone else has, irrespective of whether this someone else is your past self, your future self, or some third party. After all, whether you ought to defer to someone should depend only on evidential considerations; that is, on how well-informed and rational that person's beliefs are. In Chapter  I argue for a principle of deference to expert opinion: to the extent that you believe that some agent is an expert relative to you and has a certain credence in a given proposition, you yourself ought to have that credence. Such a


principle of expert deference makes no reference to time or to personal identity and subsumes Modified Reflection as a special case.

Reflection for Preferences

I have so far focused on a reflection principle which applies to beliefs or credences. But insofar as such a principle is attractive, it is natural to expect a similar principle to hold for conative attitudes like desires and preferences.6 Such a principle for preferences would say:

Preference Reflection
It is a requirement of rationality that if you believe that you will later prefer A to B, then you now prefer A to B, unless you believe that you might be irrational in the future or have lost evidence.7

Preference Reflection has some initial plausibility. First, as already noted, many philosophers have accepted some sort of reflection principle for beliefs, and so there is some prima facie reason to think there would be an analogous principle for desires. Second, this principle gives intuitively correct results in many cases (though not all, as I will explain below). When I wake up feeling groggy and think about whether to get up and go to the gym or instead catch another hour of sleep, I often motivate myself by reflecting on the fact that if I go to the gym, I'll be glad I did so, whereas if I sleep in, I will regret it. Preference Reflection vindicates this sort of "I'll be glad I did it" reasoning.8 Third, and relatedly, one might think, with Nagel (1970), that something like Preference Reflection is necessary to underpin the rationality of prudence, understood as something like practical foresight. Nagel argues that "there is reason to do not only what will promote that for which there is presently a reason, but also that for which it is expected that there will be a reason". Nagel is making a claim about reasons in general, but he is clear that this thesis applies in particular to reasons stemming from desires, so that if you believe that you will

6 See Arntzenius () and Harman () for discussion of reflection principles for conative attitudes. 7 To make this principle more closely parallel to van Fraassen’s Reflection principle, one might want to express it in terms of credences and utilities rather than beliefs and preferences. But I will argue in Section .. that this cannot be done unless you are certain about what preferences you will have if you are rational in the future. 8 See Harman () for further discussion, though note that Harman is ultimately arguing against Preference Reflection, in part due to the problems with bootstrapping discussed below.


later have some desire which will provide a reason for you to act in a certain way, then you now have a reason to act in that way.9

Personal Identity

Despite some prima facie plausibility, Preference Reflection faces a series of devastating problems. First, it should by now be obvious that Preference Reflection will face problems with puzzle cases for personal identity over time. It seems that in these myriad puzzle cases, neither the agent nor the theorist should have to settle the metaphysical facts about identity in order to determine what the agent in the scenario ought to desire. Take Double Teletransportation. If Preference Reflection is a requirement of rationality, then whether Pre must defer to the preferences she expects Lefty (or Righty) to have depends on whether she is identical to one, to the other, to both, or to neither. But plausibly, we don't need to settle these identity facts in order to settle what Pre ought to prefer. Insofar as it is permissible to have a distinctive sort of self-concern, then Pre has reason to care about the preferences of Lefty and Righty regardless of whether she bears the relation of personal identity to either or both of them. So plausibly, Pre's reasons to promote the desires she expects Lefty or Righty to have should fall out of more general principles which, unlike Preference Reflection, will make no reference to the relation of personal identity over time.

Of course, we could get around this particular problem by modifying Preference Reflection so that it says that you ought to defer to the preferences that you believe future psychological continuants of you will have. I have previously argued that this strategy of replacing personal identity with psychological continuity or R-relatedness is problematic and unmotivated in the context of epistemological principles like Conditionalization and van Fraassen's Reflection, but it actually seems fairly natural in the context of principles for preferences. So I would not want to rest my case against Preference Reflection (or related principles) on considerations of personal identity puzzle cases alone. But fortunately for my purposes, the case against Preference Reflection is overdetermined, for it faces a number of other devastating problems.10

9 See Bratman () for a defense of a related practical reflection principle, which he calls Standpoint Reflection. It is a reflection principle not for a single kind of practical attitude like preferences or intentions, but for the agent's practical standpoint as a whole, which incorporates many elements of an agent's psychology. I suspect that Bratman's principle will face some of the problems I raise below, but not all of them (for instance, as Bratman (p.c.) pointed out, it may not face a version of the "No Formal Analogue" problem I raise below). Note that Bratman does not defend the Standpoint Reflection principle as a norm of rationality, but rather as a "structural principle that is part of the metaphysics of planning agency" which "says how the contours of a planning agent's present standpoint are potentially shaped by certain expectations of future attitudes".
10 Thanks to Dan Greco for helpful discussion of this issue.



Bootstrapping

Preference Reflection yields unattractive consequences in cases where you believe that what you will later prefer depends on what you will do now. Suppose that you are deliberating about whether to travel to Argentina or to Brazil. You believe that whatever you do, you will be very glad that you did that thing and not the other. If you go to Argentina, then while you are hiking in Patagonia and reminiscing about the fantastic culture of Buenos Aires, you will be very glad you went to Argentina rather than Brazil. You will have a preference for having gone to Argentina over having gone to Brazil. And you believe that if you go to Brazil, then while you're sitting in the Sambadrome at Carnaval and remembering the brilliant wildlife of the Amazon, you will be glad you chose Brazil over Argentina. Preference Reflection says that which trip you ought to prefer now depends on which trip you believe you will take. If you find yourself leaning toward Argentina and hence believe that you will choose Argentina over Brazil, then you believe that you will later prefer Argentina over Brazil, and so by Preference Reflection you ought now to prefer Argentina over Brazil. And if you think you will wind up going to Brazil, then you ought to believe that you will later prefer Brazil over Argentina, and hence by Preference Reflection you ought now to prefer Brazil over Argentina. It seems, then, that Preference Reflection endorses the following sort of bootstrapping reasoning: "I believe I will go to Argentina (Brazil), and so I ought to go to Argentina (Brazil)." But intuitively, this kind of bootstrapping is irrational, for you know that whichever trip you take, you will find yourself happy and glad you did it.11

Worse, suppose that you believe that whatever you do, you will wish you did the other thing. If you go to Argentina, then you will think fondly of the amazing time you could have had in Brazil and wish you had gone there instead. And if you go to Brazil, you will wish you had gone to Argentina. Then, if you believe you will go to Argentina, then you ought to believe you will later prefer Brazil over Argentina,

11 Harman () discusses a case with just this structure. A mother gives birth to a child who is deaf. The doctors tell her that it is possible to cure the child’s deafness with a cochlear implant, and the mother must decide whether to cure the child’s deafness or not. She knows that if she cures the baby’s deafness, she will be glad she did it. She will later rationally prefer having cured the baby to having refused the treatment. But she also knows that if she refuses the treatment, she will be glad she did so. The child will very likely grow up to be a happy deaf adult and will have numerous close relationships in the deaf community. The child might even be glad to be deaf (as evidence suggests many deaf adults are). And, of course, she will love the child as he or she is. Now, Preference Reflection entails that whether she should now prefer that she cure the child’s deafness or not depends on what she believes she will do. If she starts out believing that she will cure the child’s deafness, she should prefer that she do so, whereas if she starts out believing that she will refuse the treatment, she should prefer that she do that. Again, this seems like a bad result.


so by Preference Reflection you ought now prefer Brazil over Argentina. And similarly, if you believe you will go to Brazil, then you ought now prefer Argentina over Brazil. So Preference Reflection endorses reasoning as follows: "I believe I will go to Argentina (Brazil), and so I ought to go to Brazil (Argentina)." Once again, this sort of reasoning seems crazy.12

One might attempt to defend Preference Reflection by saying that in these cases, the anticipated preferences are irrational. This may be right, but it's worth noting that it does rely on the thought that it's irrational to care non-instrumentally about things other than pleasure and pain. If not, then when you have a wonderful time in Argentina, it may be rational to value those very experiences, which you wouldn't have had, had you gone to Brazil (though of course you would have had different memorable experiences). I value the experiences that I have had despite recognizing that I could have had other equally pleasant experiences. I am glad that I have lived my life, rather than any number of other lives which contain the same amounts of pleasure and pain. If this is right, then Preference Reflection cannot avoid highly counterintuitive implications in cases where you believe that what you will later desire depends on what you will now do.

Time-Bias

As we saw in the previous chapter, being biased toward the future results in shifts in your preferences, and so knowing that you are time-biased is incompatible with Preference Reflection. To see this, consider the case discussed in the previous chapter, in which you are ignorant of which of two courses of surgery you will have to undergo:

The Early Course: You will have four hours of painful surgery on Tuesday and one hour of painful surgery on Thursday.

The Late Course: You will have no surgery on Tuesday and two hours of painful surgery on Thursday.

On Monday, you prefer the Late Course over the Early Course, but you know that, being biased toward the future, you will on Wednesday prefer the Early Course over the Late Course. Hence, you violate Preference Reflection. Similarly, if you are biased toward the near, your preferences will shift (unless, as explained in the previous chapter, you discount exponentially). And so, if you are biased toward the near and recognize this fact about yourself, you expect to have preferences in the future which you do not have now. Hence, you violate Preference Reflection. Insofar as we think that some form of non-exponential time-bias is at least rationally permissible, we must reject Preference Reflection.

12 See Hare and Hedden (forthcoming) for further discussion of these sorts of cases.
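A sketch of the Monday-to-Wednesday reversal in the surgery case, on two simplifying assumptions of mine: pain is measured by hours of surgery, and a future-biased evaluator gives past pain no weight and future pain full weight.

```python
# Hours of painful surgery under each course (the figures from the case above).
early = {"Tue": 4, "Thu": 1}
late = {"Tue": 0, "Thu": 2}

def remaining_pain(course, today):
    # A future-biased evaluator: only pain still in the future counts.
    order = ["Mon", "Tue", "Wed", "Thu"]
    return sum(h for day, h in course.items() if order.index(day) > order.index(today))

for today in ("Mon", "Wed"):
    e, l = remaining_pain(early, today), remaining_pain(late, today)
    choice = "Late" if l < e else "Early"
    print(f"{today}: Early={e}h ahead, Late={l}h ahead -> prefers the {choice} Course")
```

On Monday all the pain lies ahead, so the Late Course (2 hours) beats the Early Course (5 hours); on Wednesday the Tuesday surgery is past, so the Early Course (1 hour ahead) beats the Late Course (2 hours ahead).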



Arbitrary Asymmetries

Preference Reflection, like Reflection for credences, is asymmetric. It privileges the preferences of your future selves over the preferences of other people, and it privileges your future preferences over your past preferences. Thus, as with van Fraassen's Reflection, Preference Reflection is arbitrarily asymmetric with respect to both the you/other distinction and the past/future distinction. Parfit (1984) makes this point while considering an argument against the rationality of time-bias. He imagines the following accusation against one who is biased toward the near:

You do not now regret your bias towards the near. But you will. When you pay the price—when you suffer the pain that you postponed at the cost of making it worse—you will wish that you did not care more about your nearer future. You will regret that you have this bias. It is irrational to do what you know that you will regret.

This objection is grounded in something like Preference Reflection, for it suggests that if you know you will prefer one thing to another (so that if you did the other, you would regret it), then you ought now prefer the one to the other. But Parfit rejects this argument. He writes of the agent who is biased toward the near that:

he may regret that in the past he had his bias towards the near. But this does not show that he must regret having this bias now. A similar claim applies to those who are self-interested. When a self-interested man pays the price imposed on him by the self-interested acts of others, he regrets the fact that these other people are self-interested. He regrets their bias in their own favour. But this does not lead him to regret this bias in himself.

Parfit is in effect defending time-bias by arguing that Preference Reflection is arbitrary, since it applies only with respect to desires that you anticipate you will later have. If we reject an analogous principle which applies in the interpersonal case, telling you to have the desires or preferences that you believe others have, we should likewise reject Preference Reflection. Preference Reflection is also asymmetric in virtue of applying only with respect to desires you expect you will later have. If you ought to adopt as your own the desires you expect to have in the future, ought you also to adopt the desires you believe you had in the past? It seems doubtful. Parfit (1984) imagines that in his youth he wanted more than anything to be a poet. And it was not as if he wanted to be a poet only if he would still want this in the future, when the time for


writing poetry arrived. But he no longer desires to be a poet. It is not that his value judgments have changed; he has not decided that poetry is frivolous or pretentious and so he does not regard his youthful desire as irrational in any way. Parfit thinks that in such a case he has no reason to pursue poetry, not even a reason which is overridden by other considerations. The mere fact that he believes he once desired to become a poet gives him no reason now to have any desire to be a poet. If this is right, it casts doubt on Preference Reflection. If you have no reason to defer to the desires you believe you once had, why should it be that you ought to defer to the desires you believe you will have in the future? So as with van Fraassen's Reflection, Preference Reflection is doubly asymmetric, and problematic for that reason.

No Fine-Grained Analog

In the case of reflection principles for doxastic attitudes, there was an analog for fine-grained attitudes (credences) of a more easily statable principle for coarse-grained attitudes (binary beliefs). The coarse-grained principle stated that you ought to be such that if you believe you will later believe H, then you now believe H, and the fine-grained principle stated that you ought to be such that P0(H | P1(H) = n) = n. The principle of Preference Reflection, put in terms of coarse-grained attitudes, states that you ought to be such that if you believe you will later prefer that H, then you now prefer that H. Is there an analog of Preference Reflection which is put in terms of fine-grained attitudes like credences and utilities? The natural formal analog would state that your utility for A, conditional on the claim that you will later have utility function Ui, should equal Ui(A). Where the Aj form a partition of A, your conditional utility for A given E is defined thus: U(A | E) = Σj U(Aj) × P(Aj | A ∧ E).

Utility Reflection
It is a requirement of rationality that, unless you believe you might be irrational or have lost evidence in the future, then for all A, Unow(A | Ulater = U) = U(A).

Utility Reflection entails that if you do not believe that you might be irrational or have lost evidence in the future, your current utility for A should equal your expectation of your future utility for A:

Unow(H) = Σi Ui(H) × P(Ulater = Ui)

Unfortunately, Utility Reflection is unworkable unless it is never rationally permissible to change your ultimate preferences; that is, unless Utility Conditionalization is true. For unless this is the case, Utility Reflection will fall prey to a version of the


problem of interpersonal comparisons of utility.13 I will show how this problem arises in the present context and argue that, while other instances of the problem may be more easily dealt with, in this case it is insoluble. This is because Utility Reflection assumes that we can talk about the utility you will later assign to a proposition, or about the utility function you will later have. But your preferences at a time do not determine a unique utility function. Rather, they determine a utility function which is unique at most up to positive affine transformation. Utility function U represents your preferences if and only if U′ = aU + b (where a > 0) represents your preferences as well. More intuitively, the zero point and the scale of a utility function are arbitrary; you can move the zero point and change the scale without changing which preferences the utility function represents.

Changing the zero point and scale of a utility function does not change anything when only one utility function is in play. If you rank options by expected utility relative to a credence function P and a utility function U, the ranking will be exactly the same if you calculate expected utilities relative to credence function P and utility function U′ = aU + b (where a > 0). But changing the zero point and scale of a utility function does matter when multiple utility functions are in play, as when you are aggregating utilities. To see this in the context of Utility Reflection, suppose that you are 0.5 confident that you will later have preferences which are representable by utility function U1 and 0.5 confident that you will later have preferences which are representable by utility function U2. Utility Reflection says that your current utility function should be Ua = 0.5 × U1 + 0.5 × U2. But if we change the scale of U1, say by multiplying it by 2, to arrive at a different utility function U3 which represents the same preferences, Utility Reflection now gives a very different answer about what your current utility function ought to be. And the utility function it now tells you to have will not be a positive affine transformation of the utility function we got using U1. We can see this as follows:

Let Ua = 0.5 × U1 + 0.5 × U2.
Then Ua × 2 = U1 + U2 is a positive affine transformation of Ua.
Let Ub = 0.5 × U3 + 0.5 × U2 = 0.5 × (2 × U1) + 0.5 × U2.
Then Ub × 2 = 2 × U1 + U2 is a positive affine transformation of Ub.
But 2 × U1 + U2 is not a positive affine transformation of U1 + U2.

So plugging U3 into Utility Reflection results in a recommended utility function which is not a positive affine transformation of the utility function that Utility Reflection recommends if we plug in U1 instead.

13 Arntzenius () is also aware of this problem in his discussion of reflection principles for desire-like attitudes, but he does not address it in detail.
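The inconsistency can be exhibited with concrete numbers (the outcome utilities below are arbitrary stand-ins of mine): U1 and U3 = 2 × U1 represent exactly the same preferences, yet feeding one rather than the other into Utility Reflection reverses the recommended preference between A and B.

```python
# Two candidate representations of your possible future preferences.
U1 = {"A": 2.0, "B": 1.0}
U2 = {"A": 0.0, "B": 1.5}
U3 = {o: 2 * u for o, u in U1.items()}   # same preferences as U1, rescaled (a=2, b=0)

def reflect(Ux, Uy, p=0.5):
    # Utility Reflection's recommendation: the credence-weighted average of the two functions.
    return {o: p * Ux[o] + (1 - p) * Uy[o] for o in Ux}

Ua, Ub = reflect(U1, U2), reflect(U3, U2)
print(Ua, "-> prefer", max(Ua, key=Ua.get))   # {'A': 1.0, 'B': 1.25} -> prefer B
print(Ub, "-> prefer", max(Ub, key=Ub.get))   # {'A': 2.0, 'B': 1.75} -> prefer A
```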


Therefore, plugging U3 rather than U1 into Utility Reflection results in different recommendations about what preferences you ought to have now! But the choice of U1 over U3, or vice versa, is completely arbitrary; there are no grounds for picking out one of the utility functions that represents the preferences you might later have over any other of the infinitely many utility functions which represent those same preferences. For this reason, Utility Reflection gives inconsistent recommendations about what your current preferences ought to be.

Of course, this problem will not arise if you must have the same set of ultimate preferences in the future in any case in which you are rational. This would be the case if rational agents never change their ultimate preferences, as Utility Conditionalization entails. For then you know that if you are rational in the future, you will have the same ultimate preferences that you have now. Then it is natural to stipulate that if you have the same ultimate preferences in two possible cases, your preferences in those two cases should be represented by utility functions which agree on the values they assign to maximally specific possibilities. That is, if you have the same ultimate preferences in case 1 and case 2, then it is natural to stipulate that once we arbitrarily fix on one particular utility function U out of the set of utility functions that represent your case 1 preferences, we must choose a utility function U′ out of the set of utility functions representing your case 2 preferences, where for all possible worlds wi, U(wi) = U′(wi). Same ultimate preferences mean same utilities for possible worlds. It is worth emphasizing that this is a stipulation, and not something that follows from the decision-theoretic framework itself. Still, if we make this stipulation, then Utility Reflection is in good shape provided that Utility Conditionalization (or some more general principle that entails it) is true. Indeed, given Utility Conditionalization and the stipulation that same ultimate preferences mean same utilities for possible worlds, Utility Reflection will follow trivially from Modified (Belief) Reflection. But if Utility Conditionalization is false, and you can have different ultimate preferences in the future without being irrational, then Utility Reflection faces the aforementioned problem.

This problem is a version of the problem of interpersonal comparisons of utility. If utility functions are thought of as representations of preferences, there is no sense to be made of whether my utility for H is greater than yours. This, of course, is problematic for versions of consequentialism which say that you ought to maximize total well-being in the world, if well-being is understood as preference satisfaction, for different choices of utility functions to represent people's preferences will result in different conclusions about what you ought to do.


The remainder of this section looks at possible solutions to the problem of interpersonal comparisons of utility and concludes that there is no solution to that problem, if utilities are understood as representations of preferences (which is how utilities must be understood in order for Utility Reflection to be a fine-grained analog of Preference Reflection). As the discussion is somewhat long and involved, impatient readers are welcome to skip the remainder of this section.

The problem of interpersonal comparisons of utility may be less problematic if we think of utility functions as representing something like levels of happiness or as representing betterness relations. If utilities represent levels of happiness, then intrapersonal comparisons of utility (comparisons of utility between the same person at different times) may be easier to make than interpersonal comparisons, since you have better access (in particular, through memory) to facts about how happy something made you in the past than to facts about how happy that thing makes someone else. But even if interpersonal comparisons of levels of happiness are empirically more difficult, or even impossible, to make, this does not entail that they are meaningless. It may be difficult or impossible to determine whether the pleasure or happiness I get from eating chocolate ice-cream is more intense than the pleasure or happiness you get from eating chocolate ice-cream, but this does not mean that there is no fact of the matter about whose pleasure or happiness is more intense. To say this is just to reject verificationism, as most philosophers do nowadays. Similarly, if utilities represent facts about betterness, then there is no reason to think that interpersonal comparisons of utility are meaningless. There may be facts about whether one state of affairs is better for one person than another state of affairs is for a different person, even if such facts are difficult to determine.

In sum, if utility functions represent phenomenological states like levels of happiness, or alternatively if they represent facts about betterness, then both the problem of interpersonal comparisons of utility and the problem of intrapersonal comparisons of utility can be solved (though the latter may be empirically easier). But crucially, because Utility Reflection is meant to be a fine-grained analog of Preference Reflection, the utility functions involved in Utility Reflection must be interpreted as representing possible future preferences, rather than as representing levels of happiness or goodness. Of course, if one thought that the strength of a preference was something phenomenological (e.g. the warm feelings one gets when one contemplates the prospect of the preference being satisfied), then one could solve the problem of interpersonal comparisons of utility in exactly the same way that it can be solved if we interpret utilities as representing levels of happiness or pleasure. There would be some fact of the matter, having to do with intensities of warm fuzzy feelings, about whether my desire for chocolate


ice-cream is stronger than your desire for chocolate ice-cream, or stronger than my past desire for chocolate ice-cream, even if it is difficult or impossible to verify this. But it is doubtful whether strengths of preferences can be interpreted as grounded in phenomenology. Compare Ramsey's () discussion of degrees of belief:

It could well be held that the difference between believing and not believing lies in the presence or absence of introspectible feelings. But when we seek to know what is the difference between believing more firmly and believing less firmly, we can no longer regard it as consisting in having more or less of certain observable feelings; at least I personally cannot recognize any such feelings. The difference seems to me to lie in how far we should act on these beliefs: this may depend on the degree of some feeling or feelings, but I do not know exactly what feelings and I do not see that it is indispensable that we should know.

Similarly, I find it doubtful that my own preferences or strengths of desires involve introspectable phenomenological features. For instance, I very strongly desire not to die tomorrow, but I do not get any warm fuzzy feeling when I contemplate the prospect of surviving the week. Indeed, if anything, I get more of a warm fuzzy feeling when I think about chocolate ice-cream than when I think about surviving the week, even though I have a much stronger desire for survival than for ice-cream. Nor is it the case that strengths of preferences or desires always track the extent to which you would feel disappointment or suffering. I very strongly desire to survive the week, but as Epicurus observed, I won't feel anything if that desire is frustrated.

If utilities are interpreted as representing strengths of preferences (as opposed to levels of pleasure or happiness), and preferences are understood dispositionally rather than phenomenologically, then I think that the problem of inter- and intrapersonal comparisons of utility may be in principle insoluble. I think that once we grant that preferences are not interpreted phenomenologically, we should think that while preferences and ordinal preferences are real, cardinal utilities are merely a device to represent your preferences over not just maximally specific possibilities, but over gambles as well. Suppose you prefer A to B to C. The fact that the difference between your utility for A and your utility for B is equal to the difference between your utility for B and your utility for C just amounts to something like the fact that you would be indifferent between getting B for certain and a gamble with a 50 per cent chance of yielding A and a 50 per cent chance of yielding C. On this view, desires, ordinal preferences, and even ratios of utility differences may be real, but the choice of the zero point and scale of your utility function is arbitrary and not determined by anything in your psychology. Note that on this interpretation of utility scales, which I take to be a fairly standard one in decision theory, both the problem of interpersonal comparisons of utility and the problem of intrapersonal but intertemporal comparisons are in principle insoluble, since there is no fact of the matter what any given time-slice's zero point and scale are.14
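A small check of that representational claim (the utility numbers are arbitrary choices of mine): with equal utility differences, B is valued exactly at the 50/50 gamble between A and C, and this equivalence survives any positive affine rescaling aU + b, which is why nothing in the preferences pins down a zero point or scale.

```python
U = {"A": 9.0, "B": 5.0, "C": 1.0}   # U(A) - U(B) == U(B) - U(C)

def even_gamble(V):
    # Expected utility of a 50 per cent chance of A and a 50 per cent chance of C.
    return 0.5 * V["A"] + 0.5 * V["C"]

for a, b in ((1, 0), (3, 7), (2, -4)):      # arbitrary positive affine transformations
    V = {o: a * u + b for o, u in U.items()}
    print(V["B"], even_gamble(V), V["B"] == even_gamble(V))   # always equal
```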


The reason why there is no corresponding problem of inter- or intra-personal comparison of degrees of belief is that there are clear upper and lower bounds—certainty of truth and certainty of falsehood—to how confident any person can be in a proposition. The choice of 0 and 1 as numbers to represent these extremal degrees of belief is largely conventional.15 And so whenever person A is certain of P and person B is certain of Q, we just set A's degree of belief for P and B's degree of belief in Q at 1. But while it is part of the concept of belief that there are upper and lower limits on how confident you can be in a proposition, it is not part of the concept of desire that there are upper and lower limits on how strongly you can desire something. If there were, then we could take anything that any person "maximally desires" and assign it some arbitrary utility (1, say) as the conventional upper bound, take anything that any person "maximally disprefers" and assign it some arbitrary utility (−1, say) as the conventional lower bound, and fill in everything else accordingly.16 But desire isn't like that. There is no motivation (apart from wanting to solve the problem of inter- and intra-personal comparisons of utility) for thinking that there are upper and lower bounds for how strongly a person can desire something, nor for thinking that such upper and lower bounds should be the same for everyone. To take just one example, it seems that I could prefer more days in heaven to fewer, such that I have no diminishing

This is admittedly a radical conclusion. Is there really no sense in which my preference for living for another week is stronger than your preference for having a cup of coffee? I am tempted to think any truth in this comparative claim derives not from some fact about your mental state and mine, but rather from a normative judgment that in ethical decision-making, we should give your preference for having a cup of coffee less weight than my preference for living another week. So perhaps I can account for our inclination to make comparative claims about strengths of preferences by interpreting them as proposals for how to weight our competing interests in determining what is morally required of us. 15 Of course, there are good mathematical reasons for choosing  and  as the upper and lower bounds. In particular, the axiom of finite additivity and the definition of probabilistic independence seem to depend on using the [, ] scale. Standardly, your degree of belief in the disjunction of two mutually exclusive propositions should be the sum of your degrees of belief in each of the propositions, and your degree of belief in the conjunction of two independent propositions should be the product of your degrees of belief in each proposition. But if the scale were [−, ], for instance, then your degrees of belief in H and ¬H could each be  (the midpoint of the scale), but by finite additivity, your degree of belief in the necessary disjunction H ∨ ¬H would also be . And if the scale were, say, [, ], you could have two independent propositions P and Q, each with degree of belief  (the midpoint of the scale), but by the standard definition of independence, your degree of belief of the conjunction P ∧ Q should still be . So if we were to choose a different scale to represent probabilities or degrees of belief, we would have to replace finite additivity and the standard definition of independent with alternative principles which would almost certainly be far less elegant. So, while the [, ] scale is conventional, there are good reasons for choosing this convention over alternatives. 16 This would be a version of the so-called “zero-one rule.” See Hausman () for discussion.

i


marginal utility for days spent in heaven. I prefer 1,001 days to 1,000 days just as strongly as I prefer 11 days to 10 days. If I can have preferences like that (and it certainly seems that I could), then my utility function must be unbounded.17

Another interesting but ultimately mistaken proposal for solving the problem of inter- and intra-personal comparisons of utility (again, under a preference interpretation of utility) is due to Harsanyi (). Harsanyi introduces the notion of an extended alternative, which is a pair consisting of a state of affairs and a set of personal characteristics. So, an extended alternative could be something like being a runner with a personal dislike of lactic acid (bad) or being a philosopher while possessing great clarity of thought and an appreciation of solitude (good). Harsanyi thinks that everyone has the same extended preferences and that this allows us to make interpersonal comparisons of utility, even under a preference interpretation of utilities, by stipulating that everyone's utility function assigns the same utilities to particular extended alternatives, and filling everything else out accordingly. But there is every reason to think that people do not in fact have the same extended preferences.18 Broome () makes this point clearly:

I myself prefer to live the life of an academic, with my own academic characteristics, even in the conditions allotted to academics in contemporary Britain, to being a financial adviser living in the conditions allotted to financial advisers. I would expect a financial adviser,

17 One might make the following proposal for fixing the zero points and scales of utility functions, suggested by Sepielli () in a different context. Start with the fact that ratios of utility differences are independent of the choice of zero point and scale. Then, the idea is to find three propositions A, B, and C such that the two people a and b (they could be two time-slices of the same person) both prefer A to B to C and are the same with respect to the ratio of the difference between the utility of A and the utility of B, and the difference between the utility of B and the utility of C (that is, the value of [U(A) − U(B)]/[U(B) − U(C)] is the same for each person). Then, the proposal is to arbitrarily pick a number x as the utility each assigns to A and fill everything else out accordingly. That is, we find three propositions such that the two agents have the same ordinal preferences among them and agree on the ratios of the utility differences between them, and then assign to a and b utility functions which assign the same numbers to these three propositions, so that Ua (A) = Ub (A), Ua (B) = Ub (B), and Ua (C) = Ub (C). But unfortunately, this solution is not only unmotivated, but also inconsistent. Suppose that agents a and b agree not only on the ordinal ranking and ratio of utility differences with respect to A, B, and C, but also on the ordinal ranking and ratio of utility differences with respect to D, E, and F. And suppose that for a, the utility difference between A and B is very large relative to that between D and E, while for b, the utility difference between D and E is very large relative to that between A and B. Then, if we run Sepielli’s procedure using A, B, and C as our privileged triplet of propositions (so that Ua (A) = Ub (A), Ua (B) = Ub (B), and Ua (C) = Ub (C)), we will get very different results than if we run it using D, E, and F as the privileged triplet (so that Ua (D) = Ub (D), Ua (E) = Ub (E), and Ua (F) = Ub (F)). 18 Note that denying that people have the same preferences over extended alternatives does not entail that people differ with respect to how good a given extended alternative would be for them, unless we are committed to a preference-satisfaction theory of well-being.


Harsanyi seems to defend his claim that everyone has the same extended preferences by appeal to the fact that "different individuals' behavior and preferences are at least governed by the same basic psychological laws" (Harsanyi (, )). Let c be a variable which takes as possible values all of the relevant causal factors which result in agents having the particular preferences that they do, things like upbringing, genes, age, physical and psychological abilities, and the like. Now, let Ui(−) = Vi(−; c) be the utility function (or rather, one of the family of utility functions) that individual i would have had if causal factors c had obtained. And as Harsanyi notes, if the same basic psychological laws govern everyone's preferences and behavior (so that if two individuals were subject to exactly the same causal factors, they would have the same preferences and behavior), then all differences between their utility functions Ui(−) = Vi(−; ci) and Uj(−) = Vj(−; cj) are due to the differences between the causal factors ci and cj, "and not to differences between the mathematical form of the two functions Vi and Vj." Put briefly: everyone has the same function Vi, since everyone's preferences are governed by the same psychological laws.

But it would be illegitimate to leap, as Harsanyi seems to do, from the true claim that everyone has (or is subject to) the same function Vi from causes to preferences to the claim that everyone has the same extended preferences. As Broome () argues, this leap rests on a confusion between objects of preference and causes of preference. In the function Vi(−; −), it is important to emphasize that what comes after the semicolon is a slot for causes of preferences, not objects of preferences (though of course one can also have preferences over factors that are causally relevant to one's preferences). The fact that Vi(A; cj) > Vi(B; ck) does not mean that everyone prefers A's obtaining when causal factors cj obtain over B's obtaining when causal factors ck obtain.

For instance, let z refer to the causal condition of having been turned into a zombie. Let x refer to feasting on the flesh of the living and y refer to refraining from doing so. Since zombies desire nothing more than to feast on the flesh of the living, V(x; z) > V(y; z). Being subject to the causal condition of having been zombified would result in one's preferring feasting on flesh to not doing so. But the non-zombies among us generally prefer the extended alternative < y, z > to the extended alternative < x, z >. As a non-zombie, I prefer that, were I to become a zombie, I refrain from feasting on the living, even though, were I to become a zombie, I would prefer to give in to feasting on the living.

So Harsanyi's function Vi(−; −) may be the same for everyone, but this is irrelevant, since it does not represent anyone's (extended) preferences; instead it represents the way in which their differing preferences are governed by causal factors. In sum, Harsanyi's appeal to extended preferences fails to solve the problem of interpersonal comparisons of utility (again on a preference interpretation of utility). We do not all have the same extended preferences, even though we are the same with respect to how causal factors determine our extended preferences.

No doubt there are other attempts to solve the problem, but an exhaustive survey would go far beyond the scope of this book. Nonetheless, having looked at the most prominent proposed solutions and found them wanting, I conclude that the problem is insoluble on the preference interpretation of utility functions, which is the interpretation relevant here. Interpreted this way, it is natural to think that while desires and ordinal preferences may be "psychologically real," the zero points and scales of utility functions have no psychological reality; there is no fact of the matter whether the agent's utility function is really U as opposed to aU + b (a > 0). And ultimately, it is this fact that means that the problem of inter- or intra-personal comparisons of utility cannot be solved.

If I am right, then Utility Reflection is unworkable unless rational agents must always have the same ultimate preferences. There is no analog of Preference Reflection that deals with fine-grained attitudes like credences and utilities instead of beliefs and preferences.19

Where does this leave us? In Chapter , I will suggest that Utility Reflection can be salvaged if we adopt a strong uniqueness thesis for preferences, according to which everyone is rationally required to have the same ultimate preferences, preferring one thing to another just in case the one is better than the other.

19 What about a formalization of Preference Reflection that replaces talk of beliefs with talk of credences, but does not likewise replace talk of preferences with talk of utilities? There are two potential problems. First, the spirit behind Preference Reflection is such that whether you ought now to prefer A to B or B to A depends not only on your credence that you'll prefer A to B and your credence that you'll prefer B to A, but also on how strongly you think you'll later prefer A to B or B to A. But talk of strengths of preferences, I have suggested, makes sense only in the context of a utility function. Second, suppose we ignore strengths of preferences and say only that if your credence that you'll later prefer A to B is above some threshold n, then you ought now prefer A to B. This sort of proposal threatens to yield intransitive preferences. For instance, suppose that you have credence 1/3 that you'll prefer A to B to C, 1/3 that you'll prefer B to C to A, and 1/3 that you'll prefer C to A to B. Then, you have credence 2/3 that you'll prefer A to B, 2/3 that you'll prefer B to C, and 2/3 that you'll prefer C to A, and so if our threshold n is below 2/3, you will be required to have intransitive preferences and prefer A to B, B to C, and C to A. Nor can we avoid the problem altogether by raising the threshold n, for more complicated cases involving preferences over more propositions will still yield intransitivity. This is an instance of the problem of judgment aggregation. See List and Pettit () for discussion.

As I noted above, if you are certain that all possibilities in which your future self is rational involve having the same ultimate preferences, then we can solve the problem of interpersonal comparisons of utility by requiring that each of your possible future preference orderings be represented by a utility function, all of which assign the same utility to maximally specific possibilities. Interestingly, such a uniqueness thesis for preferences would yield Utility Conditionalization and Utility Reflection as instances of more general, impersonal, and synchronic principles which require that everyone, at all times, have the same ultimate preferences, namely the uniquely rational ones. Of course, this is a very strong claim about the rationality of preferences. I will give my best attempt at motivating it in Chapter  and will tentatively endorse it. If you are not convinced, then Utility Conditionalization and Utility Reflection should be rejected outright, but if you are sold, then they can be subsumed under more general time-slice-centric principles.

The Diachronic Tragedy Argument

We have seen that there are powerful objections to diachronic principles and reflection principles, both for beliefs and for preferences. However, there is also a powerful argument in their favor. Violating these principles leaves you vulnerable to predictable exploitation over time. I call this phenomenon of predictable exploitation over time Diachronic Tragedy.

In a tragedy, the protagonist suffers some misfortune. What makes this misfortune tragic is that it is foreseeable well before it occurs. In some tragedies, the misfortune is foreseeable only by the audience. But in others the misfortune is in some sense foreseeable by the protagonist herself. The protagonist can foresee that her own views and desires will drive her to engineer her ruin but nonetheless fails to depart from this disastrous course. There is something particularly bad about this second sort of tragedy. It is one thing to suffer misfortune through no fault of your own, where you couldn't have avoided disaster even if you tried. But if you head down a disastrous course knowing full well that it is disastrous, then it seems there must be something wrong with you. It is therefore natural to think that if some type of attitude—such as attitudes that violate diachronic principles or reflection principles—can lead to this sort of exploitation, then it is irrational to have such attitudes.

The Diachronic Tragedy Argument, as I will call it, concludes that some attitude is irrational from the premise that it threatens to yield this sort of exploitability over time. It is the strongest and most famous argument in favor of the kinds of principles I reject. In this chapter, I begin by describing how violations of Conditionalization, Utility Conditionalization, Reflection, and Preference Reflection give rise to predictable exploitation over time, or Diachronic Tragedy. But there are a number of other instances of Diachronic Tragedy that have been discussed in the literature. Unfortunately, they have typically been discussed in isolation from each other, and I aim to rectify this deficiency by providing a unified presentation of the Diachronic Tragedy Argument. The next two chapters will then be dedicated to developing a time-slice-centric account of practical rationality and using it to rebut the Diachronic Tragedy Argument.

For that task, it will be useful to introduce some technical terms. Let us say that a Tragic Sequence is a sequence of actions S1 such that at all times during the performance of the sequence you prefer performing some other possible sequence of actions S2 over performing S1. And a Tragic Attitude is one such that if you have it, there are possible cases where you will prefer performing each member of a Tragic Sequence at the time it is available, even though you prefer not to perform the sequence as a whole: that is, you will prefer A1 to B1 at time t1, and you will prefer A2 to B2 at time t2, even though at both t1 and t2 you prefer the sequence of actions < B1, B2 > to the sequence < A1, A2 >. Tragic Attitudes, then, license you to perform each member of some Tragic Sequence, and hence threaten you with Diachronic Tragedy.

A terminological note: The arguments I describe go by different names in the literature. Some have been called Diachronic Dutch Book Arguments (or Dutch Strategy Arguments), and others have been called Money Pump Arguments. I prefer the term Diachronic Tragedy Argument, since it highlights the common structure of all these different arguments, not all of which involve betting and not all of which involve money.

Finally, let me briefly note that in order to appeal to considerations of Diachronic Tragedy to argue that a given sort of attitude is irrational, it is important to show not only that having that attitude leaves you vulnerable to Diachronic Tragedy, but also that lacking that sort of attitude makes you invulnerable to Diachronic Tragedy (provided you don't have one of the other Tragic Attitudes as well). After all, if you would be vulnerable to Diachronic Tragedy whether or not you had attitude X, the fact that having attitude X leaves you vulnerable to Diachronic Tragedy would not be an argument against the rationality of attitude X. I will not provide these "Converse Diachronic Tragedy Arguments" (showing that lacking the relevant attitude leaves you invulnerable to Diachronic Tragedy) on a case-by-case basis. Instead, I will simply point out now that, owing to the work of Lehman (), Kemeny (), Strotz (–), van Fraassen (), and Lewis (), we know that what we might call an "orthodox Bayesian agent" who has precise credences that obey the axioms of the probability calculus, maximizes expected utility, obeys Conditionalization and Reflection, maintains the same fundamental preferences at all times (thus discounting exponentially if at all), and faces only finitely many decision points, is invulnerable to Diachronic Tragedy. Therefore, for each of the Tragic Attitudes X to be considered, we know that if you are an otherwise orthodox Bayesian agent, then having attitude X will leave you vulnerable to Diachronic Tragedy (the positive part of a Diachronic Tragedy Argument) while lacking attitude X will leave you immune to Diachronic Tragedy (the negative part of a Diachronic Tragedy Argument).

. Conditionalization and Reflection

Lewis () gives an argument (first reported in Teller ()) showing that if you violate Conditionalization, you will be predictably exploitable. A Dutch Book is a set of bets that together guarantee you a loss. Lewis shows that if you violate Conditionalization, then your credences will license you to accept bets at different times which together constitute a Dutch Book. He argues that violating this principle is irrational, since doing so will sometimes lead you to perform predictably disadvantageous sequences of actions (accepting certain bets in sequence).

To see how violating Conditionalization leaves you vulnerable to a Dutch Book, suppose you violate Conditionalization in the following specific way (a more general presentation will follow shortly). Your current credence in E is . and your current conditional credence in H given E is ., but the credence in H you will have if you learn E is .. Now, at t1, prior to your learning whether E, a bookie offers to pay you one cent if you take Bets 1 and 2:

Bet 1: pays $ if H ∧ E, $- if ¬H ∧ E, and $ if ¬E
Bet 2: pays $ if E and $- if ¬E

Right now, your credences in E and in H given E commit you to regarding Bets 1 and 2 themselves as perfectly fair (each has an expected value of $0). Therefore, accepting the deal (taking one cent and Bets 1 and 2) has positive expected value (and hence higher expected value than rejecting the deal), and so you are rationally required to accept it. At t2 you will learn whether E. If you then learn that E, the bookie offers to pay you one cent if you take Bet 3:

Bet 3: pays $- if H and $ if ¬H.

Your later credence in H (.) will commit you to regarding Bet 3 as perfectly fair (having an expected value of $0). Therefore, accepting the deal (taking one cent and Bet 3) has positive expected value, and you are rationally required to accept it. If E is false, and so you aren't offered Bet 3, Bets 1 and 2 together guarantee you a loss of $, no matter whether H is true. And if E is true, Bets 1, 2, and 3 together guarantee you a loss of $. Either way, you will have accepted bets which guarantee you a loss of $, having gained only one or two cents (depending on whether the second deal is offered) in return. So no matter whether E is true, your credences, which violate Conditionalization, will require you to accept deals which together guarantee you a loss. Predictably, it would be better to decline all of the deals than to accept them and guarantee yourself a loss of nearly $.

Van Fraassen gives a Diachronic Dutch Book Argument for Reflection which is exactly parallel to Lewis's argument for Conditionalization. Just replace E above with P2(H) = ., the proposition that at t2 you will have credence . in H. Then, the Dutch Book is exactly the same.

This was one example of a Diachronic Dutch Book for a violator of Conditionalization. Here is the more general structure.1 In what follows, P1 is the credence function you have at t1, prior to learning whether E, and PE is the credence function you have later on if and only if you learn that E is true. And we assume that if the expected value of a bet (relative to your credences) is $0, then you are willing to accept either side of the bet (that is, you are willing to "buy" or "sell" the bet).

Let P1(H | E) = n
Let PE(H) = r
Let P1(E) = d, for 0 < d < 1

Then, Bets 1 and 2 are made at t1, when you have credence function P1, and Bet 3 is made later if and only if you learn E, at which point you have credence function PE.

Bet 1:
  H ∧ E: $(1 − n)
  ¬H ∧ E: $(−n)
  ¬E: $0

Bet 2:
  E: $(d − 1)(r − n)
  ¬E: $d(r − n)

Bet 3:
  H: $(r − 1)
  ¬H: $r

When Bets 1 and 2 are offered, each has an expected value of $0. And, if you learn E, Bet 3 will also have an expected value of $0.

EV(Bet 1) = P1(H | E)(1 − n) + P1(¬H | E)(−n) = n(1 − n) + (1 − n)(−n) = 0
EV(Bet 2) = P1(E)(d − 1)(r − n) + P1(¬E)(d)(r − n) = d(d − 1)(r − n) + (1 − d)(d)(r − n) = 0
EV(Bet 3) = PE(H)(r − 1) + PE(¬H)(r) = (r)(r − 1) + (1 − r)(r) = 0

No matter whether you wind up learning E, the buyer's net winnings from Bets 1, 2, and 3 are $d(r − n). If ¬E is true, and so Bet 3 is never offered, the buyer's total winnings are $d(r − n). If you learn E, then the buyer wins a combined $(r − n) on Bets 1 and 3 and wins $(d − 1)(r − n) on Bet 2, for a net total once again of $d(r − n). So either way, the buyer wins $d(r − n).

1 Here I have followed the helpful presentation given in Briggs ().

But if you violate Conditionalization, either r > n or r < n. If r > n, the buyer's net winnings of $d(r − n) are positive, and so the book favors the buyer, while if r < n, they are negative, and so it favors the seller. But either way, the bookie can decide which side of each bet to offer you, and since the expected value of each bet is $0 at the time it is offered, you will be willing to accept the bookie's offer. Therefore, the bookie can guarantee that he, rather than you, will be the party coming out ahead in the end. And you, sadly, will come out behind, with a sure loss.

The above can also be turned into a formula for constructing Diachronic Dutch Books for violators of Reflection. Simply replace E above with the proposition P2(H) = r, that is, the proposition that at t2 you will have credence r in H. Then, the same bets can be offered to you if you are a violator of Reflection to guarantee you a loss, no matter what your future credences in fact turn out to be.

Therefore, credences that violate Conditionalization or Reflection license you to perform each member of a Tragic Sequence (the sequence of actions consisting of accepting all the bets you are offered) and are therefore an example of Tragic Attitudes. Note, importantly, that this means there is a Diachronic Tragedy Argument not just for Modified Reflection, but for the original, counterexample-ridden version of Reflection.
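
The arithmetic above is easy to check numerically. Here is a minimal sketch; the particular values of n, r, and d are assumptions for illustration, and any choice with r ≠ n and 0 < d < 1 yields the same pattern:

    # Buyer's total winnings from Bets 1-3 in each state of the world.
    n, r, d = 0.3, 0.6, 0.5   # P1(H|E), PE(H), P1(E); here r > n

    def buyer_winnings(H, E):
        bet1 = ((1 - n) if H else -n) if E else 0.0
        bet2 = (d - 1) * (r - n) if E else d * (r - n)
        bet3 = ((r - 1) if H else r) if E else 0.0  # Bet 3 is offered only if E
        return bet1 + bet2 + bet3

    for H in (True, False):
        for E in (True, False):
            assert abs(buyer_winnings(H, E) - d * (r - n)) < 1e-12
    # The buyer nets d*(r - n) = 0.15 in every state; since each bet has
    # expected value $0 when offered, you will take whichever side the
    # bookie leaves you, and so come out behind come what may.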

. Utility Conditionalization

Utility Conditionalization was the diachronic principle for preferences stating that your ultimate preferences—preferences over maximally specific possibilities—must stay constant, with your other, non-ultimate preferences changing only in response to changes in your evidence. While Utility Conditionalization is problematic, it can be supported by a Diachronic Tragedy Argument. Violating Utility Conditionalization will sometimes permit you to perform each member of some Tragic Sequence.

Suppose you are the Russian Nobleman imagined by Parfit (). You are a -year-old fervent leftist. But you know that by middle age, you will become an equally fervent rightist. Consider:

The Russian Nobleman
You will receive an inheritance of , rubles at age sixty. Right now, you have the option (call it Donate Early) of signing a binding contract which will require , rubles to be donated to left-wing political causes. No matter whether you take this option, you will at age sixty have the option (call it Donate Late) of donating , rubles to right-wing political causes. (No greater donation is permitted under Tsarist campaign finance laws.)

Right now, you most prefer donating , rubles to left-wing causes and nothing to right-wing causes. But you also prefer donating nothing to either side over donating , rubles to each side, as the effects of those donations would cancel each other out.

Right now, regardless of whether your later self will Donate Late, you prefer to Donate Early.2 But at age sixty, no matter what you do now, you will prefer to Donate Late. But the sequence of actions < Donate Early, Donate Late > is a Tragic Sequence. It is predictably disadvantageous in the sense that at all times, you disprefer it to the sequence < Don't Donate Early, Don't Donate Late >. It is better to save your money than to give it all away in two donations that cancel each other out. So, just as violating Conditionalization leaves you vulnerable to predictable exploitation, so too does violating Utility Conditionalization.
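
The structure of the case can be made explicit with a small sketch. The utility numbers below are invented; all that matters is that they respect the preferences stipulated in The Russian Nobleman:

    # A sequence is (donate_early, donate_late); the two time-slices rank the
    # four sequences differently. The numbers are illustrative only.
    young = {(True, False): 10, (False, False): 5, (True, True): 2, (False, True): 0}
    old   = {(False, True): 10, (False, False): 5, (True, True): 2, (True, False): 0}

    for late in (True, False):      # whatever your later self does...
        assert young[(True, late)] > young[(False, late)]   # ...Donate Early now
    for early in (True, False):     # whatever your earlier self did...
        assert old[(early, True)] > old[(early, False)]     # ...Donate Late at sixty

    # Yet both time-slices rank the no-donation sequence above the Tragic
    # Sequence <Donate Early, Donate Late>:
    assert young[(False, False)] > young[(True, True)]
    assert old[(False, False)] > old[(True, True)]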

.. Time-Bias

There is a small lacuna in the preceding exploitability argument for Utility Conditionalization. In particular, the argument applies only to preferences which you can act on. You can act on your political preferences by donating money, for instance, and so it is possible to show that shifting political preferences can lead to Diachronic Tragedy. But there may also be preferences that you cannot act on, and if so there will be no exploitability argument in support of the claim that shifts in those ultimate preferences are irrational.

You might think there is one case in particular where (i) it is rationally permissible, or even rationally required, to shift your preferences over time, and (ii) it is impossible to act on the preferences which are subject to these shifts. This is the case of time-bias, in particular bias toward the future. Recall the case from Chapter  showing that bias toward the future will cause your ultimate preferences to shift over time.3

The Early Course
You will have  hours of painful surgery on Tuesday and  hour of painful surgery on Thursday.

The Late Course
You will have no surgery on Tuesday and  hours of painful surgery on Thursday.

2 If your later self does not Donate Late, you would rather give , rubles to left-wing causes, since this is your most preferred outcome. And if your later self does Donate Late, you would rather cancel the effect of that donation by giving , rubles to left-wing causes than let that later right-wing donation go unchecked.
3 This example comes from Dougherty (), whose exploitability argument against bias toward the future will be discussed shortly.

On Monday, you prefer the Late Course, for it involves the lesser amount of future pain. But on Wednesday, you prefer the Early Course, for relative to Wednesday, the Early Course involves less future pain than the Late Course.

It is tempting to think that, unlike the political preferences considered in The Russian Nobleman, you cannot act on your bias toward the future. It is a practically inert preference. After all, you cannot affect whether your pains are in the past rather than the future. (Here there is a contrast between bias toward the future and bias toward the near, for while you cannot affect whether your pains are in the past rather than the future, you often can affect whether they come sooner as opposed to later.)

Dougherty () shows that this tempting thought is wrong. Even though you cannot affect whether some pain of yours occurs in the past or the future, being biased toward the future will in some cases make a difference to how you act. And it will do so in a manner that leaves you open to predictable exploitation. More exactly, Dougherty shows that if you are biased toward the future and risk averse, then you will be licensed by your preferences to perform each member of a Tragic Sequence.

Say that you are risk averse if you prefer a gamble with a smaller difference between the best and worst possible outcomes to a gamble with a larger difference between its best and worst possible outcomes, even if the expected value of the first gamble is somewhat lower than the expected value of the second. So, for instance, you are risk averse if you prefer a bet on the toss of a fair coin which pays $ if heads and $ if tails to a bet which pays $ if heads and $ if tails, even though the expected value of the former bet is $. while the expected value of the latter bet is $.4

4 There is some controversy about how to model different types of risk aversion. Standardly, risk aversion is taken to amount to your having decreasing marginal utility for goods. In the case of money, each additional dollar you have is worth less to you than the one before. A dollar is worth less to Bill Gates than to a philosophy graduate student. So, $ is worth more than half as much to you as $. If so, then the bet which pays $ if heads and $ if tails will have higher expected utility than the bet paying $ if heads and $ if tails, even though it has lower expected (monetary) value. Buchak () argues that not all rational risk aversion can be modeled as decreasing marginal utility for goods. Her argument takes aim at the sure-thing principle, a close relative of the Substitution Axiom, which I discuss shortly since it, like so many other proposed principles of rationality, is supported by a Diachronic Tragedy Argument. For purposes of Dougherty's argument, your risk aversion should not be modeled using Buchak's risk-sensitivity model, since in that case any irrationality could be blamed not on time-bias, but on the fact that you violated the sure-thing principle. But outside the context of Dougherty's argument against time-bias, I do not endorse any particular view about how to model actual agents' risk aversion or which types of risk aversion are rationally permissible.

Now, suppose you are both time biased and risk averse. Consider:

Uncertain Pain
A coin was flipped to determine which of two surgery regimes you will undergo. If it landed heads, you will undergo the Early Course— hours of painful surgery on Tuesday and  hour of painful surgery on Thursday. If it landed tails, you will undergo the Late Course—no surgery on Tuesday and  hours of painful surgery on Thursday. Either way, you will be given amnesia on Wednesday, so that you won't remember whether you had surgery on Tuesday (though you will remember everything else). There is a clock next to your bed, so you always know what day it is.

On Monday and Wednesday, you will be offered the pills Help Early and Help Late, respectively. Each reduces the difference between the highest possible amount of future pain and the lowest possible amount of future pain:

Help Early
If you are in the Early Course, then taking Help Early will reduce the time of your Thursday surgery by  min. If you are in the Late Course, then taking Help Early will increase the time of your Thursday surgery by  min.

Help Late
If you are in the Early Course, then taking Help Late will increase the time of your Thursday surgery by  min. If you are in the Late Course, then taking Help Late will decrease the time of your Thursday surgery by  min.

On Monday, you prefer taking Help Early to refusing it. Why? Because it reduces the difference between the highest and lowest amounts of possible future pain (by reducing the future pain in the Early Course scenario involving the most future pain and increasing the future pain in the Late Course scenario involving the least future pain) at a cost of increasing your expected future pain by only one minute. This is true whether or not you take Help Late. On Wednesday, you prefer taking Help Late. Why? Because it reduces the difference between the highest and lowest amounts of possible future pain without changing your expected future pain at all. This is true whether or not you took Help Early.

But taking both Help Early and Help Late just guarantees you one more minute of pain on Thursday than if you had refused both pills. At all times, you prefer performing the sequence of actions consisting of refusing Help Early and then refusing Help Late over performing the sequence of actions consisting of taking both pills. Thus, if you are biased toward the future and risk averse, then you will be rationally required to perform each member of a sequence of actions, even though at all times you disprefer that sequence of actions to an alternative sequence of actions available to you. The combination of bias toward the future and risk aversion is a Tragic Attitude, licensing you to perform each member of a Tragic Sequence.

Dougherty argues on this basis that it is irrational to be both risk averse and biased toward the future. But the sort of risk aversion in play is clearly rational.

And it would be odd, to say the least, if risk aversion and bias toward the future were each rational, but combining them was irrational. Dougherty concludes that time-bias must go: it is irrational to be biased toward the future.

Summing up, Utility Conditionalization is the natural analog of Conditionalization for the case of preferences. And while it is objectionable, it can be supported by a Diachronic Tragedy Argument, just like Conditionalization and Reflection. One might think, however, that there are some inert preferences which cannot be acted on and which therefore escape the argument for Utility Conditionalization. In particular, it is tempting to think that bias toward the future, which leads to shifts in your ultimate preferences, is rational and is not subject to the argument that preference shifts leave you open to exploitation. But Dougherty shows that this is wrong. Bias toward the future, like other preference shifts, can in fact leave you hanging in the breeze.
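
Since the exact surgery times and pill effects are not recoverable from the text, the following sketch uses invented numbers that satisfy the constraints of Uncertain Pain: each pill narrows the spread between the best and worst amounts of future pain, Help Early costs one expected minute, Help Late costs nothing in expectation, and taking both guarantees exactly one extra minute:

    # Minutes of surgery; all figures below are assumptions, not from the text.
    base = {'early': {'tue': 240, 'thu': 60}, 'late': {'tue': 0, 'thu': 120}}
    help_early = {'early': -10, 'late': +12}   # change to Thursday surgery
    help_late  = {'early': +11, 'late': -11}

    def thursday(course, took_early, took_late):
        return (base[course]['thu']
                + (help_early[course] if took_early else 0)
                + (help_late[course] if took_late else 0))

    def monday_view(took_early):   # future pain as of Monday (Tue + Thu)
        pains = [base[c]['tue'] + thursday(c, took_early, False)
                 for c in ('early', 'late')]
        return max(pains) - min(pains), sum(pains) / 2  # (spread, expectation)

    print(monday_view(False))  # (180, 210.0)
    print(monday_view(True))   # (158, 211.0): smaller spread, one more minute
    # On Wednesday, Help Late likewise narrows the spread of remaining pain at
    # no expected cost. But both pills together are pure loss:
    for c in ('early', 'late'):
        assert thursday(c, True, True) == thursday(c, False, False) + 1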

. Preference Reflection

Finally, Preference Reflection can be supported by a Diachronic Tragedy Argument (though admittedly it seems weaker than the Diachronic Tragedy Arguments for Conditionalization, Reflection, and Utility Conditionalization, in part because I only know how to give suggestive examples of how particular ways of violating Preference Reflection might leave you vulnerable to Diachronic Tragedy, as opposed to a proof that any violation of Preference Reflection will leave you so vulnerable).

We have seen that if your preferences in fact shift, then you are subject to exploitation. But you violate Preference Reflection if you believe that your preferences will shift, even if in fact they do not shift. So a Diachronic Tragedy Argument for Preference Reflection must show that if you believe your preferences will shift, then you are exploitable, even if your belief was false and your preferences in fact do not shift.

Suppose that at present you want to quit smoking, but you believe that within a couple hours you will want a cigarette. You have the option right now of paying someone $ to prevent you from buying any cigarettes. This in effect closes off your future options, so that the only thing you can do later on is not smoke. But whether you wind up wanting a cigarette (as you now believe you will) or not, you wind up with a suboptimal outcome, since no matter what, you always prefer not smoking to not smoking plus being $ poorer. In this case, you wind up worse off because you are willing to pay to narrow down the options you will have in the future.

If you violate Preference Reflection, you might also wind up worse off by being willing to pay to keep your options open.

Suppose that at t1 you prefer going to Argentina next year over going to Brazil, but you think that there is a good chance that at t2, you will prefer Brazil over Argentina. You now have to decide which ticket to buy. You can buy a non-refundable ticket to either Argentina or Brazil, each of which costs $, round trip. Or you can buy a partially refundable ticket to either Argentina or Brazil. The partially refundable ticket costs $,, but if you request the refund, you only get back $,.

Because you think you might well change your mind in the future and prefer Brazil over Argentina, you buy the refundable ticket to Argentina rather than the non-refundable one. Then, if you do not change your mind, you wind up with the outcome of a trip to Argentina minus $,. But you disprefer this outcome to the outcome you would have achieved had you bought the non-refundable ticket, that is, the outcome of a trip to Argentina minus $, (we may assume that you do not care about any anxiety you might have felt had you bought the non-refundable ticket). And if you do change your mind, then you wind up getting the partial refund of $, on your $, ticket (net cost to you: $) and buying the $, ticket to Brazil. So you wind up with the outcome of a trip to Brazil minus $,, which you disprefer to the alternative outcome you could have achieved had you initially bought the non-refundable Brazil ticket, namely the trip to Brazil minus $,.

So, if you violate Preference Reflection, you are at risk of predictably winding up worse off than you could have been, since you will have reason to pay a fee to either close off or keep open future options. Preferences that violate Preference Reflection are therefore Tragic Attitudes.
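
Since the ticket prices are not recoverable from the text, here is the same bookkeeping with assumed figures: $1,000 for a non-refundable ticket, $1,500 for the partially refundable one, and a $1,000 refund:

    NONREFUNDABLE, REFUNDABLE, REFUND = 1000, 1500, 1000  # assumed prices

    def hedger_cost(change_mind):
        if not change_mind:
            return REFUNDABLE                         # keep the Argentina ticket
        return (REFUNDABLE - REFUND) + NONREFUNDABLE  # refund it, buy Brazil

    # Benchmark: simply buying the non-refundable ticket to wherever you in
    # fact end up going costs $1,000. Hedging costs $1,500 either way:
    for change_mind in (True, False):
        assert hedger_cost(change_mind) == NONREFUNDABLE + 500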

. Other Cases of Diachronic Tragedy

My main concern is with the cases of Diachronic Tragedy involving the diachronic and reflection principles incompatible with Time-Slice Rationality. However, there are a number of other cases of Diachronic Tragedy that have been discussed in the literature. A comprehensive look at these other cases of Diachronic Tragedy will help to illuminate their common structure.

.. Intransitive Preferences

Suppose you have intransitive preferences. You prefer Apple Pie to Blueberry Pie, Blueberry Pie to Cherry Pie, and Cherry Pie to Apple Pie. Consider:

The Money Pump
You start off with an Apple Pie, a Blueberry Pie, and a Cherry Pie. You will be offered three deals in succession no matter what.

Deal 1: receive a Blueberry Pie in exchange for your Cherry Pie and ten cents. Deal 2: receive an Apple Pie in exchange for a Blueberry Pie and ten cents. Deal 3: receive a Cherry Pie in exchange for an Apple Pie and ten cents.5

If you act on your preferences at each time, you will be turned into a money pump. You will accept the first deal, giving up ten cents and a Cherry Pie in exchange for a Blueberry Pie. Why? Because regardless of whether you will go on to accept the second and third deals, you would prefer to move up from Cherry Pie to Blueberry Pie, even at a cost of ten cents. For perfectly analogous reasons, you will accept the second and third deals as well. But having accepted all three deals, you wind up with the same assortment of pies that you started with despite your outlay of thirty cents. The sequence < Accept, Accept, Accept > is a Tragic Sequence, since throughout the whole process, it is dispreferred to the sequence of declining all three deals. So intransitive preferences are an example of Tragic Attitudes.
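
A short simulation makes the bookkeeping vivid; the ten-cent fee per deal follows from the text's tally of thirty cents over the three deals:

    from collections import Counter

    pies, cents = Counter(apple=1, blueberry=1, cherry=1), 0
    deals = [('cherry', 'blueberry'),   # Deal 1: cherry + 10c for blueberry
             ('blueberry', 'apple'),    # Deal 2: blueberry + 10c for apple
             ('apple', 'cherry')]       # Deal 3: apple + 10c for cherry

    for give, receive in deals:         # at each step you "trade up"
        pies[give] -= 1
        pies[receive] += 1
        cents -= 10

    assert pies == Counter(apple=1, blueberry=1, cherry=1)  # same pies...
    assert cents == -30                                     # ...30 cents poorer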

.. Imprecise Preferences

Suppose your preferences are imprecise, or "mushy." You have no preference between a scuba trip to Australia (A) and a safari trip to Botswana (B), but you also do not regard them as equally desirable. For adding $ to one of them wouldn't make you then prefer it to the other. You don't prefer A+ to B or B+ to A (even though you prefer A+ to A and B+ to B). In the jargon, your preferences are negatively intransitive. Imprecise preferences can lead you to misfortune. Consider:

Scuba or Safari
There are two boxes. You see that Box A contains a ticket for the scuba trip A, while Box B contains a ticket for the safari trip B. You know in advance that at t1 you will get to decide whether $ is placed in Box A or Box B, and then at t2 you will get to take one of the boxes.

You have no preference about which box to put the $ in at t1 (since the situation is symmetric). Suppose you put the $ in Box A. Then at t2 your preferences license you to take either Box A or Box B. In particular, they license you to take Box B (since you do not prefer scuba plus $ over the safari).

5 The original money pump argument is due to Davidson et al. (). I have presented an improved version of the standard Money Pump case due to Dougherty (). In the standard case, you start off with Cherry, are then given the opportunity to pay to switch to Blueberry and then again to Apple, and then again back to Cherry. The standard case has the disadvantage of being such that you can avoid ruin by simply refusing the first deal, since the later deal (e.g. paying to switch from Blueberry to Apple) cannot be offered unless you accept the first deal (paying to switch from Cherry to Blueberry). Dougherty's case blocks this escape route, since each deal can be offered no matter whether you accept or decline the deals that were offered before.

But the sequence < put the $ in Box A, take Box B > is a Tragic Sequence, since at all times you prefer the outcome (safari plus $) that would have resulted from putting the $ in Box B and taking Box B. (Similarly, mutatis mutandis, for putting the $ in Box B and taking Box A.)
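
One simple way to model the mushy preferences at work here (an assumption of this sketch, not a formalism from the text) is to count one outcome as preferred to another only when it is the same trip with strictly more money attached; the $100 figure is likewise invented:

    def prefers(x, y):
        """(trip, dollars) pairs; distinct trips are incomparable."""
        return x[0] == y[0] and x[1] > y[1]

    A, B = ('scuba', 0), ('safari', 0)
    A_plus, B_plus = ('scuba', 100), ('safari', 100)

    assert prefers(A_plus, A) and prefers(B_plus, B)          # money helps
    assert not prefers(A_plus, B) and not prefers(B_plus, A)  # but no tiebreak

    # Having put the money in Box A, taking Box B is licensed (B is not
    # dispreferred to A_plus), yet the foregone sequence would have yielded
    # B_plus, which you strictly prefer to the B you actually got:
    assert prefers(B_plus, B)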

.. Imprecise Credences

Suppose that your credences are imprecise, or "mushy." You do not regard A as more likely than B, nor do you regard B as more likely than A, nor do you regard them as equally likely. The relation "you regard . . . as more likely than . . ." is negatively intransitive. Elga () argues that imprecise credences license you to perform a Tragic Sequence and hence qualify as Tragic Attitudes. Elga's argument relies on a specific conception of how to model imprecise credences, so I will present a more general version of his argument which does not rely on a specific formal model of such credences.

Consider the proposition that Hillary Clinton will win the presidency in 2016 (C) and the proposition that Joe Biden will win it (B). Suppose you do not regard C as more likely than B, or vice versa. But you also do not regard them as equally likely. For you think it quite unlikely that Lyndon LaRouche will win the presidency (L), but you do not regard C ∨ L as more likely than B, and you do not regard B ∨ L as more likely than C (which would be required if you have the same credence in C as in B and assigned a positive credence to L). Now consider:

Boxes with Bets
You enter a room with two boxes. Box 1 contains a slip of paper labeled "C." Box 2 contains a slip of paper labeled "B." You are given a slip of paper labeled "L." At t1 you must decide whether to put the slip of paper labeled "L" into Box 1 or Box 2. And at t2 you must decide which box to take. You receive $ if the box you take contains a slip of paper labeled with a letter denoting a true proposition. Else you receive nothing.

Your credences license you to put the slip labeled "L" into either box at t1. Suppose you put it in Box 1. Then at t2 your degrees of belief license you to take either Box 1 or Box 2. Why? Because you do not regard the disjunction of C and L as more likely than B, or vice versa. Suppose you then take Box 2. You will thereby have performed a Tragic Sequence, since at all times you prefer putting the slip labeled "L" into Box 2 and taking Box 2 to putting the slip into Box 1 and taking Box 2. (Similarly, mutatis mutandis, for putting the slip into Box 2 and taking Box 1.) Imprecise credences thus sometimes license you to perform a Tragic Sequence, and hence qualify as Tragic Attitudes.
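
For concreteness, here is how the argument looks on one standard formal model, a representor, or set of precise credence functions, of the kind Elga discusses; the particular numbers are invented:

    representor = [  # two admissible precise credence functions over C, B, L
        {'C': 0.40, 'B': 0.30, 'L': 0.05},
        {'C': 0.30, 'B': 0.40, 'L': 0.05},
    ]

    def more_likely(xs, ys):
        """X is regarded as more likely than Y iff every function agrees."""
        return all(sum(p[e] for e in xs) > sum(p[e] for e in ys)
                   for p in representor)

    assert not more_likely(['C'], ['B']) and not more_likely(['B'], ['C'])
    assert not more_likely(['C', 'L'], ['B'])   # C-or-L not more likely than B
    assert not more_likely(['B', 'L'], ['C'])   # B-or-L not more likely than C
    # Yet the box holding the slips {B, L} dominates the box holding {B} alone:
    assert more_likely(['B', 'L'], ['B'])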

.. The Substitution Axiom

The Substitution Axiom, or close variants of it, plays a key role in many versions of expected utility theory.6 It requires you to prefer A to B if and only if you prefer a lottery which has A as a possible prize to an identical lottery which has B as a possible prize. More formally, the Substitution Axiom requires that for all A, B, and C:

A ≥ B if and only if {A, p; C, 1 − p} ≥ {B, p; C, 1 − p}

where ≥ denotes the at-least-as-preferred relation and {A, p; C, 1 − p} is a lottery which yields A with non-zero probability p and C with probability 1 − p. As Bermudez () makes clear, if you violate the Substitution Axiom, you will prefer each member of a Tragic Sequence, and hence preferences which violate the Substitution Axiom are Tragic Attitudes.7

Actually, violating the Substitution Axiom will license you to perform a Tragic Sequence only if you violate it in a strong way, preferring B to A while preferring {A, p; C, 1 − p} to {B, p; C, 1 − p}. You need not perform a Tragic Sequence if you prefer B to A while being indifferent between the two lotteries. I will restrict my attention to cases where you violate the principle in this strong way.

So suppose that you violate the Substitution Axiom by preferring B to A while preferring a lottery with A as a possible prize to an identical lottery with B as a possible prize in place of A. That is, B > A and {A, p; C, 1 − p} > {B, p; C, 1 − p}. Assume that these preferences are strong enough that there is some amount of money, even one cent, such that you would be willing to pay to switch from the dispreferred thing to the preferred one.

Now suppose that you are given a ticket for the lottery {B, p; C, 1 − p}. Consider the following two deals. At t1, you will be given the option of paying one cent to exchange the lottery ticket {B, p; C, 1 − p} for the lottery ticket {A, p; C, 1 − p}. Because you prefer the latter lottery to the former, you will be willing to pay one cent to make the trade. Then, if the probability 1 − p event occurs, you wind up with prize C, less the one cent you paid to make the trade. If the probability p event occurs, then you win prize A, but then at t2 you will be given the option of paying one cent to trade A for B. Because you prefer B to A, you will be willing to make that exchange.

6 Resnik's () axiomatization employs the Substitution Axiom as stated here. Relatives include von Neumann and Morgenstern's () Independence Axiom and Savage's () Sure-Thing Principle. These related principles can likely also be supported by a Diachronic Tragedy Argument along lines similar to that pursued here for the Substitution Axiom.
7 Bermudez attributes an embryonic form of this argument to Raiffa (, ).

No matter whether the probability p event or the probability 1 − p event occurs, you wind up worse off than you could have been. Suppose that the probability 1 − p event occurs. You would have been better off refusing to pay one cent to trade the lottery {B, p; C, 1 − p} for the lottery {A, p; C, 1 − p}, since then you would have won prize C rather than prize C minus the one cent. Alternatively, if the probability p event occurs, then you would have been better off refusing both trades rather than paying for both. For paying for both deals leaves you with B less the two cents you paid for the trades, while refusing both deals leaves you with B at no further cost. So the sequence of actions consisting of accepting all the deals offered is a Tragic Sequence, since you always prefer the sequence of actions consisting of declining all the deals offered, and so preferences which violate the Substitution Axiom are Tragic Attitudes.
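
The two deals can be traced mechanically. In this sketch the one-cent fees come from the text; the prizes are tracked symbolically:

    def outcome(accept_deals, p_event_occurs):
        holding, cents_paid = 'B-lottery', 0
        if accept_deals:                   # t1: pay one cent to swap lotteries
            holding, cents_paid = 'A-lottery', 1
        if not p_event_occurs:
            return 'C', cents_paid         # the 1-p event pays prize C
        prize = holding[0]                 # the p event pays the lottery prize
        if accept_deals and prize == 'A':  # t2: pay one cent to trade A for B
            prize, cents_paid = 'B', 2
        return prize, cents_paid

    assert outcome(False, False) == ('C', 0) and outcome(True, False) == ('C', 1)
    assert outcome(False, True) == ('B', 0) and outcome(True, True) == ('B', 2)
    # Accepting both deals leaves you, in every state, with the very prize you
    # could have had for free, minus the cents paid along the way.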

.. Infinite Decisions

One option A dominates another option B if and only if option A yields a better outcome than B in every state of the world. It is widely accepted that you are rationally permitted, and even required, to take dominant options. But Arntzenius et al. () argue that in some infinite cases, taking dominant options will lead to trouble:

Satan's Apple
Satan has cut an apple into infinitely many slices. At each of various times ti, you are asked whether you would like to eat slice #i.8 If you eat infinitely many slices, you go to Hell, while if you eat only finitely many slices, you go to Heaven. Your first priority is to go to Heaven rather than Hell. Your second priority is to eat as much apple as possible.

For each slice i, eating that slice dominates not eating it. For eating it will not make the difference between eating only finitely many slices and eating infinitely many slices, and so it will not affect whether you go to Heaven or to Hell. But if you take the dominant option for each slice—eating it—then you will wind up eating infinitely many slices and be condemned to an eternity in Hell! The sequence of eating every slice is a Tragic Sequence, since it yields a worse outcome than myriad other sequences of actions (e.g. that of refusing every slice). So the preference for dominant options is a Tragic Attitude.9

8 The ti are arranged so that the choosing constitutes a supertask—t1 is  sec from now, t2 is  sec from now, t3 is  sec from now, and so on.
9 It is an interesting feature of Satan's Apple that each possible sequence of actions is worse than some other sequence of actions. Therefore, even if you had the ability to decide all at once which slices of apple to eat, it is unclear what you ought to do, since whatever sequence you choose, there will be some better one that you could also have chosen. Perhaps there is some threshold number of slices such that you are permitted to choose any sequence in which you eat at least that many slices (but not infinitely many). But any such threshold will inevitably be arbitrary. Perhaps in this case, we must abandon the binary ought/ought not distinction in favor of more graded evaluations, in which we can only speak of an action's being more rational than another.

. Common Structure

In all of these myriad cases, we can represent your decision situation with this tree:10

[Decision tree: at t1 you go either UP or DOWN; whichever you choose, at t2 you again go either UP or DOWN, yielding the four possible sequences < UP, UP >, < UP, DOWN >, < DOWN, UP >, and < DOWN, DOWN >.]

At each of t and t , you can either go UP or DOWN. At the first node, you prefer going UP to going DOWN, no matter what you will later do at t . That is, you prefer the sequence over the sequence and prefer the sequence over the sequence . And at t , you prefer going UP to going DOWN, no matter what you did at t . That is, you prefer the sequence over and prefer the sequence over .11 But at both t and t , you prefer the sequence over the sequence . In this way, the sequence is a Tragic Sequence, but at each time, you prefer performing the member of this Tragic Sequence available at the time.12 Let’s go through this with a simple example. In The Russian Nobleman, Donate Early plays the role of going UP at t and Donate Late plays the role of going UP at t . Right now, as a young leftist, you prefer to Donate Early, no matter what you will do later at age sixty (that is, you prefer sequences in which you Donate Early to corresponding sequences in which you don’t). But at age sixty, you will prefer abandon the binary ought/ought not distinction in favor of more graded evaluations, in which we can only speak of an action’s being more rational than another. 10 Of course, the number of decision points required will differ in some cases, like Money Pump (three decision points) and Satan’s Apple (infinitely many decision points), but the basic structure is the same. 11 The cases of imprecise preferences and imprecise credences are slightly different. In that case, you do not actually have the preference at each of t and t for going UP; rather, you just lack the contrary preferences at t and t . I set this detail aside for the sake of clarity. 12 There is another slight complication for some of the instances of Diachronic Tragedy. In cases where you violate Conditionalization or Reflection, you do not face the same sequence of options no matter what. Instead, you are first offered two bets. But then, you are offered a third bet at a later time only if you wind up learning some proposition E. If you instead learn ¬E, then you are left with just the first two bets, which, given the falsity of E, guarantee you a loss. I ignore this complication in what follows for ease of explication.

But both now and at age sixty, you prefer the sequence < Don't Donate Early, Don't Donate Late > over the sequence < Donate Early, Donate Late >. Similarly, mutatis mutandis, for all the other cases in Section .

Crucially, I have been assuming in all these examples that you lack the ability to self-bind.13 That is, there is nothing you can do now which will causally determine which actions your future time-slices will perform. If you have the ability to self-bind and know it, then if you are rational, you will not perform a Tragic Sequence even if you have Tragic Preferences. This is because, by definition, you prefer performing some other sequence of actions over performing the Tragic Sequence. So, if you know that you can bind yourself to any sequence of actions, then you rationally ought to bind yourself to one of these preferred sequences of actions and thereby avoid performing the Tragic Sequence. Because Tragic Preferences only lead to trouble if you either lack the ability to self-bind or don't know that you have this ability, I will assume for present purposes that you are either unable to self-bind or are ignorant of whether you have this ability.14 I will briefly return to self-binding in Chapter .

Because the Diachronic Tragedy Argument draws conclusions about the rationality of attitudes from claims about the rationality of actions, evaluating it requires looking closely at the rationality of actions. That is the task of Chapter .

13 This term comes from Arntzenius et al. ().
14 One might take the Diachronic Tragedy to show not that Tragic Preferences are irrational, but rather that it is a requirement of rationality that one have the ability to self-bind (and know it); it is a requirement of rationality that one be able to make decisions that bind one's later time-slices to certain courses of action. This line of argument may be supported by the work of Hinchman (), Bratman (), and Korsgaard (), who emphasize the importance to rationality and agency of the capacity to form and execute intentions which guide one's later actions.

Options and Time-Slice Practical Rationality

. Introduction

I have so far discussed the rationality of attitudes, in particular doxastic (belief-like) and conative (desire-like) attitudes, but in order to have a comprehensive picture of rationality, we must develop an impersonal, time-slice-centric theory of the rationality of actions as well. An impersonal, time-slice-centric theory of practical rationality is needed not only for the sake of completeness, but also because there are interesting interactions between the rationality of attitudes and the rationality of actions. We have seen that the Diachronic Tragedy Argument draws conclusions about the rationality of attitudes from premises about the rationality of actions. An impersonal, time-slice-centric theory of practical rationality will provide us with the tools to rebut the Diachronic Tragedy Argument in the next chapter.

It is helpful to start theorizing about practical rationality by observing that determining what you ought to do can be broken down into two stages. The first stage is determining what your options are, and the second stage is ranking those options. The second stage has been widely explored by philosophers of all stripes, from ethicists to decision theorists to epistemologists to action theorists. And standardly, ranking options is done in a time-slice-centric way, so that how options are ranked depends only on your present beliefs and desires (or credences and utilities). Certainly this is the case with the most prominent theory of rational action—expected utility theory—as well as its main competitors. The first stage has received comparatively less attention,1 but it is no less important.

1 This is not to say that it has received no attention. Weirich (), Lewis (), Jeffrey (), Jackson and Pargetter (), and Portmore (ms), among others, have discussed the issue. Bratman (, –) has also discussed options, giving a planning theory of the admissibility of options, but as I read him, he is primarily addressing the second stage rather than the first; his theory takes the notion of an option as given and holds that certain options are ruled out as inadmissible by your prior plans.

Let me emphasize that I am using the term "option" as a technical term here. An agent's options, on this usage, are the things which are evaluated by the correct decision theory, whatever that may be, such that the option that gets ranked highest by our decision theory is the one that the agent rationally ought to perform. In this way, we can say that options are the things which in the first instance an agent ought rationally to do.

After presenting the problem of options, I will argue that the fact that what you ought to do depends on your uncertainty about the world ultimately forces us to conceive of your options as consisting of all and only the decisions you are presently able to make. These decisions are the things that should be ranked by our decision theory in order to yield answers about what you rationally ought to do. In this way, oughts apply in the first instance only to decisions, and not to the non-mental acts that we ordinarily evaluate for rational permissibility. That is, what you rationally ought or ought not do is to make certain decisions rather than to perform certain non-mental actions.

This may initially seem like a radical proposal quite at odds with our ordinary ways of speaking. We often say that an agent ought to go buy groceries, say, and not just that she ought to decide to go buy groceries. But in Section ., I show how such ordinary ways of speaking can be accommodated within a decision-based theory of practical rationality, by saying that while only ought claims applied to decisions can be non-derivatively true, it may nevertheless be true in a derivative sense that an agent ought to perform some non-mental act, if it is (non-derivatively) true that she ought to decide to perform that act, and her making that decision would cause her to perform that act. This proposal allows us to accommodate such ordinary ways of speaking without deviating from a time-slice-centric view of practical rationality on which an agent's options are all and only the decisions she is able to make.

. Rationality and the Subjective Ought

Recall the case with which I began this book. Your friend has a headache, and you have some pills that you justifiably believe to be pain relievers. But you're wrong. They are really poison. Ought you give the pills to your friend? While there may be a sense in which the answer is "no," there is also a sense in which the answer is "yes." Sometimes philosophers call the sense of ought in which you ought to give your friend the pills the subjective ought. What you subjectively ought to do depends not on how the world actually is, but on how you believe the world to be. Since you believe the pills to be pain relievers, you subjectively ought to hand them over, even though your belief is false.

ought to hand them over, even though your belief is false. The sense of ought in which you ought not give your friend the pills is often called the objective ought. What you objectively ought to do does not depend on your uncertainty about the world.2

The subjective ought plays a central role in the theory of rationality. Recall from Chapter  the three theoretical roles to be played by the notion of rationality. First, the subjective ought is supposed to give you guidance about how to proceed, given your uncertainty about the world. In the case of practical rationality (rationality of actions), it is to be action-guiding, in the sense of being sensitive to your uncertainty about the world. What you ought to do depends on the information you have about the world, rather than simply on how the world in fact is, unbeknownst to you.

Second, consider the evaluative role. Whether you are subject to rational criticism depends on what information you had available, rather than just on how the world in fact is (though of course it also depends on things like whether your act was free). You would be subject to criticism if you failed to hand your friend the pills because this was the action that looked best in light of the information you had at hand.

Third is the predictive/explanatory role. Knowing what you believe and desire, we can predict what you will do, against the background assumption that you are rational. And what we would predict you will do depends on what information we think you have. We would predict that, insofar as you are rational, you will give your friend the pills, since we know that this is what your evidence suggests you should do.3

2 Note that the distinction between objective and subjective oughts need not be interpreted as amounting to an ambiguity in the word "ought," and it also need not be thought of as an exhaustive catalogue of possible senses of ought. On a Kratzer-style semantics for modals (see Kratzer ()), the objective/subjective distinction can be understood as a matter of the relevant modal base (e.g. whether it includes facts of which the subject of the ought claim is ignorant) and ordering source (e.g. whether the closeness of worlds depends on the extent to which the subject maximizes value or instead the extent to which she maximizes expected value). And if a Kratzer-style semantics is right, one would expect there to be contexts giving rise to all sorts of modal bases and ordering sources, not just those which yield objective and subjective oughts.
3 Of course, the use of the subjective ought in predicting and explaining behavior works only against the background assumption that you are rational (and so the third role of the subjective ought is not entirely separate from the second). Sometimes, we have evidence that you fall short of ideal rationality in various respects, and in these cases we will not want to predict that you will do what you subjectively ought to do. For instance, we may have evidence from behavioral economics that you employ certain biases and heuristics that lead you to be irrational in certain systematic ways, and if such biases and heuristics are relevant in the case at hand, we will not want to predict that you will in fact do what you subjectively ought to do.


For these reasons, I think that the rational ought—the ought that is central in the theory of rationality—is what is sometimes called the subjective ought, rather than the so-called objective ought. But even if you disagree and think that the objective ought is the central ought of rationality, you can interpret the arguments to follow as applying only with respect to the rational subjective ought, which is the one targeted by decision theory.

. The Problem of Options

Giving a theory of the rational ought (or the rational subjective ought, if you disagree that that is the central rational ought) for actions requires giving a theory of what your options are. Because the rational ought is sensitive to your uncertainty about the world, this theory of options must also take into account this uncertainty.

The problem of specifying what your options are, in a way that is appropriately sensitive to your uncertainty about the world, can be made precise by considering expected utility theory, the dominant account of the rationality of actions. Expected utility theory provides a framework for assigning numbers to propositions, relative to a credence function P (representing your doxastic state) and a utility function U (representing your conative state). The expected utility of a proposition A is the sum of the utilities assigned to the possible outcomes Oi, weighted by the probability that A gives to Oi.4 More formally:

Expected Utility: EU(A) = Σi P(Oi; A)U(Oi)

Expected utility theory, then, provides a way of ranking propositions. Indeed, it is a time-slice-centric way of ranking propositions, since it ranks them in a way that depends only on your present credences and utilities. But expected utilities only provide a way of ranking propositions—nothing more. As Broome (, ) describes expected utility theory, “it is only a collection of axioms and a proof.” Something more needs to be said in order to connect expected utilities to rational action. The connection between this ranking and practical rationality is standardly expressed in the slogan, “You ought to maximize expected utility.” That is, you ought to bring about the proposition with the highest expected utility.
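To see this ranking machinery in action, here is a minimal sketch in Python. It is my illustration rather than anything from the text; the creek-crossing propositions, credences, and utilities are all invented, and P(Oi; A) may be read evidentially or causally, as footnote 4 explains.

def expected_utility(A, outcomes, P, U):
    # EU(A) = sum over outcomes Oi of P(Oi; A) * U(Oi)
    return sum(P(O, A) * U[O] for O in outcomes)

outcomes = ["cross safely", "get soaked", "stay dry at home"]
U = {"cross safely": 10, "get soaked": -5, "stay dry at home": 0}

credence = {
    ("cross safely", "ford"): 0.8,
    ("get soaked", "ford"): 0.2,
    ("stay dry at home", "ford"): 0.0,
    ("cross safely", "turn back"): 0.0,
    ("get soaked", "turn back"): 0.0,
    ("stay dry at home", "turn back"): 1.0,
}

def P(O, A):
    # the probability that proposition A gives to outcome O
    return credence[(O, A)]

propositions = ["ford", "turn back"]
ranking = sorted(propositions, key=lambda A: expected_utility(A, outcomes, P, U), reverse=True)
print(ranking)  # ['ford', 'turn back']; the slogan says bring about the top one

The sketch also previews the problem to come: nothing in the machinery itself stops us from adding a proposition like "someone cured cancer two weeks ago" to the list and watching it win the ranking.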

4 This description of expected utility, and the formula below, is intended to be neutral between Evidential Expected Utility Theory and Causal Expected Utility Theory. For evidentialists, P(Oi; A) will be interpreted as the conditional probability of Oi given A (i.e. P(Oi | A)), while for causalists, it will be interpreted as something like the probability that Oi would result, if A were to be true (i.e. P(A □→ Oi)). The difference between evidential and causal versions of expected utility theory will play no role in what follows.


But now a problem arises. Expected utilities can be assigned to any proposition whatsoever. Consider, for instance, the proposition that someone discovered a cure for cancer two weeks ago. This proposition has a very high expected utility. But even if this proposition has higher expected utility than any other proposition, there is no sense in which I ought to bring it about that someone discovered a cure for cancer two weeks ago.5 Intuitively, the expected utility assigned to the proposition that someone cured cancer two weeks ago is irrelevant to the question of what I ought to do now because bringing about this proposition simply isn't one of my options!

Therefore, in order to have a theory of what I ought to do, we need some way of specifying a narrower set of propositions, such that what I ought to do is bring about the proposition with highest expected utility in that narrower set. Let us call such a narrower set of propositions a set of options. In this sense, I am using "option" as a technical term defined by the just-mentioned role it plays in decision theory. Our task, then, is to say what counts as a set of options.6

In what follows, I argue that a theory of options must satisfy two desiderata. First, if something is an option for you, you must be able to do it (or, more exactly, taking options to be propositions, you must be able to make it true). Second, your options supervene on your mental states. I reject three initially tempting theories of options on the grounds that they each violate at least one of these desiderata. I then present my own theory of options, which I argue does satisfy the desiderata: your options are all and only the decisions you are presently able to make.

5 Of course, there would certainly be other propositions with higher expected utility, such as the proposition that someone discovered a cure for cancer and deposited $, in my bank account. In fact, it may be that there is no proposition with highest expected utility.
6 The problem of options can also be seen by looking at Savage's () decision theory. The basic elements of his theory are states, outcomes, and acts, and he defines the set of acts as the set of all functions from states to outcomes. So the set of acts includes things like every constant function, which yields the same outcome no matter what state of the world obtains. Needless to say, many things which count as acts on Savage's liberal definition are not the sorts of things that an agent can really perform. Consider an outcome in which I live a long life of health and happiness. In decision problems where this is an outcome, Savage's set of acts will include a function which outputs this happy outcome no matter which state of the world obtains, but it is generally not the case that I have available some act (on an ordinary understanding of the term) which will result in my health and happiness no matter what. So, in order to yield plausible claims about what an agent rationally ought to do in a given decision problem, we need to come up with some subset of the set of Savage acts which are the ones that should count as genuine options for the agent (or, if we want to maintain that the options available to an agent include all Savage acts, we need some criteria for coming up with the relevant states and outcomes in a given decision problem, so that the set of Savage acts does not include ones that are intuitively unavailable to the agent, like the act which results in health and happiness no matter what).
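The point of footnote 6 can be made vivid with a toy rendering of my own (not Savage's formalism verbatim): once acts are arbitrary functions from states to outcomes, the act space contains constant acts that no agent genuinely has available.

from itertools import product

states = ["creek is shallow", "creek is deep"]
outcomes = ["cross safely", "get soaked", "long life of health and happiness"]

# Savage's act space: every function from states to outcomes
acts = [dict(zip(states, combo)) for combo in product(outcomes, repeat=len(states))]
print(len(acts))  # 9 such functions for 2 states and 3 outcomes

# Among them is the constant act yielding the happy outcome come what may,
# which is not something an agent genuinely has available:
constant_happy = {s: "long life of health and happiness" for s in states}
print(constant_happy in acts)  # True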



. Skirting the Issue: A Minimalist Proposal

Before turning to these desiderata and to different theories of options, including my own, I want to consider a response which simply denies the existence of the problem I have raised. Such a response denies the need to come up with constraints on what can count as a set of options. This sort of minimalist approach would lower our ambitions for decision theory and thereby eliminate the need for a theory of what counts as a set of options. A proposal along these lines might take a number of forms. Nonetheless, they all leave something to be desired.

A first version of the minimalist proposal would be to take a hyper-pluralistic approach to sets of options. On this view, any set of mutually exclusive propositions counts as a set of options. And so what we have are lots of oughts, each relative to a different set of options. Thus, for each set of mutually exclusive propositions S, we can talk about what the agent oughtS to do, but there is no ought that isn't relativized in this way.

But this proposal is less than satisfying. If an agent is looking for guidance about what to do, it is unhelpful to simply throw out a multitude of different oughts, each relativized to a different set of propositions. It would be tempting for the agent to then ask which of these oughts she should pay attention to, and it is here that this hyper-pluralistic version of the minimalist proposal remains silent. Relatedly, having merely a profusion of different oughts makes it impossible to use decision theory to predict how rational agents will behave in any straightforward way. To predict an agent's behavior, given her beliefs and desires and the assumption that she is rational, it would seem that we would have to fix on one of those oughts as the one that is relevant for predicting behavior. But to privilege one of these oughts would be to abandon the minimalist proposal. Lastly, as mentioned in the previous section, lots of propositions are such that their ranking by expected utility is completely irrelevant to what an agent ought to do. The proposition that someone cured cancer a couple of weeks ago has highest expected utility among the set consisting of that proposition and its negation. But there is simply no sense in which I ought to bring it about that someone cured cancer a couple of weeks ago. I am subject to no rational criticism whatsoever for failing to make that proposition true.

A second version of the minimalist proposal is that decision theory only tells agents what their preferences should be, given their beliefs and desires (in particular, an agent should prefer A to B if and only if A has higher expected utility than B). That is, decision theory outputs recommended preferences, rather than recommended actions. And, for decision theory to output


recommended preferences, there is no need to specify what would count as a set of options.7

But this view is unsatisfying, and for much the same reasons as the first. First, on this proposal decision theory gives agents guidance about what to do only by telling them what their preferences ought to be. It is silent about what to do with these preferences. Second, decision theory would no longer tell us when an agent's actions make her subject to rational criticism. Of course, it does say when an agent's preferences make her subject to rational criticism, but it is silent about the link between irrational preferences and irrational actions. Finally, decision theory would be of limited use in predicting and explaining the behavior of rational agents. It would still help us predict and explain agents' preferences, but this is not the same as predicting and explaining their actions.

Now, it may be that it is impossible for a theory to say more than this, but we should first look at other alternatives before retreating to this more humble position. Perhaps the agent must bring more to the table than just her credences and utilities. Perhaps she must also have some particular set of options in mind in order for decision theory to be able to help her. This suggests a third version of the minimalist proposal, on which decision theory only applies to agents who already have some set of options in mind, where having a set of options in mind is a sort of sui generis mental state not captured merely by the agent's beliefs. And for agents who conceive of their available options as consisting of the members of the set S, decision theory says that they ought to bring about the member of S with the highest expected utility. On this view, decision theory cannot answer the question, "What should I do?" Rather, it can only answer questions of the form, "Of A, B, and C, which should I do?"

This version of the minimalist proposal avoids some of the problems facing the two previous versions. It does provide substantial guidance to agents, albeit only ones that have some set of available options in mind. And it does allow us to predict their behavior, albeit only when we know what set of available options they have in mind (in addition to their beliefs and desires).

But it is still unsatisfying. First, is there really no guidance that we can give to agents who don't already have a given set of available options in mind? Shouldn't decision theory have something to say to agents who simply haven't yet thought about what their options are? Second, what about cases where an agent is mistaken about what actions are available to her? Suppose an agent has in mind the set of available options consisting of A and B, but there is in fact a much better option C. Shouldn't decision theory tell her to do C in this case?

The minimalist proposal, in any of its various forms, avoids the need to come up with a theory of what counts as a set of options, but only at the cost of having decision theory play a more limited role than we might have hoped in providing guidance to agents, telling us when they are subject to rational criticism, and allowing us to predict and explain their behavior. Now, it might be that decision theory cannot play any more expansive role. But this should be our fallback position rather than our starting point. We should first explore other options to see if we can come up with a theory of sets of options that would allow decision theory to give us all that we would like it to provide, and retreat to this less ambitious position only if forced.

7 On this proposal, decision theory only constrains your attitudes and says nothing about actions. Arntzenius () also defends a view on which decision theory only outputs recommended attitudes. But on his view, the recommended attitudes are credences about what you will do. Decision theory doesn't say what you ought to do, but only what credences you ought to have about what you will do. Note that Arntzenius's view thus still requires the theory to say something about your options, since it must output recommended credences about which of those options you will take.

. Desiderata for a Theory of Options

There are constraints that a theory of options must satisfy if the rational ought is to play the evaluative, predictive/explanatory, and action-guiding roles outlined earlier. The first desideratum is easy. It is an ought implies can principle. Thinking of a set of options as a set of propositions (such that the one ranked highest by Expected Utility Theory is the one you rationally ought to make true), this first desideratum says:

Desideratum  If a proposition P is a member of a set of options for an agent S, then S is able to bring about P.

Why do we need this ought implies can principle? First, you are not subject to any rational criticism for failing to do something which in fact you couldn't have done. So if the rational ought is to play the crucial evaluative role in our theory, options cannot include things that you are unable to bring about. Second, ought implies can is essential for the predictive role of the rational ought, since we clearly would not want to predict that you would do something which in fact you cannot do. Third, the rational ought gives you poor guidance if it tells you to do something that you cannot do. So if the rational ought is to be action-guiding in some sense, options must consist only of propositions you are able to make true.


The second desideratum is a supervenience thesis. It says:

Desideratum  If something is a set of options for an agent S, then it is a set of options for any agent with the same mental states as S. What an agent's options are supervenes on her mental states.

Why adopt this supervenience desideratum?8 If what your options are failed to supervene on your present mental states, then what you rationally ought to do would likewise fail to supervene on your present mental states. On standard frameworks for ranking your options, such as Expected Utility Theory, how options are ranked depends only on your present mental states, and it is natural to expect our way of identifying your options to likewise depend solely on your present mental states. To the extent that expected utility theory ranks options in a way that is sensitive to your evidential state, I think we should expect our theory of what your options are in the first place to be similarly sensitive to your evidence. In other words, it would be unsatisfying to pair Expected Utility Theory, which is all about ranking options in a manner determined entirely by your mental states, with a theory of options on which your options depend on how the world in fact is, perhaps unbeknownst to you. The progress made by Expected Utility Theory toward a theory of rationality that is sensitive to your uncertainty about the world would be for nought.

Moreover, Desideratum  is needed in order for the rational ought to play the three theoretical roles highlighted earlier. This can be seen by considering a specific case:

Your Doppelgänger You are hiking through the forest when the trail leads to a raging creek. Meanwhile, your doppelgänger is hiking through her own forest and also comes to a raging creek. Facing your respective creeks, you must each decide what to do. At the moment of decision, you have the same mental states but different physical abilities. You are able to ford the creek, but your doppelgänger is not. Perhaps she has weaker leg muscles, or perhaps her creek is somewhat deeper than yours. We needn't specify; the important point is that although the two of you are mentally just the same, you differ in your physical abilities.

If options failed to supervene on your mental states (e.g. by consisting of things that you are physically able to do), we could get the result that while you ought to ford the creek, your doppelgänger ought to do something quite different, like give up and head home.

8 I recently learned that Gibbard (, ) is sympathetic to this supervenience desideratum. Using the term "alternatives" for what I mean by "options," he writes, "Alternatives must be subjectively characterized, so that the same alternatives are available on subjectively equivalent occasions."


But this result is incompatible with the rational ought's being able to play its three theoretical roles. It would fail in its predictive role. In the case of you and your doppelgänger, we would certainly not want to predict that you would ford the creek while your doppelgänger would, for instance, do an about-face and head home without so much as getting her feet wet. We should at least predict that you would start off doing the same thing, even if down the line you gained different evidence as a result of your differing physical abilities and thereupon changed course. For instance, it would be entirely reasonable to expect that you might both head into the water intending to ford, but that a short way in your doppelgänger might realize that she is unable to complete the crossing, for her muscles are too weak or the water too deep, and then abandon the attempt and head home while you complete the ford. But what would be unreasonable would be to predict that the two of you would start off on completely different paths, and this is the risk if options fail to supervene on mental states. (As a preview, what seems reasonable to expect is that you might both try to ford the creek. In effect, I will be proposing to interpret options as tryings, albeit where tryings are understood as mental actions.)

Second, consider the action-guiding role. If our theory told you to ford the creek but told your doppelgänger to head home, it would fail to be action-guiding, as this advice would fail to be sensitive to your information about the world. You have the same information about the world and are in the same mental states, so if the advice given by our theory differed for the two of you, this advice would not appropriately take into account your uncertainty about the world.

Last is the evaluative role. If you forded the creek while your doppelgänger immediately turned around and headed home without even trying to ford, we would think that at least one of you was being irrational. At least one of you wasn't behaving in a way that made sense, given your perspective on the world. Therefore, what your options are should supervene on your mental states.9

9 It is tempting to add a third desideratum stating that if something is an option for you, you must be certain that you are able to do it. For without this desideratum, it might be that you ought to perform some option even though you doubt whether you can do so and believe that the costs of trying but failing to do so would be dire. The expected utility of an option does not take into account any uncertainty about whether you can perform that option or the costs of trying but failing. But this third desideratum is not needed to motivate my account of options, and in any event it is highly questionable whether any theory of options could satisfy it. Perhaps there is no sort of action—not even a mental act of making a decision—such that you can rationally be certain about whether you are able to perform it. Therefore, I do not endorse this potential third desideratum.



. Unsuccessful Theories of Options

Inspired only by the ought implies can principle embodied in Desideratum , it is initially tempting to adopt the proposal that an agent's options consist of all the actions that she is in fact able to perform. More specifically, this yields:

Options-as-Actual-Abilities A set of propositions is a set of options if it is a maximal set of mutually exclusive propositions, each of which is such that the agent has the ability to bring it about.10

Not only is Options-as-Actual-Abilities intuitively attractive, but it also has a formidable pedigree, having been defended by many prominent decision theorists, including Richard Jeffrey and David Lewis. Jeffrey (, ) regards options as acts, where "An act is then a proposition which is within the agent's power to make true if he pleases." And in "Preference among Preferences," he writes that "To a first approximation, an option is a sentence which the agent can be sure is true, if he wishes it to be true" (Jeffrey (, )). In "Causal Decision Theory" Lewis (, ) writes,

Suppose we have a partition of propositions that distinguish worlds where the agent acts differently . . . Further, he can act at will so as to make any one of these propositions hold; but he cannot act at will to make any proposition hold that implies but is not implied by (is properly included in) a proposition in the partition. The partition gives the most detailed specifications of his present action over which he has control. Then this is a partition of the agent's alternative options.

But Options-as-Actual-Abilities fails in virtue of violating Desideratum . Actual abilities do not supervene on mental states. Which actions you are able to perform depends not only on your mental states but also on your physical state and your environment. So which actions you are able to perform can vary independently of your mental state. This was the upshot of Your Doppelgänger.

Focusing on Desideratum  leads to another possible theory of options, on which an agent's options consist of all and only the actions that she believes she is able to perform. More precisely:

10 The set must be maximal in the sense that there is no other proposition incompatible with the members of that set which is also such that the agent has the ability to bring it about. Note that this proposal allows for the possibility of multiple sets of options for an agent, since we can cut up the things that she is able to bring about in more or less fine-grained ways and still have a maximal set of mutually exclusive propositions, each of which she is able to bring about.
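The maximality clause in this footnote can be pictured concretely. In the following small sketch of my own, propositions are modeled as sets of possible worlds: a candidate option set must be pairwise disjoint, and no further bring-about-able proposition may be incompatible with all of its members.

def mutually_exclusive(props):
    # no two propositions in the set can be true together
    return all(p.isdisjoint(q) for i, p in enumerate(props) for q in props[i + 1:])

def maximal(props, bringable):
    # no further bring-about-able proposition is incompatible with every member
    others = [a for a in bringable if a not in props]
    return not any(all(a.isdisjoint(p) for p in props) for a in others)

# the agent can bring about any of these propositions (sets of worlds w1..w3):
bringable = [{"w1"}, {"w2"}, {"w3"}, {"w1", "w2"}]

coarse = [{"w1", "w2"}, {"w3"}]  # mutually exclusive and maximal
gappy = [{"w1"}, {"w2"}]         # mutually exclusive but not maximal: {"w3"} is left out

print(mutually_exclusive(coarse), maximal(coarse, bringable))  # True True
print(mutually_exclusive(gappy), maximal(gappy, bringable))    # True False

The finer partition consisting of {"w1"}, {"w2"}, and {"w3"} passes both tests as well, matching the footnote's observation that an agent's options can be cut up in more or less fine-grained ways.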


Options-as-Believed-Abilities A set of propositions is a set of options if it is a maximal set of mutually exclusive propositions, each of which is such that the agent believes she has the ability to bring it about.11

Options-as-Believed-Abilities satisfies Desideratum , since on this theory what your options are depends on your mental states. But Options-as-Believed-Abilities is also a step back relative to Options-as-Actual-Abilities, for it fails to satisfy Desideratum  (ought implies can). If you are sure you can ford the creek, and fording the creek has the highest expected utility among the things that you believe you can do, then we get the result that you ought to ford the creek, even if you are in fact unable to do so.

Options-as-Actual-Abilities failed because it had an agent's options being determined solely by how the world actually is, irrespective of how the agent believes the world to be. Options-as-Believed-Abilities failed because it had an agent's options being determined by how she believes the world to be, irrespective of how it actually is. Perhaps these problems can be solved by taking an agent's options to be the things that are deemed to be options by both of these proposals. That is, we might take an agent's options to be the set of propositions that she correctly believes she can bring about. At this point, we might even replace the mention of true belief with reference to knowledge, giving us:

Options-as-Known-Abilities A set of propositions is a set of options if it is a maximal set of mutually exclusive propositions, each of which is such that the agent knows that she is able to bring it about.12

This proposal clearly satisfies Desideratum , since knowledge is factive. But does it satisfy Desideratum ? That depends on whether knowledge is a mental state. Williamson (, ch. ) argues that it is. His argument proceeds primarily by rejecting reasons for thinking that knowledge is not a mental state. First, one might think that knowledge isn't a mental state, since whether you know a proposition depends on factors external to your physical state (e.g. on whether the proposition is true). But this is true of mental states more generally, since the contents of your attitudes are determined by factors external to your physical state. Second, one might think that knowledge isn't a mental state because you are not always able to tell whether you know, as opposed to merely believe, a proposition. But if Williamson's anti-luminosity argument (ch. ) is sound, no states—mental or

11 Again, the set must be "maximal" in the sense that there is no other proposition incompatible with the members of that set which is also such that the agent has the ability to bring it about.
12 Once again, the set must be "maximal" in the sense that there is no other proposition incompatible with the members of that set which is also such that the agent knows that she is able to bring it about.


non-mental—are such that whenever they obtain, you are in a position to know that they obtain. If Williamson is right, then this third proposal—that your options are all and only the actions you know you are able to perform—does indeed satisfy both of our desiderata. And I want to remain neutral on whether knowledge is in fact a mental state.13 But I think that there are in fact good grounds for strengthening Desideratum  so that options must supervene not only on your mental states, but on your non-factive mental states, to yield:

Desideratum + What your options are supervenes on your present non-factive mental states.

Why adopt this strengthened version of Desideratum ? What is special about non-factive mental states? There may be theoretical reasons for privileging non-factive states in theorizing about rationality. For instance, Wedgwood () argues that where a factive attitude is constituted by a non-factive attitude, as in the case of knowledge and belief, rational requirements should make reference to the latter, since the latter will figure in more proximal explanations of belief and behavior.

Wedgwood's argument for the centrality of non-factive mental states is worthy of consideration, but I think that there is a simpler, less theoretical motivation for strengthening Desideratum  to Desideratum +. Consider two cases, one in which you know you are able to φ, and one in which you are in a Gettier situation and hence only justifiably believe the true proposition that you are able to φ. Intuitively, I think, the facts about what you rationally ought to do are the same in both cases; either you ought to φ in both cases or you ought not φ in both cases.14

Case : You are hiking through the forest and come to a raging creek. You must either ford it or turn back. Your hiking partner, who knows how deep the creek is, knows that you are able to ford it and tells you that you are able to do so. On the basis of this testimony, you gain knowledge that you are able to ford the creek.

Case : Just as in Case , except your hiking partner in fact doesn't know how deep the creek is, but instead merely confidently asserts it. On the basis of this testimony, you gain the justified true belief that you are able to ford the creek. But because your hiking partner

13 See Fricker () for the opposing view that knowledge is not a mental state.
14 Note that we cannot get around this issue by saying that your options are all and only the actions you truly believe you are able to perform. For this proposal fails to satisfy Desideratum . Which actions you truly believe you are able to perform does not supervene on your mental states, regardless of whether we count factive attitudes like knowledge as mental states. True belief is not a mental state by anyone's lights.


did not know that you are able to ford it, your justified true belief that you are able to ford the creek does not constitute knowledge.15

In Case , you know (i) that you are able to ford the creek, and (ii) that you are able to turn back. In Case , you know (ii) but not (i), since your belief that you are able to ford the creek does not constitute knowledge, despite being true and justified. So if your options are all and only the actions you know you are able to perform, then fording the creek is among your options in Case  but not in Case . Assuming that you much prefer getting across to turning back, this means that we will get the result that you rationally ought to ford the creek in Case  but not in Case .

This is, to my mind, a highly problematic result. Return to the evaluative and predictive roles of the rational ought. By hypothesis, fording the creek is your best option in Case . And the only difference between the cases is that your justified true belief that you can ford the creek constitutes knowledge in Case  but not in Case ; the cases are otherwise identical in all physical and mental respects. Therefore, if in Case  you turned around and headed back instead of fording the creek, we would be inclined to judge you harshly. We would deem you highly irrational. Moreover, we would want to predict that insofar as you will go ahead and ford the creek in Case , you will also do so in Case .16

One might object that the above argument assumes that if in Case  fording the creek isn't an option, then turning back is the best option. But even if fording the creek isn't an option, since you don't know you are able to do so, maybe trying to ford the creek is still an option. After all, even though you don't know that you are able to successfully ford the creek, you presumably do know that you can try. And given that you want to get across and believe you are able to do so, presumably trying to do so looks better than just giving up and heading back. So while we would indeed evaluate you harshly if you just turned around, this doesn't require that fording the creek count as an option for you, but only that trying to ford counts as an option. And similarly, we still get the result that we would predict that you would try to ford.

15 This assumes that testimony only yields knowledge of the testified-to proposition if the testifier knows that proposition. See Fricker (, ) for a defense. If you object to this view of testimony, substitute another way of giving you a Gettier-ized belief that you are able to ford the creek.
16 Of course, as Williamson (, ch. ) notes, the fact that your justified true belief that you can ford the creek is Gettier-ized in Case  means that you are more likely to encounter defeating evidence down the road. Perhaps your friend will have a change of heart and confess his ignorance, whereas your friend in Case  is unlikely to inexplicably lie and claim ignorance. But the point is that we would predict that you would at least start off in the same way in Case  and Case , even if the cases differ in terms of what evidence you might encounter later on.


However, once we move to evaluating tryings, it seems to me that we should simply abandon the proposal that your options are the actions you know you are able to perform (which may include tryings), and instead say that your options are simply the tryings themselves.17 This is the approach I advocate.18

. Options as Decisions

The proposals considered above each failed to simultaneously satisfy both of our desiderata on a theory of an agent's options—that an agent's options consist only of things she is able to bring about and that they supervene on her non-factive mental states. But we can satisfy these desiderata by conceiving of an agent's options as consisting of the decisions open to her.

This is a natural proposal. In a case where you don't know whether you are able to do something like ford a creek, it is tempting to think that the option we should really be evaluating is something like your trying to ford the creek. Of course, we might ordinarily think of tryings as physical actions, so that trying to ford the creek requires actually wading in, for instance. But on this conception of tryings, which things you can try to do will fail to supervene on your mental states, just as which ordinary actions (non-tryings) you can perform will fail to supervene on your mental states. For instance, whether you are able to wade into the water will depend not just on your mental states but also on whether your legs are working, whether your shoelaces are tied together, and the like. But if tryings are understood as mental actions which start the ball rolling, so to speak, then I am happy to think of your options as tryings. To enforce

17 Relatedly, another objection to Options-as-Known-Abilities is that, plausibly, one can be able to perform some action (and also know that one is able to do so) even if, were one to decide to do it, one would fail. I may be able to jump across the creek even if on this occasion, were I to decide to try to jump it, my foot would slip slightly and I'd land in the water. But an agent is in no way irrational if she makes the decision to perform the best among the actions she knows she can perform, but the external world gets in the way of her actually succeeding in performing that action. But Options-as-Known-Abilities entails that in this case, the agent has failed to act as she rationally ought to have. Here again, it is natural to say that the agent is rational because, even though she didn't succeed in performing that action, she nonetheless tried to do so. This further suggests that the options we should really be evaluating, in the first instance, are tryings, understood as decisions about what to do, rather than ordinary physical, temporally extended actions.
18 Jeffrey (, ) discusses this approach (but without endorsing it) when he writes that "one can always take a strict point of view from which the agent can only try to perform the act indicated by the declarative sentence. He can try to bring red wine, but may fail through picking up the wrong bottle, or by dropping the bottle en route." I am sympathetic to the spirit of this strict point of view, but I emphasize that my view is not that tryings are the only actions an agent is "really" able to perform (a metaphysical thesis about action), but rather that tryings are the actions which are, in the first instance, to be evaluated for rationality or irrationality (a normative thesis about rationality).


this reading, I will call these mental tryings "decisions." This approach yields the following theory of options:19

Options-as-Decisions A set of propositions is a set of options for agent S at time t if it is a maximal set of mutually exclusive propositions of the form S decides at t to φ, each of which S is able to bring about.20

Then, the highest-ranked such proposition (by expected utility) will be such that the agent ought to bring it about. That is, she ought to make that decision. (I am intending "S decides to φ" to be read in such a way that it is incompatible with "S decides to φ ∧ ψ," even though there is a sense in which if you decide to do two things, you thereby decide to do each one. If you have trouble getting the intended reading, just add in "only" to the proposal, so that a set of options is a maximal set of propositions of the form S only decides at t to φ, each of which the agent is able to bring about.21)

But will Options-as-Decisions satisfy desiderata  and +? Well, it clearly satisfies Desideratum  (ought implies can), since it is built into the statement of Options-as-Decisions that only decisions you are able to make qualify as options. So the question is whether Options-as-Decisions satisfies Desideratum +. Is it the case that which decisions you are able to make supervenes on your non-factive mental states?

The answer to that question depends on what it takes to be able to make a given decision. I do not want to be fully committal about abilities to make decisions,

Then, the highest-ranked such proposition (by expected utility) will be such that the agent ought to bring it about. That is, she ought to make that decision. (I am intending “S decides to φ” to be read in such a way that is incompatible with “S decides to φ ∧ ψ,” even though there is a sense in which if you decide to do two things, you thereby decide to do each one. If you have trouble getting the intended reading, just add in “only” to the proposal, so that a set of options is a maximal set of propositions of the form S only decides at t to φ, each of which the agent is able to bring about.21 ) But will Options-as-Decisions satisfy desiderata  and +? Well, it clearly satisfies Desideratum  (ought implies can), since it is built into the statement of Options-as-Decisions that only decisions you are able to make qualify as options. So the question is whether Options-as-Decisions satisfies Desideratum +. Is it the case that which decisions you are able to make supervenes on your non-factive mental states? The answer to that question depends on what it takes to be able to make a given decision. I do not want to be fully committal about abilities to make decisions, 19

Weirich () is an early defender of this sort of approach. I also have an ally in John Broome, who holds that rational requirements supervene on mental states, and who concludes that therefore practical rationality can only tell you to perform certain mental acts such as forming intentions. He writes that the supervenience of rationality on mental states “rules out the view that rationality requires you to take a means to your ends, when taking means involves a non-mental act. Suppose you fail to take a means to an end of yours through no fault of your own. Say you unexpectedly find yourself unable to make the necessary physical movements. Alternatively, although you are able to take the means, suppose something in the outside world prevents you from doing so . . . In these cases, what prevents you from taking means to your end is something outside your mind. According to the principle that rationality supervenes on the mind, you may nevertheless be rational” (Broome (, )). 20 Once again, “maximal” means that there is no proposition of the form S decides at t to φ which is not a member of the set but which is incompatible with each member of the set. Note that maximality and mutual exclusivity apply not to the contents of a decisions, but to propositions about which a decision was made. Hence the set {S decides at t to φ,S decides at t not to φ} will not count as a set of options, since it does not include propositions about other decisions that S might have made (e.g. the proposition that S decides at t to ψ). 21 In the terminology of Portmore (), Options-as-Decisions entails that all options are “maximal options,” where a maximal option is defined as an option φ such that there is no other option ψ such that your ψ-ing involves your φ-ing, but not vice versa. This follows from the fact that, on my view, your options form a partition; being mutually exclusive, performing one option cannot involve your performing another option as well. See Portmore () for further discussion of the nature of options.


but I think that on any attractive theory of decisions, which decisions you are able to make will supervene on your mental states in a way that allows Options-as-Decisions to satisfy Desideratum +. For instance, we might hold that you are able to decide (or intend) to φ if and only if you do not believe that, were you to decide to φ, you would not φ.22 You can make a decision so long as you do not believe that you wouldn't carry it out.

This account could be supported, inter alia, by the Toxin Puzzle (Kavka ()). In the Toxin Puzzle, you are presented with a drink containing a mild toxin which will cause you moderate temporary discomfort but will not result in any long-term harm. You are offered a large sum of money if at midnight tonight you decide to drink the toxin tomorrow afternoon. You do not actually have to drink the toxin to gain the reward; you only have to make the decision at midnight to do so. It seems that despite the benefits of deciding to drink the toxin, you are unable to do so, for you realize that were you to decide at midnight to do so, you would promptly reconsider and overturn your earlier decision. You cannot decide to drink the toxin because you believe that, were you to decide to do so, you would not carry out this decision.

Now, the Toxin Puzzle only supports the left-to-right direction of the above biconditional—that if you are able to decide to φ, then you do not believe that, were you to decide to φ, you would not φ.23 But if we take this to be the only restriction on which decisions you are able to make, we get the full biconditional. And if this account is right, then Options-as-Decisions satisfies Desideratum +, since it is your beliefs which determine which decisions you can make.

Other accounts of abilities to decide will also allow Options-as-Decisions to satisfy Desideratum +. One might hold that in order to be able to decide to φ, you must not only lack the belief that your decision to φ would be ineffective; you must also have the belief that your decision to φ would be effective. Or it might be that whether you are able to make some decision depends not only on your beliefs, but also on your desires, so that e.g. you are unable to decide to do something to which you have an extremely strong aversion. But if this aversion is a desire or a fear or otherwise part of your mental state, then this is still compatible with the claim that Options-as-Decisions satisfies Desideratum +. Other sorts of psychological pathologies might also impact which decisions you can make, but again, if these psychological pathologies are part of your mental state, my account of options is still in good shape. (Of course, psychological pathologies might not

22 We might also want to require that you possess the relevant concepts in φ to avoid the result that an unsophisticated agent who lacks e.g. the concept of special relativity is able to decide to research special relativity, even though she lacks the belief that, were she to decide to research special relativity, she wouldn't do so. Thanks to Rachael Briggs for pointing this out.
23 Thanks to Douglas Portmore for emphasizing this.


be propositional attitudes, but I am conceiving of your mental state as including both attitudinal and non-attitudinal states.)

But what if an agent's abilities to make decisions are restricted not just by her own mental states, but also by external forces? In another context, Frankfurt () considers the possibility of a demon who can detect what's going on in your brain and will strike you down if he finds out that you are about to make the decision to φ. Plausibly, you lack the ability to decide to φ, even if you believe that, were you to decide to φ, you would φ. The possibility of such demons threatens the claim that which decisions you are able to make supervenes on your mental states, since which decisions you can make depends also on whether or not such a demon is monitoring you.

The worry is most pressing in the case where the Frankfurtian demon aims to prevent you from making a decision that would otherwise be optimal. For what is really important about the relationship between options and your mental states is that what your best option is should supervene on your mental states, so that what you ought to do supervenes on those states. This is what is required to ensure that rational agents with identical mental states will behave (at least initially) in identical ways. It is less important that what your sub-optimal options are likewise supervenes on your mental states, for what sub-optimal options a rational agent has will not affect what she will or ought to do. For this reason, we should really weaken Desideratum + so that instead of saying that what your options are supervenes on your (non-factive) mental states, it instead says that what your expected-utility-maximizing option is supervenes on your (non-factive) mental states. I leave this modification implicit in what follows.

Consider, then, a case in which a Frankfurtian demon is monitoring you with an eye toward preventing you from deciding to φ, where this decision would maximize expected utility if you were able to make it. In my view, this is a situation where you lack the freedom to exercise your rational capacities which is necessary in order for you to be subject to the demands of practical rationality in the first place. In the case at hand, the decision to φ looks best out of all the decisions you believe you are able to make, but the demon will strike you down if it detects that you are about to φ. What ought you to do in this case? Certainly, it is not that you ought to make some decision other than the decision to φ, since all such decisions look inferior. And it is not the case that you ought to decide to φ, since ought implies can. Instead, there simply isn't anything that you ought to do; rather, you ought to be in a state of being about to decide to φ, where this will lead to your being struck down before you are actually able to do anything at all. The rational ought thus only applies to agents who are not subject to the whims of Frankfurtian demons who aim to prevent them from making the decision that looks best by


their own lights. In this way, once we restrict our attention to agents to whom the rational ought applies, what an agent's best option is will supervene on her mental states. I conclude that Options-as-Decisions will satisfy the two other desiderata to which a theory of options is subject. Indeed, I think that it is the only theory of options which can satisfy these desiderata.

Note also that Options-as-Decisions is very likely the only time-slice-centric theory of options available. If options include temporally extended actions (or, more exactly, propositions about such actions) that are carried out over a long period of time, then what actions you are able to perform will depend on problematic facts about personal identity over time. In Double Teletransportation, for instance, what actions Pre is able to perform will depend on the facts about identity. Suppose that she will enter the machine at noon, and consider an action like typing an e-mail, which is performed, if at all, over an extended period of time. Whether at : Pre is able to perform that action will depend (inter alia) on whether she is identical to Lefty, to Righty, to neither, or to both. If she is identical to neither, then she is unable to type the e-mail, since she will not be around long enough to complete the task. Now suppose that Lefty will be able-bodied, while Righty will be handless. If Pre is identical to Righty and not to Lefty, then she is unable to type the e-mail, since she will shortly lose her hands.24 But if she is identical to Lefty or to both Lefty and Righty, then she is able to type the e-mail. But in my view, what Pre rationally ought to do does not depend on these facts about personal identity over time, and so options should be understood in a time-slice-centric way. Options-as-Decisions does just this, since on this view, options are the sorts of things that can be done instantaneously, or as close to instantaneously as possible. This is a further virtue of Options-as-Decisions.
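Before turning to the semantics of "ought," it may help to see the pieces assembled. The following schematic sketch is my illustration, not the book's machinery: candidate decisions are filtered by the agent's beliefs, following the ability account floated above, and the survivors are ranked by expected utility, so the whole computation runs off non-factive mental states, as Desideratum + demands. All names and numbers are invented.

def available_decisions(candidates, believes_decision_would_be_idle):
    # Options-as-Decisions plus the belief account: you can decide to phi
    # iff you don't believe that, were you to so decide, you would not phi
    return [d for d in candidates if not believes_decision_would_be_idle(d)]

def rationally_ought_to_decide(candidates, believes_decision_would_be_idle, eu):
    options = available_decisions(candidates, believes_decision_would_be_idle)
    return max(options, key=eu)

# Toy Toxin-Puzzle-flavored case: the agent believes a midnight decision to
# drink would be overturned, so that decision isn't among her options at all.
candidates = ["decide to drink the toxin", "decide to refuse"]
eu = {"decide to drink the toxin": 100, "decide to refuse": 0}.get
believes_idle = lambda d: d == "decide to drink the toxin"

print(rationally_ought_to_decide(candidates, believes_idle, eu))  # decide to refuse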

. Options and the Semantics of Ought

One might worry that my theory of options is committed to an implausible error theory about our ordinary use of the term "ought." While we do say things like "Jane ought to decide to head home," we also apply "ought" to phrases referring to ordinary sorts of actions, as in "Jane ought to head home." But if heading home is, strictly speaking, not among Jane's options (though deciding to head home is), is the latter sentence false (or perhaps even truth-valueless)? And if so, why are we happy to assert it in the envisaged situations?

24 I am assuming that typing an e-mail requires one to use one's hands. Other ways of composing e-mails, such as using voice-recognition software, would count as writing an e-mail, but not typing an e-mail.


I think that there are two possible responses here. The first is to say that, indeed, a sentence like "Jane ought to head home" is false (or truth-valueless), since it involves a sort of category mistake. But we can give a pragmatic explanation of why such a sentence is often assertable. For instance, we might point out that when "Jane ought to head home" is assertable, it is true that her deciding to head home has highest expected utility among her options and that if she decides to head home, she will in fact do so. And so an assertion of "Jane ought to head home" will not mislead. Instead, it will convey the true proposition that Jane ought to make the decision to head home, and it will do so in a more concise manner than the (slightly) more unwieldy and prolix "Jane ought to decide to head home."

A second possible response, which I prefer, allows that assertion to be true in the strictest sense. It does so by distinguishing derivative and non-derivative ways in which ought claims, and the propositions they express, can be true. A claim of the form "S ought to φ" is non-derivatively true just in case φ-ing is an option for S and has highest expected utility among S's options. If Options-as-Decisions is true, then the only ought claims that are non-derivatively true are those where "φ" denotes a decision that S is able to make. So claims like "Jane ought to decide to head home" can be non-derivatively true while claims like "Jane ought to head home" cannot be non-derivatively true. But an ought claim can also be derivatively true even if the complement of "ought" does not denote one of the subject's options. We might say that the proposition that Jane ought to head home is derivatively true just in case (i) the decision to head home has highest expected utility among Jane's options and (ii) Jane's deciding to head home would non-deviantly cause her to head home. More generally, the proposition that S ought to φ is true just in case S's deciding to φ has highest expected utility among her options and would non-deviantly cause her to in fact φ. On this proposal, many of the ought claims that we are willing to assert will not be non-derivatively true, since they do not contain a decision-denoting phrase as the complement of "ought." But they will still be true, only derivatively so.25

25 Note that derivative and non-derivative truth are not really different kinds of truth. It is not as though derivative truth is truth of a lower status. Rather, the point is just that the sentences or propositions that I am calling derivatively true are such that their truth depends on, and is partially grounded in, the truth of a related sentence or proposition. The truth of a sentence of the form "S ought to φ" depends on, and is partially grounded in, the truth of a sentence of the form "S ought to decide to φ," but not vice versa. For on my view, "S ought to φ" entails, but is not entailed by, "S ought to decide to φ." So the distinction between derivative and non-derivative truth is not a distinction between different grades of truth (I doubt that there are any such different grades of truth), but rather a distinction between ways in which the truth of some sentences or propositions depend on the truth of others.
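As a gloss of my own on this second response, the two truth conditions can be stated almost algorithmically. The Jane example and every value below are invented for illustration.

def non_derivatively_true(phi, eu_of_decision):
    # "S ought to decide to phi": that decision maximizes expected utility among S's options
    target = "decide to " + phi
    return target in eu_of_decision and max(eu_of_decision, key=eu_of_decision.get) == target

def derivatively_true(phi, eu_of_decision, decision_would_cause_act):
    # "S ought to phi": the decision is best and would non-deviantly cause the phi-ing
    return non_derivatively_true(phi, eu_of_decision) and decision_would_cause_act(phi)

eu_of_decision = {"decide to head home": 5, "decide to keep hiking": 2}
would_cause = lambda phi: phi == "head home"  # suppose Jane's decision would be effective

print(non_derivatively_true("head home", eu_of_decision))           # True
print(derivatively_true("head home", eu_of_decision, would_cause))  # True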


I think that this proposal gives the right results. There will be no propositions that we intuitively regard as clearly true and that we are happy to assert which nevertheless come out false on my view. All of the assertable propositions that we regard as clearly true will be true on my view, though some of them will be only derivatively true. This second view about the truth conditions of ought claims is, therefore, not a revisionary one, and so facts about which ought claims we are willing to assert and which we regard as true pose no threat to Options-as-Decisions.

Options and Diachronic Tragedy

In Chapter , we saw that there is one style of argument which can be marshalled in support of a wide variety of potential principles of rationality, including the principles I rejected in Chapters  and —diachronic principles and reflection principles. Recall that a Tragic Sequence is a sequence of actions S such that at all times during the performance of the sequence, you prefer performing some other possible sequence of actions S′ over performing S. A Tragic Attitude is one such that, if you have it, you will sometimes prefer performing each member of some Tragic Sequence even though, by definition, you prefer not to perform the sequence as a whole. The Diachronic Tragedy Argument concludes, from the fact that a certain sort of attitude is Tragic, that it is irrational to have that sort of attitude. If sound, the Diachronic Tragedy Argument means that it is irrational to violate Conditionalization, to violate Reflection, to violate Utility Conditionalization, to violate Preference Reflection, to be time-biased, to have intransitive preferences, to have imprecise preferences or credences, to violate the Substitution Axiom, and even to prefer dominant options. A powerful argument indeed!

The inference from predictable exploitability (in the form of preferring each member of some Tragic Sequence) to the irrationality of the relevant sort of attitude is often taken as intuitively compelling and not in need of further argument. But we should be reluctant to trust brute intuition here. For one thing, while some Tragic Attitudes seem quite clearly irrational, such as intransitive preferences, others seem paradigmatically rational, such as the preference for dominant options. For another, the Diachronic Tragedy Argument draws conclusions about the rationality of attitudes from claims about the rationality of actions. But we now have a theory about the rationality of actions. We have a time-slice-centric theory of what your options are—Options-as-Decisions. And we have the standard time-slice-centric theory of how options are to be ranked—Expected Utility Theory. So it will be important to look carefully at how the Diachronic Tragedy Argument fares in light of this time-slice-centric picture of practical rationality endorsed by
Time-Slice Rationality. When we do so, we find two compelling reasons to reject the Diachronic Tragedy Argument.

. Diachronic Tragedy and the Prisoner's Dilemma

The Diachronic Tragedy Argument begs the question against a defender of Time-Slice Rationality. It is uncontroversial that collections of distinct agents can act in a way that produces a tragic, mutually disadvantageous outcome without there being any irrationality. The proponent of the Diachronic Tragedy Argument must assume that this cannot happen with collections of time-slices of the same agent; if a collection of time-slices of the same agent produces a tragic outcome, there is ipso facto something irrational going on. Needless to say, this assumption will not be granted by the defender of Time-Slice Rationality, who thinks that the relationship between time-slices of the same agent is not importantly different, for purposes of rational evaluation, from the relationship between time-slices of distinct agents.

Let me spell out this point. Cases of Diachronic Tragedy have the structure of Prisoner's Dilemmas, with your different time-slices as the prisoners. In the Prisoner's Dilemma, prisoners A and B have each been arrested and accused of burglary. They must each choose whether to defect or cooperate. Defecting amounts to ratting on the other guy, while cooperating amounts to obeying the criminals' code of silence. If both prisoners defect, then each will get five years in prison. If both cooperate, then each will get three years in prison. But if one prisoner defects and the other cooperates, then the former will get off with no prison time, while the latter will face ten years in prison. Assuming that each prisoner cares only about getting the minimum prison time possible, this gives us the following decision matrix, where (xth, yth) indicates that the outcome is ranked xth best by A and yth best by B:

                 B defects     B cooperates
A defects        (3rd, 3rd)    (1st, 4th)
A cooperates     (4th, 1st)    (2nd, 2nd)

The matrix shows that each prisoner is better off defecting, no matter what the other one will do. Consider things from A's point of view. Suppose B will cooperate. Then it is better for A to defect, since defecting will yield no prison time whereas cooperating will yield three years of prison time. Now suppose B will defect. Then it is better for A to defect, since defecting will then yield five years in prison instead of the ten which would result from cooperating. So A is better off defecting, no matter whether B will defect or cooperate. A prefers the collective action <A defects, B cooperates> to the collective action <A cooperates, B cooperates> and likewise prefers <A defects, B defects> to <A cooperates, B defects>. And since the situation is symmetrical, B likewise is better off defecting, no matter what A does. But A and B are each better off if they both cooperate than if they both defect. For by both cooperating, each faces three years in prison instead of the five years each will face if they both defect. So, when A and B each acts in his own interest and defects, their actions predictably result in a mutually dispreferred outcome. The collective action <A defects, B defects> is thus analogous to a Tragic Sequence, since both parties disprefer it to the alternative collective action <A cooperates, B cooperates>.

In cases of Diachronic Tragedy, your t1 and t2 time-slices1 play the roles of prisoners A and B. To see the analogy, consider the decision matrix for the case of the Russian Nobleman (where (xth, yth) indicates that the outcome is ranked xth best by your youthful t1 self and yth best by your older t2 self):

                      Donate Late    Don't Donate Late
Donate Early          (3rd, 3rd)     (1st, 4th)
Don't Donate Early    (4th, 1st)     (2nd, 2nd)

Each of your t1 and t2 selves prefers to donate, no matter what the other does. But by each acting on her preferences at the time of action and donating, your t1 and t2 selves yield a mutually dispreferred outcome. Each prefers that both refrain from donating rather than that both go ahead and donate. So cases of Diachronic Tragedy are intrapersonal Prisoner's Dilemmas. In the standard, interpersonal Prisoner's Dilemma, it is natural to think that neither prisoner is being irrational when she defects.2 Nor is there any sort of group-level irrationality.3 The Prisoner's Dilemma is just a case where two people predictably wind up with a mutually dispreferred outcome without anyone being

1 Of course, the number of time-slices will differ depending on the number of decision points in each case. Most cases of Diachronic Tragedy involve two decision points, while the case involving intransitive preferences involves three and the one involving the preference for dominant options involves infinitely many decision points. I ignore this complication here for ease of exposition.
2 For a dissenting view, see Gauthier () and McClennen ().
3 One might be tempted to go further and argue that there are no such things as group agents. But this eliminativist position is not needed for a defense of my view, and it also faces powerful objections. List and Pettit () defend a view they call "non-reductive realism" about group agents. The realism about group agents is motivated by the utility of taking what Dennett () calls the "intentional stance" toward groups—predicting, explaining, and criticizing their behavior by means of attributing intentional states such as beliefs and desires to them. The non-reductive aspect of their view, on which the beliefs and desires of group agents are not reducible in any simple and straightforward way to the beliefs and desires of their members, is motivated by impossibility theorems in the theory of judgment aggregation, which show that, for instance, the beliefs and desires of a group cannot be a simple
function of the beliefs and desires of their members without the group winding up with incoherent belief and desire states. My time-slice-centric theory of rationality is, I believe, compatible with their non-reductive realism about group agency. First, and most importantly in the present context, while they argue that there are such things as group agents and group irrationality, they do not take mutual defection in the Prisoner's Dilemma to be a case of group irrationality, nor do they take the fact that mutual defection predictably yields a mutually dispreferred outcome to mean that each individual's preference for defecting is irrational. Therefore, their position does not threaten my insistence that intrapersonal and interpersonal Prisoner's Dilemmas be treated in parallel ways, such that in neither sort of case does an attitude's risk of producing tragic outcomes show that that attitude is irrational. Second, it might be thought that if we sometimes treat groups of individuals as agents capable of rationality or irrationality, so we must also treat groups of time-slices of the same person as agents capable of rationality or irrationality, and that this conflicts with my time-slice-centric approach. But this thought is mistaken, for I do not claim that there are no such things as persons, nor do I claim that temporally extended persons cannot be rational or irrational. I only claim that the fundamental norms of rationality are time-slice-centric in the sense that what attitudes an agent (usually a temporally extended individual agent, but perhaps also a group agent) ought to have at some time depends in no special way on the attitudes that agent has, or believes she has, at other times. This claim is perfectly consistent with the existence of temporally extended individual agents as well as with the more controversial existence of group agents.

irrational. The defender of Time-Slice Rationality will insist that we say the same thing about the intrapersonal Prisoner's Dilemmas found in cases of Diachronic Tragedy (or at least that Diachronic Tragedy does not show the time-slices in question to be irrational; they may be irrational on independent grounds). There are cases where time-slices of the same person act in ways that produce a mutually disadvantageous result without there being any irrationality.

Of course, there are some disanalogies between the interpersonal and intrapersonal Prisoner's Dilemmas. I consider two disanalogies below and argue that they do not threaten my conclusion that the Diachronic Tragedy Argument is question-begging.

First disanalogy: The two prisoners do not care about each other, whereas your t1 self presumably does, and perhaps ought to, care a great deal about your t2 self, and vice versa. But this fact about rational self-concern does not undermine my claim that inter- and intra-personal Prisoner's Dilemmas ought to be treated the same. To begin with, insofar as you rationally ought to care about your future selves, arguably you also rationally ought to care about other people. So if your t1 self rationally ought to care about your t2 self (and vice versa), arguably Prisoner A also rationally ought to care about Prisoner B (and vice versa). Moreover, insofar as you ought to be concerned for your future selves (or, perhaps, for persons psychologically continuous with you), this should be reflected in your current preferences rather than through an add-on principle to the effect that vulnerability to an intrapersonal Prisoner's Dilemma is ipso facto irrational. (Similarly, insofar as Prisoner A ought to be concerned about the well-being of Prisoner B, this should
be reflected in the preferences that A ought to have, not in an add-on principle specifically about the Prisoner's Dilemma.) And importantly, in most of the instances of Diachronic Tragedy discussed above, your t1 and t2 selves do care a great deal about each other. Indeed, in all but three (the case of the Russian Nobleman involving changes in political values, Dougherty's case involving bias toward the future, and the smoking case involving a violation of Preference Reflection), your t1 and t2 selves care about exactly the same things; they have the very same preferences over maximally specific possibilities. So in general, it is false that Diachronic Tragedy results from some lack of self-concern on your part; it is not as though your t1 and t2 selves regard each other as adversaries, as prisoners A and B might. This is especially clear in the cases of Conditionalization and Reflection. There, you are predictably exploitable not because your t1 self doesn't care about your t2 self (or vice versa), but rather because they have conflicting opinions about the optimal way to promote their shared interests. Your t1 self thinks that the best way to make money (and hence promote your future well-being) is to accept the bets offered at t1 (Bets 1 and 2) while declining the bet offered at t2 (Bet 3). But your t2 self thinks that your well-being would best be promoted by declining Bets 1 and 2 and accepting Bet 3. For this reason, the fact that you care about yourself more than others does not entail that intra- and inter-personal Prisoner's Dilemmas should be treated differently.

Second disanalogy: Prisoners A and B are, by stipulation, unable to communicate with each other in order to coordinate their actions. But this is because they are separate agents. By contrast, your different time-slices can, in a sense, communicate and coordinate with each other. But this disanalogy does not threaten my conclusion. If prisoners A and B could communicate, this would still not be enough to ensure that they would both cooperate. A and B could each tell the other that he would cooperate but then renege with the hope of achieving the best outcome in which he defects while the other cooperates. In order to ensure that they each cooperate, they would also need the ability to somehow bind themselves and each other to that course of action. This brings us back to the issue of self-binding mentioned in Chapter . If at t1 you have the ability to self-bind (and are sure that you have this ability), then you rationally ought to bind yourself to the course of action that you deem optimal at t1. For instance, in the Russian Nobleman, if you are able to self-bind then when you are a young liberal, you ought to bind yourself to the course of action of donating to liberal causes now but declining to donate to conservative causes in your old age when your political views have shifted.

In Section  of this chapter, I will address the question of whether you are irrational if you lack the ability to self-bind (and, more generally, whether there are diachronic norms governing intentions) and answer in the negative. But for present purposes, the important thing to note is that if lacking the ability to self-bind (or believing that you might lack this ability) is a defect of rationality, then this verdict would itself threaten the claim that Tragic Attitudes are ipso facto irrational. For if Tragic Attitudes only get you into trouble if you lack the ability to self-bind, we could conclude that any irrationality involved in performing a Tragic Sequence is to be blamed not on the Tragic Attitudes themselves, but instead on your inability to self-bind. This would further support my conclusion that Diachronic Tragedy does not show that Tragic Attitudes are ipso facto irrational.

To sum up this section, cases of Diachronic Tragedy are structurally isomorphic to Prisoner's Dilemmas and should be treated as such. By acting on your beliefs and desires at each time, you wind up with an outcome which is worse than an alternative outcome that you could have obtained, had you acted differently. But as in the Prisoner's Dilemma, this does not show that you were in any way irrational.
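Before considering whether the argument can be rescued, it is worth making the exploitation mentioned above concrete. The bets in the Conditionalization case were specified earlier in the book; the sketch below uses my own stand-in numbers for a standard Lewis-style construction, in which your t1 credence in H given E is 0.75 but, on learning E, you lapse to credence 0.5 in H rather than conditionalizing.

```python
# Stand-in numbers for a Lewis-style diachronic Dutch book; the actual
# Bets 1-3 are specified earlier in the book.
# At t1: P(E) = 0.5 and P(H & E) = 0.375, so P(H | E) = 0.75.
# On learning E you adopt credence 0.5 in H, violating Conditionalization.

def net_payoff(world):
    """Your net gain in a given world: 'E&H', 'E&~H', or '~E'."""
    total = 0.0
    # Bet 1 (accepted at t1): pays $1 if H & E; fair t1 price $0.375.
    total += (1.0 if world == "E&H" else 0.0) - 0.375
    # Bet 2 (accepted at t1): pays $0.50 if ~E; fair t1 price $0.25.
    total += (0.5 if world == "~E" else 0.0) - 0.25
    # Bet 3 (offered only if E is learned): pays $1 if ~H. Your new
    # credence in ~H is 0.5, so at t2 you accept it at a price of $0.50,
    # though by your t1 lights (P(~H | E) = 0.25) it is a bad buy.
    if world != "~E":
        total += (1.0 if world == "E&~H" else 0.0) - 0.5
    return total

for w in ("E&H", "E&~H", "~E"):
    print(w, net_payoff(w))  # -0.125 in every world: a guaranteed loss
```

Each bet is fair by your lights at the time you accept it, yet jointly the bets lose you money however the world turns out; this is exactly the pattern exhibited by a Tragic Sequence.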

. Depragmatization and the No Way Out Argument

The fact that Tragic Attitudes put you at risk of exploitation is only a pragmatic reason not to have these attitudes; it does not show that these attitudes are themselves irrational. At most it shows that you should perhaps try to cause yourself not to have them, or at least not to have them in situations where having them constitutes a risk to your wallet. Moreover, there are many attitudes that are predictably disadvantageous without being irrational. For instance, there is evidence that overrating your own talents increases your chances of success in a wide range of endeavours (Taylor and Brown ()), but this does not mean that it is irrational to have an accurate, evidence-based self-conception!

Compare the dialectic regarding the Synchronic Dutch Book Argument, which is a synchronic analog of the Diachronic Tragedy Argument. This argument purports to show that it is irrational to have credences that violate the probability calculus, since such credences would license you to accept each member of some set of bets which together guarantee you a loss. But arguably this shows at best that it is pragmatically disadvantageous, as opposed to epistemically irrational, to have credences that violate the probability calculus.4 Many philosophers have been persuaded by this criticism, which has led Christensen () and Skyrms (), among others, to try to "depragmatize" the Dutch Book Argument, reinterpreting
 options and diachronic tragedy it so that it really demonstrates the epistemic irrationality of such credences. Christensen, for instance, focuses on the notion of credences or beliefs sanctioning as fair some betting odds. Then, he writes that “if a single set of beliefs sanctions as fair each of a set of betting odds, and that set of odds is defective, then there is something amiss with the beliefs themselves” (). In this way, Christensen seeks to reinterpret the Dutch Book Argument so that it really demonstrates a kind of inconsistency in credences that violate the probability calculus. And Skyrms () interprets vulnerability to a Dutch Book as showing that you evaluate bets differently depending on how they are described, and this is likewise a problem with your credences themselves. I will not seek to evaluate the success of these attempts to depragmatize the Synchronic Dutch Book Argument.5 The question that we are concerned with is whether the Diachronic Tragedy Argument can be similarly depragmatized. Can we argue that Diachronic Tragedy illustrates that the Tragic Attitudes are themselves in some sense inconsistent? Offhand, this is likely to be no easy task. As Christensen () notes, while there is something inconsistent about believing both H and ¬H at one time, there is nothing inconsistent in the diachronic case about believing H at one time and ¬H at another. Here is my best attempt at depragmatizing the Diachronic Tragedy Argument (I will shortly show why it fails). The proponent of the argument should hold that Tragic Attitudes are irrational because, in cases of Diachronic Tragedy, having those attitudes means that you cannot help but do something you rationally ought not to do. Either you will perform a particular action that you rationally ought not to perform, or you will perform a sequence of actions that you rationally ought not to perform. And it is in giving rise to conflicting ought claims—ought claims that cannot all be satisfied—that Tragic Attitudes are in some sense inconsistent. Tragic Attitudes leave you caught in a bind with no way out: The No Way Out Argument P: A set of attitudes is irrational if there are cases where no matter what you do, you will have done something that, given those attitudes, you rationally ought not to have done. P: If you have Tragic Attitudes, then in some cases no matter what you do, you will have done something that, given those attitudes, you rationally ought not to have done. C: Tragic Attitudes are irrational. 5 In Hedden (), I argue that depragmatization is beside the point, since the Synchronic Dutch Book Argument is unsound in any event. Credences can violate the axioms of the probability calculus while provably not licensing you to accept each member of a Dutch Book. Such credences involve what I call “negation incoherence,” in which your credences in a proposition and its negation fail to sum to . Negation incoherent credences have been overlooked in the Synchronic Dutch Book Argument, for the argument is based on the assumption that your credences equal your fair betting quotients, and this entails that your credences are negation coherent.

Why believe P1? Rational agents do not do things that they rationally ought not do. So, if having certain attitudes entails that no matter what you do, you will have done something you rationally ought not to have done, then you cannot be a rational agent and have those attitudes. A set of attitudes is irrational if you cannot be a rational agent and have those attitudes. Hence, a set of attitudes is irrational if no matter what you do, you will have done something that, given those attitudes, you rationally ought not to have done. (Note that there are a number of points at which one might object to my sketch of an argument for P1. Perhaps there are genuine "rational dilemmas" in which even rational agents cannot avoid doing something they ought not to do. And perhaps there are attitudes that are rational, even though no rational agent would have them. For example, it may be that no perfectly rational agent would have the desire to improve her reasoning skills, for the simple reason that she already has excellent reasoning skills, but this does not mean that the desire to improve one's reasoning skills is itself irrational. But let us grant P1 for now.)

Why believe P2? Well, one might argue for P2 by saying that no matter what you do, either you will have performed some particular act you ought not to have performed, or you will have performed some sequence of acts that you ought not to have performed. Consider The Russian Nobleman. You rationally ought not to perform the sequence of acts <Donate Early, Donate Late>, since at all times you preferred performing some other sequence of acts that was available to you. But you rationally ought to perform the particular act Donate Early, since at the time it is available you prefer to perform it. And you rationally ought to perform the particular act Donate Late, since at the time it is available you prefer to perform it. So, you rationally ought to Donate Early, you rationally ought to Donate Late, but you rationally ought not to perform the sequence <Donate Early, Donate Late>. So you cannot do all that you ought to do. For it is logically impossible for you to Donate Early, Donate Late, but not perform the sequence <Donate Early, Donate Late>. (Similarly for the other cases in Chapter .)

But, as I will argue, we should reject P2. First, in light of the account of options defended in the previous chapter, the quick argument for P2 sketched above fails (though P2 could still be true for other reasons). The crucial assumption in that argument-sketch is that the rational ought applies to both particular acts like Donate Early and Donate Late, and also to sequences of acts like the sequence <Donate Early, Donate Late>. If, then, you ought not to perform the sequence but ought to perform each member, then you are stuck with no way out. But in the previous chapter, I defended Options-as-Decisions, a conception of options on which your options at a time consist of all and only the decisions you are able
to make at that time.6 Therefore, neither sequences of acts like <Donate Early, Donate Late>, nor indeed their members, count as among your options, because they are not mental acts of decision-making. So, in particular, because a sequence like <Donate Early, Donate Late> is not among your options, we cannot simply infer from the claim that you prefer not to perform the sequence the claim that you ought not to perform it.7

I don't want to overstate my case. As Douglas Portmore impressed upon me, even on my account, the fact that <Donate Early, Donate Late> is not among your options does not mean that the claim You rationally ought not to perform the sequence <Donate Early, Donate Late> is false. For in Section . I suggested that ought claims applied to acts that are not mental decision-making acts, and hence not among your options, can still be derivatively true. A claim of the form You rationally ought to φ is derivatively true if the mental act of deciding to φ has highest expected utility and would (non-deviantly) cause you to φ. So the claim You rationally ought not to perform the sequence <Donate Early, Donate Late> will be true if the decision not to perform the sequence has highest expected utility and would (non-deviantly) cause you not to perform the sequence <Donate Early, Donate Late>. This will be important in what follows, when I argue that P2 is actually false. But right now I am just pointing out that the quick argument for P2 sketched above fails. And even in light of this complication about derivatively true ought claims, that argument sketch is invalid, for it relies on the inference from the claim that you prefer not to perform a given sequence of acts to the claim that therefore you ought not perform that sequence. This move is invalid. It can be true that you prefer not to perform that sequence but false that you ought not to perform it, for instance if the decision not to perform the sequence does not have highest expected utility, or if making this decision would not (non-deviantly) cause you not to perform the sequence. So, even given my account of derivatively true ought claims, the argument sketched in favor of P2 is invalid.

6 These decisions can have contents of very different types; they can be decisions to perform particular actions, decisions to perform sequences of actions, decisions to defer deliberation until a later time, and the like. But it is the mental acts of making decisions, and not the contents of the decisions, that count as your options.
7 Note also that on my account, sequences of decisions do not count as options for you. Consider a sequence of decisions consisting of decision D1 at t1 and decision D2 at t2. Thinking back to Desiderata  and , it may be that you are able to make each decision at the relevant time. And it may be that whether you are able to make decision D1 at t1 supervenes on your mental state at t1 and that whether you are able to make decision D2 at t2 supervenes on your mental state at t2, but crucially whether you are able to perform the sequence <D1 at t1, D2 at t2> does not supervene on your mental state at t1. This is why this sequence of decisions does not count as an option for you at t1. Whether you can perform this sequence of decisions depends not just on your mental states at t1 but also on facts about how things will be in the future.

Of course, to undercut one argument in favor of P2 is not yet to show that P2 is false. I turn now to that latter task. To show that P2 is false, I need to show that you can have Tragic Attitudes in a case like the Russian Nobleman without doing anything that, given those preferences, you rationally ought not to have done. To do this, let us suppose that you in fact perform the Tragic Sequence of Donating Early and Donating Late. Crucially, given Options-as-Decisions, whether your performing a Tragic Sequence involved your doing anything you rationally ought not to have done depends on whether you made any decision you rationally ought not to have made (or failed to make some decision that you rationally ought to have made). For even in light of the caveat in Section . about derivatively true ought claims, it remains the case that if you make all and only the decisions you rationally ought to make, then you will have done everything that you rationally ought to have done. Suppose you rationally ought to decide to φ, and you in fact decide to φ. If your decision to φ (non-deviantly) causes you to φ, then it will also be derivatively true that you ought to φ, and by supposition you satisfy this ought. By contrast, if your decision to φ fails to (non-deviantly) cause you to φ, then it will not be derivatively true that you ought to φ, so even if you fail to φ, you will still have done all and only the things that you rationally ought to have done.

Let us see, then, how it is possible to perform a Tragic Sequence while making all and only the decisions you rationally ought to make. Here is one such way: You fully believed that you would carry out whichever decision you made (i.e. you believed yourself to be able to self-bind), but you were in fact wrong about this. Suppose that in your youth, you made the decision to perform the sequence <Donate Early, Don't Donate Late>. Given your belief that you would do whatever you decided to do, this was the decision you rationally ought to have made (since you preferred this sequence of actions over any other). But despite having made this decision in your youth and carried out the first part of it (Donating Now), you reopened deliberation later on in life and revised the decision you made in your youth, deciding instead to Donate Late. (The fact that your decision to perform the sequence <Donate Early, Don't Donate Late> was not causally efficacious (due to your inability to self-bind) means that it was not the case that you ought to have performed it, but only that you ought to have decided to perform it.) Your new decision to Donate Late was also one that (having reopened deliberation) you rationally ought to have made, since your older self preferred Donating Later to Not Donating Later and believed that deciding to Donate Late would result in Donating Late. Having carried out this new decision and Donated Later, you wound up performing the Tragic Sequence <Donate Early, Donate Late>. But at no point in the process did you make any decision that you rationally ought
not to have made, given your beliefs and preferences. Your performing the Tragic Sequence was the result not of having made any decision that you rationally ought not to have made, but of having falsely believed that you would carry out whichever decision you made.

Here is a second way: Your performing a Tragic Sequence was the result of failing to believe that you would carry out whichever decision you made (i.e. failing to believe yourself able to self-bind). Suppose that in your youth, you believe that your present decision will make no difference to which action you perform at age sixty; it will only determine whether you would Donate Early or not. In this case, three decisions are tied for best: (i) the decision to perform the sequence <Donate Early, Donate Late>, (ii) the decision to perform the sequence <Donate Early, Don't Donate Late>, and (iii) the decision to Donate Early (deferring until age sixty the question of whether to then Donate Late). Suppose you make the third decision and carry it out. This decision is perfectly rational in light of your beliefs and preferences. Then, at age sixty you have to deliberate anew and ultimately decide to Donate Late. This new decision is also perfectly rational, since at that point you prefer to Donate Late. You thus wind up performing the Tragic Sequence <Donate Early, Donate Late>, but at no point did you make a decision that you rationally ought not to have made. Your decision in your youth to Donate Early was perfectly rational (given your belief that you could not control your sixty-year-old self), as was your decision at age sixty to Donate Late. Your performing the Tragic Sequence was the result not of having made a decision that you rationally ought not to have made, but of having failed to believe that you would carry out whichever decision you made.8

8 The considerations I raise here amount to a defense of what has been called "sophisticated choice." McClennen () distinguishes three different ways of evaluating options in situations where you face a sequence of choice points and anticipate that your preferences may change in the middle of the sequence. The three methods of evaluation are myopic choice, sophisticated choice, and resolute choice. The differences between these three methods of evaluation are most easily seen not in an intrapersonal Prisoner's Dilemma, but rather in a case of anticipated weakness of will. In the case of Professor Procrastinate (Jackson and Pargetter ()), you are offered the opportunity to write a tenure report for the department which will be due on Friday. The best thing would be for you to accept the offer and then write the report. The worst thing would be for you to accept the offer but then fail to write the report. Declining the offer would be somewhere in the middle. Unfortunately, you believe that if you accept the offer, you will ultimately fail to write the report on time. If you are a myopic chooser, you will accept the offer, ignoring your belief that you will later procrastinate; you will "short-sightedly fail to trace out the implications of the situation for what you will do in the future" (, ). If you are a sophisticated chooser, you will decline the offer, for "To be a sophisticated chooser is first to project what you will prefer, and thus how you will choose, in the future, and then reject any plan that can be shown, by such a projection, to be one that you would subsequently end up abandoning" (, ). But if you are a resolute chooser, you will accept the offer and then go on to write the report on time; a resolute chooser "can choose sequentially in such a way that he or she chooses subsequently by reference backward to the plan that was previously chosen" (, ). (If you are resolute, what will you do in the case of the Russian Nobleman? Will you Donate
Early and then Not Donate Late, effectively carrying out the plan that your youthful self most prefers? Or will you Not Donate Early and then Not Donate Late, thereby executing a sort of compromise between your earlier and later selves? From McClennen's gloss on resolute choice, I cannot tell. His emphasis on always choosing in accordance with previously chosen plans suggests the former; if you know you are a resolute chooser, you will execute the plan that your earlier self most prefers. But McClennen also uses his defense of resolute choice to defend the rationality of cooperating in the interpersonal Prisoner's Dilemma, and Not Donating Early and then Not Donating Late is the analog of both prisoners cooperating. I think that it is a problem for McClennen's defense of resolute choice that it is unclear what it has to say about a lot of specific cases.) McClennen rightly notes that resolute choosers will typically wind up better off than myopic or even sophisticated choosers. But the fact that resolute choosers do better than sophisticated choosers does not show that resolute choice is rationally required. An attitude or method of choice can be irrational despite predictably yielding better consequences for the agent. Moreover, resolute choice, while beneficial when successfully executed, can be highly irrational, and for fairly straightforward reasons. If you have serious doubts about whether you will choose resolutely at the later stage, it seems crazy to act on the assumption that you will do so. If you very much doubt that you will write the report, it wouldn't make sense for you to nevertheless accept the offer. Similarly, at the later stage, choosing resolutely requires you to act against your preferences (unless you have a preference for executing previously formed intentions), which seems paradigmatically irrational.

In sum, given Options-as-Decisions, P2 is false. To begin with, we cannot immediately infer from your performing a Tragic Sequence that you did something you rationally ought not to have done, since sequences of actions are not options. This is analogous to the common denial that in the Prisoner's Dilemma the prisoners, by each defecting, perform some "group action" which is irrational; just as there are no such group actions subject to rational evaluation, so I am arguing that sequences of actions are not (non-derivatively) subject to rational evaluation. Whether your performing a Tragic Sequence involved your doing anything you rationally ought not to have done depends on whether the individual decisions leading to that Tragic Sequence were themselves irrational. But in a wide range of cases your performing a Tragic Sequence is instead the result of a sequence of perfectly rational decisions. So P2 is false, and the most promising way of depragmatizing the Diachronic Tragedy Argument fails. Absent some other, more compelling argument in their favor,9 this means we can reject the non-time-slice-centric diachronic and reflection principles without embarrassment. The task of the next chapter will be to replace them with superior time-slice-centric principles. But first, I conclude this chapter with a discussion of intentions and their stability over time.

9 I discuss the epistemic utility argument for Conditionalization in footnote  of the next chapter.

. Rationality and the Stability of Intentions

I have made reference to so-called self-binding at various points in the discussion of Diachronic Tragedy. At this point, having concluded my rebuttal of the Diachronic Tragedy Argument, I want to address self-binding itself. Arntzenius
et al. () do not say much about how to understand self-binding, other than that it involves some sort of causal control over your later self. I suggest that the most natural way to understand self-binding may be in terms of intentions. Being able to self-bind is being able to form intentions that are effective in governing your later actions. Being unable to self-bind is being unable to form such effective intentions, either because you cannot form intentions at all, or because you sometimes fail to execute the intentions you do form.

Intentions are related to decisions, which play an important role in my view of practical rationality by constituting a decision-maker's options. A decision might even just be the formation of an intention, as Bratman () and Mele (), among others, hold. There is a worry that perhaps intentions are more "heavy-duty" than decisions. Earlier, I decided to take a short break from writing, but offhand it seems odd to say that I formed an intention to take a break. But perhaps this is just odd and not strictly false. For instance, Mele (, ) distinguishes between proximal intentions (intentions to do something right away) and distal intentions (intentions to do something in the future). On this way of thinking, I did in fact form an intention to take a break, but it was a proximal rather than a distal intention. In any event, I am thinking of decisions in a thin way, as a kind of mental act that precedes every intentional action. Whether I can go along with Bratman and Mele and regard decisions as acts of intention-formation then depends on whether some intentions (such as proximal intentions) can be understood in a similarly thin way. I think of intentions as genuine mental states (and decisions as genuine mental acts), and probably not reducible to belief-desire complexes of some sort. And I also endorse synchronic norms for decisions and intentions, such as norms of consistency (e.g. that you not intend to φ and intend to ¬φ) and, most importantly, the norm generated by decision theory stating that you ought to make the decision with highest expected utility. Do we also need diachronic norms here?

Intentions play an important role in our mental economies. Bratman's () influential account focuses on the stabilizing role that they play in our deliberations (this is most plausible in the case of distal intentions, and so I focus on them in what follows). As Holton (), following Bratman, writes, "Intentions stand as fixed points in our reasoning." If we had to deliberate anew at every moment or every choice point, we would incur substantial costs in time and cognitive effort. Moreover, they allow us to deliberate now, when conditions for deliberation are favorable (when it is quiet and we are clear-headed, for instance) and store the result of that deliberation in the form of an intention, rather than having to deliberate just before the time to decide arrives, when conditions might be less favorable. In this way, intentions
serve as mental sticky notes. Finally, intentions can help us to preemptively resist anticipated temptation. If I want to go to the bar with friends but don't want to drink, I am often well-served by forming an intention not to drink prior to entering the bar (though to say that intentions are helpful in this regard is not to say exactly how they manage to play this helpful role). On this way of thinking, the stabilizing role of intentions is important to us primarily because we are limited agents. If we were ideally rational, we would automatically make the optimal decision without having to waste time and effort on conscious deliberation, and we wouldn't be susceptible to the sorts of temptations that prove so threatening to ordinary agents.10

The stabilizing role of intentions is a diachronic one: intentions are useful because of their tendency to persist over time. But the fact that intentions have this diachronic role does not threaten Time-Slice Rationality, unless the stabilizing role of intentions is undergirded by diachronic norms governing intentions. But is it? Broome (, ) proposes a diachronic norm for intentions that, in effect, says that it is a rational requirement that intentions display a certain stability over time:

Persistence of Intention: If t1 is earlier than t2, rationality requires of N that, if N intends at t1 to F, and no cancelling event occurs between t1 and t2, then either N intends at t2 to F, or N considers at t2 whether to F.

You must retain at t2 previously formed intentions unless a cancelling event occurs or you begin to reconsider the issue at t2. Broome does not give a definition of cancelling event, but examples include coming to believe that you have already executed your intention, coming to believe that it is impossible to execute your intention, and having already reconsidered your intention prior to t2. Other authors such as Holton () endorse related diachronic norms for intentions. In particular, Holton holds that in many cases, it is rational to stick with an intention and not reconsider it, even though, were you to reconsider, it would be rational for you to revise and come to have some other intention instead. The important common ground between these authors is that there are diachronic norms on intentions that ground their stabilizing role.
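For concreteness, Broome's requirement can be stated as a simple check. The four flags below are my own stand-in representation of a particular case history; nothing in Broome's formulation depends on them.

```python
# A minimal encoding of Broome's Persistence of Intention requirement.
# The boolean flags describing the case are stand-in assumptions.

def violates_persistence(intends_at_t1, cancelling_event_occurred,
                         intends_at_t2, considers_at_t2):
    """N violates the requirement iff N intends at t1 to F, no cancelling
    event occurs between t1 and t2, and at t2 N neither intends to F nor
    considers whether to F."""
    return (intends_at_t1 and not cancelling_event_occurred
            and not intends_at_t2 and not considers_at_t2)

# Simply dropping the intention, without reconsideration, violates it:
print(violates_persistence(True, False, False, False))  # True
# Dropping it after a cancelling event (say, believing F is already
# done) does not:
print(violates_persistence(True, True, False, False))   # False
```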

10 Bratman () emphasizes some roles played by intentions that do not obviously depend for their usefulness on our being cognitively limited agents, including a social role in helping to coordinate joint action. But if I am right that we can account for the stabilizing role of intentions without appeal to diachronic norms for intentions (see below), it is also likely that we can account for the social coordination role of intentions along similar lines.

But there is an alternative picture that I want to propose, on which the stability of intentions is a mere causal fact (albeit a very useful and important one!) that does not need to be buttressed by a rational norm.11 It is in the nature of intentions to tend to persist over time, and this persistence is at least much of what makes them useful parts of our mental toolkit, but we do not also need to say that you are rationally required to retain your intentions in the absence of reconsideration or cancelling events. Suppose that to intend to φ is in part to be disposed to φ, a claim that Broome () endorses. If so, then intentions must at least tend to persist over time, else they wouldn't tend to cause you to satisfy their contents, contra the assumption that they are in part dispositions to satisfy their contents.

Similarly, consider Holton's view that it can be rational not to reconsider an intention, even when you would rationally come to drop the intention if you were to reconsider it.12 If we think of intentions as tools we have that are useful in virtue of their causal powers, we can accommodate Holton's insight without appeal to diachronic norms. Sometimes, if you find yourself in a certain causal state (e.g. the state of having an intention), it can be rational not to tinker with that causal state (e.g. reconsider the intention) even though, if you were to tinker with it, it would yield a different outcome. We don't need diachronic norms to say why you shouldn't tinker with that causal state, since you know that that sort of causal state typically results in good outcomes, whereas tinkering with it typically leads to problems. After all, if you have background evidence that you typically are rational in what intentions you form, and reconsidering intentions often stems from temptation, it will be rational not to re-open deliberation unless you have strong evidence that this case is different.

I have no knock-down argument for favoring my stance, which grounds the stability of intentions in facts about the causal role of intentions rather than in diachronic norms, over alternatives like Broome's or Holton's, but it is certainly a live option and one that it is quite natural to adopt insofar as you are sympathetic to the overall time-slice-centric picture I advocate. In fact, in his discussion, Broome already points the way toward my preferred stance on the stability of intentions. He observes that dropping an intention resembles forgetting, and that in some cases the failure of an intention to persist is

11 As Michael Bratman informed me, this is a version of what he calls the "modest extension of the belief-desire model" (Bratman (, )).
12 In a related vein, Bratman (, ) writes that "an intention at which you have sensibly and confidently arrived earlier is a rational default, though a default that is normally overridden if—perhaps by way of new information—you newly come to take your grounds, as specified by your practical standpoint, strictly to favor an incompatible alternative."
in fact due to forgetting.13 And he notes that one might not want to class failures of memory as instances of irrationality. Indeed, I want to say the same thing about failures of intentions to persist as I have already said (Chapter ) about forgetting. Both are disadvantageous and suboptimal, but suboptimality must be distinguished from irrationality. Of course, to say that failures to persist in an intention needn't be irrational is not to say that they are never irrational. And indeed, many cases in which you drop an intention will count as irrational even on time-slice-centric grounds, as when the mere sight of a beer causes you to drop your intention not to drink without changing either your underlying preferences or your beliefs about the best way to satisfy those preferences.14

My stance on the stability of intentions mirrors what Arntzenius et al. () say about self-binding. They hold that the ability to self-bind is beneficial but that lacking it does not entail falling short of perfect rationality. For them, the ability to self-bind is like the ability to run a four-minute mile; it's great to have that ability, but this doesn't mean that lacking it constitutes some rational failing on your part. In discussing the preference for dominant options in infinitary cases (see Chapter , Section ..), Arntzenius et al. argue that if you lack the ability to self-bind, then having this preference will leave you vulnerable to exploitation, but that this does not mean that either the preference for dominant options or the inability to self-bind is therefore irrational. I think this is right. But I think that their conclusion holds not just for the infinitary cases which are their specific concern, but for all cases in which the inability to self-bind proves disadvantageous. The ability to self-bind would let you avoid misfortune, but having this ability is not rationally required.

13 More exactly, Broome (, ) writes that "A failure of persistence is a sort of forgetting." Regardless, the important thing is that failures of intentions to persist are relevantly similar to instances of forgetting.
14 Consider also the sort of reasoning that often accompanies giving in to temptation—that giving in just this once won't make a difference. Having previously formed the intention to go to the gym this morning, you wake up groggy and tell yourself that skipping the gym just this once won't make the difference between staying (or getting) in shape and not. In most instances, this sort of reasoning is just plain wrong. For one thing, skipping this time will probably make you more likely to skip in the future, not just because of setting a precedent, but because the better shape you are in, the more enjoyable (or less tortuous) exercise is, and the more likely you are to do it. But even if this isn't so—if skipping this morning has no causal influence on your future exercise practices—going to the gym this morning does make a difference. It might not affect which side of the arbitrary "in shape"/"out of shape" line you fall on, but it will subtly affect the underlying physical facts that determine how "in shape" you are, and it seems to me that it is rational to care about whether you count as "in shape" only insofar as you care about the underlying physical facts that affect things such as life expectancy, appearance, and the like. And because going to the gym just this once will make a difference to these underlying physical facts, albeit in a minor way, it is just a mistake to say that skipping the gym just this once won't make a difference. See Kagan () for excellent discussion of the mistakes involved in this sort of reasoning.
While I have suggested that the suboptimality of failures of intentions to persist over time needn't constitute irrationality, it does mean that it will often be rational to take steps to improve the effectiveness of your intentions, just as the suboptimality of forgetting means that it will often be rational to try to improve your memory. And one way to improve the effectiveness of your intentions in the future is to execute the intentions you have right now. This is borne out by empirical research. Baumeister's research suggests that willpower is like a muscle in many respects, including that exercise makes it stronger (see his () for an overview). By exercising willpower (which can take many forms, only one of which is maintaining and executing previously formed intentions), you can cause yourself to have greater willpower in the future. Muraven, Baumeister, and Tice () found that participants in a study who exercised willpower over a two week period by standing up straight, recording their diet, and regulating their emotions performed better on laboratory measures of willpower and self-control than participants who had not engaged in such willpower exercises. The flip-side is that sometimes exercising willpower can be counterproductive, since deploying that willpower now can deplete your reserves and leave you less able to resist temptation in the near future. Baumeister, Bratslavsky, Muraven, and Tice () performed an experiment in which some participants were seated in front of a tray of cookies but told to only eat from a bowl of radishes, while participants in control groups were either allowed to eat the cookies as well or were not seated in front of any food. The first group, who had to exercise willpower to keep from eating the cookies, then gave up much faster on a subsequent task measuring willpower.

One lesson one might take from Baumeister's research is that, if we think of intentions and willpower as tools we can employ to serve our other ends, there will be no exceptionless, blanket claim about when employing them will be in our interests. Sometimes it will be advantageous to employ them, both to achieve our immediate aims and to strengthen our willpower for the future, while other times it will be advantageous not to (either by not forming intentions, not retaining them, or otherwise giving in to temptation) to conserve our willpower for later use. I take this to be more grist for my mill that there are no diachronic norms for intentions, and that whether and when one rationally ought to form, drop, or retain an intention must be determined on a case-by-case basis using the tools we already have for thinking about instrumental rationality. If this approach is right, and how to deploy intentions is something governed by standard expected utility considerations (or by your favorite alternative theory), then the fact that intentions are useful in virtue of their stabilizing role does not pose a threat to the general time-slice-centric approach to rationality advocated in this book.
Replacing Diachronic Principles

. Replacing Conditionalization

Conditionalization is widely accepted as a diachronic principle by Bayesian epistemologists. But as I argued in Chapter , it is untenable. Here I argue that it should be replaced by a synchronic principle which does much of the work that Conditionalization was intended to do and which is compatible with Time-Slice Rationality.
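For reference in what follows: Conditionalization says that your new credence function, upon learning evidence E, should be your old credence function conditional on E. Here is a minimal sketch over a finite space of worlds; the particular prior and evidence are illustrative stand-ins.

```python
# Conditionalization over a finite set of worlds. The prior and the
# evidence are illustrative stand-ins.

prior = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
E = {"w1", "w3"}  # the evidence rules out w2

def conditionalize(credences, evidence):
    """Return the old credences conditionalized on the evidence."""
    p_evidence = sum(p for w, p in credences.items() if w in evidence)
    return {w: (p / p_evidence if w in evidence else 0.0)
            for w, p in credences.items()}

posterior = conditionalize(prior, E)
print(posterior)  # {'w1': 0.2857..., 'w2': 0.0, 'w3': 0.7142...}
```

The diachronic claim under dispute is not this formula itself but the requirement that your credences at the later time in fact be related to your earlier credences in this way.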

.. Uniqueness

Let us start by stepping back and thinking about why one might have thought that diachronic principles were necessary in the first place. Many epistemologists believe in Permissivism, the claim that given a body of total evidence, there are multiple doxastic states that it is rationally permissible for one to be in. But if we take as a datum that rational agents have beliefs that evolve steadily over time in response to evidence rather than fluctuating wildly (as in the case of Fickle Frank), then Permissivists must invoke some further principle to prohibit you from switching around between these multiple rationally permissible doxastic states. Diachronic principles like Conditionalization fit the bill. In effect, Conditionalization says that once you opt for one of the permissible prior probability functions, you have to stick with it and update with respect to that function when you gain evidence.

How can defenders of Time-Slice Rationality avoid the need for diachronic principles while still respecting the datum that wildly fluctuating beliefs are (ceteris paribus) irrational? One way is to replace Conditionalization with a claim about rational dispositions and say that at each particular time, you ought not to have a disposition or policy of abandoning your current credences in favor of other credences. On this view, it is permissible to fail to conditionalize (e.g. due to forgetting or a change of heart), but it would be irrational to plan or be disposed to do so. This claim about rational dispositions is a purely synchronic constraint, as it says only what dispositions you ought to have at a time. (Note, however, that
since dispositions can fail to manifest themselves, this view will sometimes allow dramatic fluctuations in belief.)

Alternatively, the defender of Time-Slice Rationality could say that while there is no epistemic reason to update in a certain way, there are good practical reasons for trying to cause yourself to be a conditionalizer, for being a conditionalizer will likely better serve your interests than not being one (as we saw in Chapter ). On this proposal, there is not a diachronic principle of practical rationality that says that you ought to update by conditionalization, but rather at a time it may be rational for you to attempt to do things that will cause your later selves to be disposed to conditionalize, since trying to cause yourself to be a conditionalizer may have higher expected utility than other options.1

But my preferred version of Time-Slice Rationality starts by abandoning Permissivism in favor of Uniqueness:

Uniqueness: Given a body of total evidence, there is a unique doxastic state that it is rational to be in.2

I will not attempt a full defense of Uniqueness here, but will instead settle for giving some preliminary considerations in its favor.3

First, Uniqueness captures the intuition that rationality is incompatible with thinking that your beliefs are arbitrary. For Permissivists, the only reason you ought to be in your actual doxastic state, instead of one of the other permissible ones, is that you happen to have gone for a certain set of credences (your priors, in the jargon) sometime in the past. But this fact about your priors is a mere historical accident.

Second, what you ought to believe is determined by your reasons. In the practical sphere, it may be permissible to go beyond your reasons. For instance, in the case of Buridan's Ass, it is rationally permissible to just go for one bale of

1 Greaves and Wallace () argue that obeying Conditionalization maximizes expected utility, and their argument is thus suggestive with regard to the present consideration. But showing that obeying Conditionalization maximizes expected utility is different from showing that performing actions to attempt to cause yourself to obey Conditionalization likewise maximizes expected utility.

2 A clarification: Uniqueness says that if two people have the same total evidence but different doxastic states, then at least one of their doxastic states is not rational. It does not say that given a body of total evidence, there is a unique set of beliefs such that they would be rational if you were to form them. For it might be that your evidence supports a given proposition, even though if you were to form a belief in that proposition, this would involve your evidence changing in such a way as to no longer support that proposition. Note also that if the fundamental facts about evidential support are contingent (e.g. if they depend on contingent facts about what the perfectly natural properties are), then Uniqueness must be stated in a world-indexed way, saying that no two people in the same world can be such that they have the same total evidence and different doxastic states, without at least one of their doxastic states failing to be rational. Thanks to John Hawthorne for discussion of these points.

3 See White () for more extensive arguments in favor of Uniqueness, and Meacham () for a rebuttal.


hay over the other, even though the reasons are equally balanced in favor of going for each particular bale. But epistemic rationality is about responding to evidence, not going beyond your reasons. Practical rationality may be an active endeavor, but epistemic rationality is a passive one. If this thought is correct, then we have an argument against Permissivism. For in a case where, by the Permissivist's lights, doxastic states D1 and D2 are both permissible, you would be going beyond your reasons (your evidence plus any a priori reasons you might have) in plumping for D1 over D2, or vice versa. In this way, Permissivism conflicts with the claim that epistemic rationality is about passively responding to epistemic reasons, where what epistemic reasons you have is determined entirely by what your evidence is.

In addition to facing objections, Permissivism is undermotivated. Much of the appeal of Permissivism comes from intuitive case judgments. Consider Rosen (, ):

It should be obvious that reasonable people can disagree, even when confronted with a single body of evidence. When a jury or a court is divided in a difficult case, the mere fact of disagreement does not mean that someone is being unreasonable. Paleontologists disagree about what killed the dinosaurs. And while it is possible that most of the parties to this dispute are irrational, this need not be the case.

Rosen’s point is that people often disagree about what the evidence supports, but it would be rash to say that therefore one of them is being irrational. But Rosen is too quick here. First, the defender of Uniqueness holds only that if two people disagree despite having the same total evidence, then at least one diverges from ideal rationality. But saying that someone fails to meet the demanding standard of ideal rationality is not to say that that person is crazy. So while Rosen is right that none of the jurors need be irrational, in the sense of being significantly less rational than the rest of us, this doesn’t mean that none need be irrational, in the sense of failing to be ideally rational. Second, even though the jurors might share the same evidence, in the sense of having seen the same presentations by the defense and the prosecution, this does not mean that they share the same total evidence. For a juror’s total evidence includes not only the evidence presented in court, but also her background knowledge, memories, and the like.4 To get a counterexample to Uniqueness, we need a case where people with the same total evidence disagree without any of them being irrational. Third, it is plausible that once the jurors, or the paleontologists, learn about their disagreement, they should converge in their opinions. This is controversial,5 but if it’s right, then once they share their total 4

Goldman () makes this point. This may follow from certain versions of the Equal Weight View of disagreement. See Elga () for a defense. See Kelly () for an argument against. 5


evidence, which includes evidence of which conclusions each initially arrived at, they really ought to have the same opinion about the matter at hand.6

Now, I suspect that many epistemologists might object to Uniqueness because, when cashed out in an orthodox Bayesian framework, Uniqueness entails the existence of a uniquely rational prior probability function, caricatured as some "Magic Probability Function in the Sky."7 What could possibly privilege a single probability function as uniquely rational? But note that Uniqueness is compatible with its being indeterminate what your evidence supports (or what your evidence is). In such cases, it is indeterminate what you ought to believe. But its being indeterminate what you ought to believe is quite different from its being determinately the case that there are multiple permissible doxastic states, given the same evidence.8 Second, Uniqueness can be divorced from the assumption that rational doxastic states must be precise and representable by a single probability function. Defenders of Uniqueness could appeal to imprecise credences, which we have already seen in Chapter . More on imprecise credences shortly. Third,

6 One might also argue for Permissivism on the grounds that even when your evidence supports P, it is permissible not to form the belief that P if the question of whether P is irrelevant to your concerns. Harman's () principle of Clutter Avoidance states that "one should not clutter one's mind with trivialities" (). It is possible to modify Uniqueness to allow for Harman's Clutter Avoidance principle. A modified version of Uniqueness would say that, given a body of total evidence, there is a unique set of propositions which are supported by your evidence and hence which are rationally permissible for you to believe. But in my view, the Clutter Avoidance principle is misguided. First, as Harman notes, the Clutter Avoidance principle assumes that beliefs are explicitly represented in the mind as token sentences in a language of thought which are stored in some sort of "belief box" (Fodor ()). Then, given that our minds are finite, there are finitely many token sentences we can have in our belief boxes, and so we must choose wisely which sentences to token. But I am skeptical whether this is the right way to think about belief. On a more dispositionalist view of belief, what it is to believe a proposition is, roughly, to be disposed to act in ways that would satisfy your desires if that proposition were true (see esp. Lewis () and Stalnaker ()). On such a view, there need be no limit on the number of propositions it is possible to believe. Second, motivating the Clutter Avoidance principle presupposes that what it is rational to believe depends not only on what your evidence supports but also on what your practical interests are. But while practical considerations may affect whether it is rational for you to perform mental acts like explicitly thinking about P or coming to believe P occurrently, rather than merely dispositionally, it is doubtful whether practical considerations affect what you ought to believe simpliciter (Kelly ()).

7 Kelly () and Schoenfield () raise this sort of objection against Uniqueness.
8 Similarly, those tempted by contextualism about knowledge might also be tempted by the thought that it is a context-sensitive matter what the evidence supports, or what the evidence is (this would follow, for instance, if we combine contextualism about knowledge with Williamson’s () claim that your evidence is your knowledge). If this is so, it will be a contextual matter what you ought to believe (or, more carefully, it will be the case that which proposition is expressed by a sentence of the form “S ought to believe P” will vary by context). But its being a contextual matter what you ought to believe is different from its being true in a context that there are multiple doxastic states that it is rationally permissible for you to be in (or, more carefully, it is different from there being a context in which the sentence “There are multiple doxastic states that it is rational for you to be in” expresses a truth).


while some opponents of Uniqueness may object that the uniquely rational prior probability function cannot be singled out and justified in purely formal terms, moderate versions of Permissivism are in the same boat in needing to appeal to substantive, rather than purely formal, constraints on rational priors. We can see this by looking at different forms that Permissivism might take.

Different versions of Permissivism vary in how permissive they are. In a Bayesian framework, the most extreme form of Permissivism holds that the only constraints on rational prior probability functions are formal constraints, including the axioms of the probability calculus and perhaps a few additional constraints such as the Principal Principle and Regularity. (The Principal Principle says that your credence in H, conditional on H's objective chance being n, ought to equal n. Regularity says that your prior probability function should not assign probability 0 to any contingent propositions; it should not rule out any possibilities in advance of inquiry.)

But extreme Permissivist views like these, where there are only formal constraints on rational credences, will allow all kinds of intuitively irrational credences to count as perfectly rational. Consider the skeptic. At least one version of the skeptic is someone for whom the proposition that it appears to her that she has hands and the proposition that she in fact has hands are probabilistically independent (and similarly for other external world propositions). There is no reason to think that the skeptic's credences must violate the axioms of the probability calculus, the Principal Principle, or Regularity.9 But in my view, the skeptic is not just wrong but irrational, and if so, extreme Permissivism is false. I concede that it is controversial whether the skeptic is irrational, however, so consider also the counterinductivist, who commits what might be termed a global gambler's fallacy. For the counterinductivist, the fact that all the emeralds observed so far have been green is great evidence that the next one will not be green. And the fact that counterinductivism has worked miserably in the past means that counterinductivism is likely to get things right from now on. Again, there is every reason to think that the counterinductivist's credences could satisfy the axioms, the Principal Principle, and Regularity.10 But the counterinductivist is irrational.

Now, at least the skeptic and the counterinductivist have credences that have a certain sort of structure and comprehensibility. So if you're still not convinced that extreme Permissivism must deem rational some clearly irrational credences, we can also construct credence

9 The skeptic will be quite agnostic about what the objective chances are, but this is consistent with satisfying the Principal Principle.

10 The counterinductivist will have very odd views about what the chances are, but the Principal Principle does not tell you what to believe about the chance, but rather how your credences in other propositions should be related to your credences about chances.
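For reference, the two formal constraints glossed in the main text above can be stated compactly. This is a standard formalization (a sketch, writing C for a prior credence function and ch for the objective chance function, and setting aside Lewis's admissibility qualifications):

\[
\text{Principal Principle:}\quad C(H \mid \mathit{ch}(H) = n) = n
\]
\[
\text{Regularity:}\quad C(A) > 0 \ \text{for every contingent proposition } A
\]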


functions which satisfy all of these formal constraints but are otherwise totally random. Just go through each proposition and randomly assign it a real number between 0 and 1, making sure only that each assignment is consistent with the axioms, the Principal Principle, and Regularity. Such a credence function will be even more intuitively irrational than that of the skeptic or the counterinductivist, and yet will be deemed rational by extreme Permissivism.

It is doubtful whether we could just supplement probabilistic coherence, the Principal Principle, and Regularity with a finite number of other formal principles which would together deem irrational all the credence functions that are intuitively irrational, like those of the skeptic, the counterinductivist, and the randomly constructed credences mentioned in the last paragraph. This is one lesson of the failure of Carnap's project of Inductive Logic (Carnap ()). Carnap sought a finitely and formally specifiable system which would assign to each proposition a degree of confirmation, given the evidence. But most philosophers became convinced (in part due to the arguments of Goodman ()) that this project was doomed to failure. It is also a lesson of the repeated failures to come up with a consistent Principle of Indifference, which states that where your evidence does not discriminate between the members of a set of possibilities, you should assign equal credence to each of them (see van Fraassen () for discussion). Formal principles alone are not enough to proscribe all intuitively irrational credences.

If we want to rule out intuitively crazy credence functions as irrational, we need substantive (i.e. non-formal) constraints on rational priors in addition to purely formal constraints. For instance, rational credence functions perhaps must favor simplicity, assigning higher credences to simpler hypotheses. And perhaps they must embody a sort of Inference to the Best Explanation (Harman ()), so that one's conditional credence in a hypothesis given the data is higher, the better the hypothesis is as an explanation of the data.11 And they must project natural properties instead of unnatural grue-like properties. And so on. Assuming that simplicity, explanatory adequacy, and naturalness cannot be measured formally, these additional constraints will not be able to be captured with simple equations in the way that the injunction to proportion your credences to believed objective chances is captured by the Principal Principle, say.

It seems to me that here lies the best argument for Permissivism, and it supports a more moderate form of Permissivism than that which we just considered.12

11 See Weisberg (b) for discussion of how to incorporate Inference to the Best Explanation in the Bayesian framework. Weisberg's proposal is essentially what I have just suggested—making the conditional credence of a hypothesis given a body of evidence proportional to the degree to which the hypothesis is a good explanation of that evidence.

12 I first heard about this sort of argument for Permissivism from Miriam Schoenfield (p.c.).
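To see how cheap the purely formal constraints are, here is a minimal toy illustration, assuming a finite space of worlds. Any assignment of positive normalized weights to worlds induces a probabilistically coherent credence function over every proposition, and Regularity holds automatically. (The Principal Principle is idle in this toy algebra, since it contains no chance propositions.)

    import random
    from itertools import chain, combinations

    # Toy space of worlds; propositions are modeled as subsets of this space.
    worlds = ["w1", "w2", "w3", "w4"]

    # Randomly weight each world and normalize. Additivity is then automatic,
    # and Regularity holds because every world receives positive probability,
    # so no contingent proposition is assigned probability 0.
    weights = {w: random.uniform(0.01, 1.0) for w in worlds}
    total = sum(weights.values())
    prior = {w: weights[w] / total for w in worlds}

    def credence(proposition):
        # A proposition's credence is the summed weight of the worlds in it.
        return sum(prior[w] for w in proposition)

    # Enumerate every proposition and print its random but coherent credence.
    propositions = chain.from_iterable(
        combinations(worlds, r) for r in range(len(worlds) + 1)
    )
    for prop in propositions:
        print(set(prop), round(credence(prop), 3))

Nothing in the formal constraints distinguishes one run of this randomization from another; that is the sense in which substantive constraints are needed to rule out crazy priors.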


Arguably, there are multiple "epistemic values" that should be reflected in rational priors—simplicity, explanatoriness, naturalness, etc. But these epistemic values can sometimes conflict.13 For example, sometimes the simplest hypotheses are not the most explanatory. And when these values conflict, there is no privileged way of weighing them against each other to come up with a uniquely rational credence function. Instead, there are a variety of different ways, some of which give great weight to simplicity and naturalness and less weight to explanatoriness, some of which treat explanatoriness as all-important, and so on. Each of these different rationally permissible ways of assigning weights to the various epistemic values results in a different prior probability function. So there is a set S of probability functions, each of which is permissible to have as your priors.

But the hypothesis of competing epistemic values does not really support Permissivism as such. Rather, it opposes one particular version of Uniqueness which holds that there is a precise prior probability function that is determinately the uniquely rational one. In my view, if there really are these epistemic values and no privileged way of weighing them up, this really supports a version of Uniqueness that appeals to indeterminacy, or coarse-grained doxastic states, or both.

Start with indeterminacy. It might be determinately the case that there is a uniquely privileged way of trading off the competing epistemic values against each other, but indeterminate what this privileged way is. Thinking of prior probability functions as embodying different ways of making these trade-offs, this proposal would have it that it is determinately the case that there is a unique precise probability function that it is rational to have as your priors, but it is indeterminate which probability function it is. Compare: it may be determinately the case that there is a precise cut-off between red and non-red, but indeterminate what this precise cut-off is. Where the Permissivist says that each member of set S of probability functions is rationally permissible to have as your priors, the defender of Uniqueness can say that it is determinately the case that just one member of S is rational, even though it is indeterminate which member of S is the rational one and no member of S is determinately irrational.

Second, rational doxastic states might be more coarse-grained. Instead of its being the case that you have to pick one probability function out of the set S to be your own, you ought to be "mushy" over all the members of S. Then, in the absence of any evidence, you ought to be in a coarse-grained doxastic state represented

13 Moreover, in some cases some of the epistemic values may not even be applicable. Rachael Briggs (p.c.) points out that symmetry considerations, for instance, may only make sense with respect to certain sorts of physical setups or certain sorts of repeatable trials.


by a set of probability functions (which, following van Fraassen (), is called your representor), in particular the set S resulting from all the different permissible ways of assigning weights to the various competing epistemic values. And if your present total evidence is E, your representor ought to be the set consisting of each member of S conditionalized on E. (Note that it may be necessary to introduce indeterminacy into the imprecise credences picture, as it may be indeterminate exactly which credence functions ought to be members of your representor; this is an analog of the problem of higher-order vagueness.)

Interestingly, the main objection to imprecise credences is that they threaten to leave you vulnerable to predictable exploitation in the form of Diachronic Tragedy. In Chapter  we saw Elga's () argument that, given a plausible decision theory for imprecise credences, acting on the basis of imprecise credences will in some cases permit you to perform each member of a sequence of actions, even though at all times you prefer performing none of the members of that sequence to performing all of them. But I also showed that Time-Slice Rationality provides a well-motivated rebuttal of all Diachronic Tragedy Arguments. So the defender of Time-Slice Rationality can resist Elga's argument and make use of imprecise credences without embarrassment.

A brief aside: Moss (forthcoming) also gives a time-slice-centric defense of imprecise credences against Elga's objection. On her picture, you "identify" with one of the probability functions in your representor and then ought to act on that basis, maximizing expected utility relative to the probability function with which you identify. If you do not change the probability function with which you identify, you will not be vulnerable to Diachronic Tragedy à la Elga. But if you do change your mind and begin to identify with a different member of your representor, then you may wind up with Diachronic Tragedy. But Moss argues that this is not irrational. Vulnerability to Diachronic Tragedy is no vice if it is the result of having a change of heart and identifying with a different member of your representor than that with which you previously identified.

While I of course endorse Moss's time-slice-centric outlook, I am uncomfortable with her specific defense of imprecise credences. This is because I do not know how to make sense of the notion of "identifying" with a member of your representor. I worry that this sort of talk illicitly reifies the formal machinery we are using to model a certain kind of doxastic state. It is not as if you have a list of probability functions explicitly written in your language of thought, so that you can think more fondly of one than another. Instead, appealing to a representor is supposed to be a way of helping to explain and rationalize a certain sort of behavior. And I am unclear on how having imprecise credences but identifying with probability function P in your representor is supposed to differ in any behavioral respects


from simply having a precise credal state represented by probability function P. Once we talk about identifying with precise probability functions in a representor, how are imprecise credences supposed to differ from precise credences? Moss is aware of this issue, and indeed says that it is a distinction without a difference. You could be interpreted as having imprecise credences but changing which precise member of your representor you identify with, or you could be interpreted as having precise credences but changing which precise credence function you have. Moss regards each as a possible interpretation, but thinks that in the sorts of cases which motivate the introduction of imprecise credences (e.g. where you have unspecific evidence), the former interpretation may be superior. I will not pursue this issue here. In any event, if Moss's defense of imprecise credences is successful, then that is yet more grist for my mill. End of aside.

Let me return briefly to Carnap. I said above that many people may find Uniqueness implausible because of the failure of Carnap's project of Inductive Logic and the seeming inability to devise a consistent Principle of Indifference. But these failures only suggest that we cannot specify a uniquely rational prior probability function using only a finite number of formal principles (e.g. a Principle of Indifference). They do not show that there is no such uniquely rational prior probability function. Some philosophers might find mysterious the notion of constraints on rational credences which do not take the form of a formal principle. Why should we believe in such constraints? But insofar as one finds such substantive constraints on rational credences mysterious, one is led not just to the moderate form of Permissivism that many epistemologists would like to espouse but all the way to a radically subjective form of Bayesianism which allows skepticism, counterinductivism, and even the randomly constructed credences mentioned above to count as perfectly rational. Insofar as we want to rule these sorts of attitudes out as irrational, we have no choice but to adopt substantive, non-formal constraints on rational credences. In this respect, Uniqueness is in the same boat as moderate forms of Permissivism which likewise adopt substantive constraints on rational credences such as favoring simplicity, explanatoriness, and naturalness.

At the end of the day, then, there are four live options for defenders of Uniqueness. These four views result from different combinations of stances toward indeterminacy and imprecision. The first view rejects indeterminacy and imprecision. It says that rational doxastic states must be represented by precise probability functions, and moreover that there is a precise probability function which is determinately the uniquely rational prior probability function. The second view espouses indeterminacy while rejecting imprecision. It says that rational doxastic states must be precise, but while it is determinately the case that there is a uniquely


rational prior probability function, it is indeterminate which function this is. The third view rejects indeterminacy while espousing imprecision. It says that there is a set of probability functions such that it is determinately the case that that set is the uniquely rational prior representor. And the fourth view combines indeterminacy with imprecision, saying that it is determinately the case that there is a set of probability functions which is the uniquely rational prior representor, but indeterminate which set this is. I will not attempt to argue for one of these versions of Uniqueness over the others. Provided that at least one of these versions of Uniqueness is superior to any version of Permissivism, Time-Slice Rationality is in good shape.

.. Synchronic Conditionalization

Once we adopt Uniqueness, it is easy to devise a formal synchronic principle stating what doxastic state you ought to be in at a time, given your evidence at that time. I set aside imprecise credences for the time being, but it is easy to modify or interpret the following principle to fit with any of the four versions of Uniqueness mentioned at the end of the last section.14

Synchronic Conditionalization
Let P be the uniquely rational prior probability function. If at time t you have total evidence E, your credence at t in each proposition H should equal P(H | E).15

This is a purely synchronic principle, since it specifies what your credences should be at any particular time as a function of what your total evidence is at that same time. It makes no reference whatsoever to your credences at other times.16 It is closely related, but not identical, to Williamson's () Evidential Probability, Meacham's () hp-Conditionalization, and Titelbaum's () Generalized Conditionalization. Meacham (see his ), Titelbaum, and Moss (forthcoming) each also make the observation exploited here that diachronic versions of Conditionalization (and related principles) are most naturally motivated by Permissivism while synchronic versions are most naturally motivated by Uniqueness.

14 If you like indeterminacy, then you should endorse Synchronic Conditionalization while holding that it is indeterminate which P is the uniquely rational prior. And if you like imprecise credences, you should interpret Synchronic Conditionalization as saying that if you now have total evidence E, then P(− | E) ought to be a member of your representor just in case P is a member of the uniquely rational prior representor.

15 Note that there is also a synchronic analog of Jeffrey Conditionalization.

16 To be fully rational, an agent must not only have the credences mandated by Synchronic Conditionalization but must also have them for the right reasons. An agent who was struck on the head and happened to wake up with the recommended credences would not thereby count as perfectly rational. To be rational, the agent's credences must also be based on the evidence in a certain way. I discuss basing and so-called doxastic justification in Chapter .
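As a concrete illustration, here is a minimal computational sketch of Synchronic Conditionalization over a finite set of worlds. The prior used below is a mere placeholder: the principle itself is silent on what the uniquely rational prior P actually is.

    # Toy model: a proposition is a predicate on worlds. 'P' is a
    # placeholder stand-in for the uniquely rational prior, which
    # Synchronic Conditionalization does not itself specify.
    worlds = [
        {"rain": True,  "wind": True},
        {"rain": True,  "wind": False},
        {"rain": False, "wind": True},
        {"rain": False, "wind": False},
    ]
    P = [0.4, 0.1, 0.2, 0.3]  # placeholder prior over the four worlds

    def credence_now(hypothesis, total_evidence):
        """Your credence at t in H should equal P(H | E), where E is your
        total evidence at t. Nothing here refers to your credences at any
        earlier or later time."""
        p_e = sum(p for p, w in zip(P, worlds) if total_evidence(w))
        p_he = sum(p for p, w in zip(P, worlds)
                   if total_evidence(w) and hypothesis(w))
        return p_he / p_e

    # Total evidence that it is raining: credence in wind is 0.4/0.5 = 0.8.
    print(credence_now(lambda w: w["wind"], lambda w: w["rain"]))
    # If the rain evidence is later lost (forgetting), the principle simply
    # recomputes from the new, weaker total evidence: 0.6.
    print(credence_now(lambda w: w["wind"], lambda w: True))

The second call illustrates the point made below: when evidence shrinks, the mandated credences are recomputed from the current total evidence alone, with no diachronic bookkeeping.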


Epistemologists often say that you ought to proportion your beliefs to your evidence. The uniquely rational prior probability function P (or its surrogate if we adopt a non-Bayesian framework) can be thought of as characterizing this relation of proportioning. What it is for your beliefs to be proportioned to your evidence E just is for you to have a credence function which equals P(− | E). Similarly, being rational is in large part a matter of believing and behaving in ways that are sensible, given your perspective on the world. In the epistemic case, we can think of your perspective on the world as being constituted by your total evidence, and the "being sensible" relation as involving the uniquely rational prior P. What it is for your beliefs to be sensible, given your perspective on the world, is for you to have a credence function which equals the result of taking P and conditionalizing it on your total evidence E. (For this reason, Synchronic Conditionalization does not require that agents know what the uniquely rational prior is, any more than evidentialists who say that you ought to proportion your beliefs to your evidence think that doing so requires you to be able to fully characterize the relation of evidential support. You can have beliefs that are proportioned to your evidence without having the theoretical resources necessary to say very much about what it takes for a given body of beliefs to be proportioned to a given body of evidence.)

Importantly, if you satisfy Synchronic Conditionalization at each time, then your credences will exhibit the sort of stability over time that we expect from rational agents. If you satisfy Synchronic Conditionalization at all times and your evidence grows monotonically, then your credences will change over time in exactly the manner required by the diachronic principle of Conditionalization. To see this, suppose that at t1 you have total evidence E1 and at t2 you gain evidence E2, so that your total evidence is now E1 ∧ E2. According to Synchronic Conditionalization, at t1 you ought to have credence function P1(−) = P(− | E1), while at t2 you ought to have credence function P2(−) = P(− | E1 ∧ E2). But P2 is the probability function that results from taking P1 and conditionalizing on E2, so satisfying Synchronic Conditionalization in such a case, where your evidence grows monotonically from E1 to E1 ∧ E2, will result in your having the same credences that would be required by the diachronic principle of Conditionalization. If your evidence does not grow monotonically, then satisfying Synchronic Conditionalization at all times will not result in your having credences that conform to the diachronic principle of Conditionalization. This is as it should be, since it is an implausible feature of Conditionalization that it deems forgetting to be an irrational change in belief.17

17 Let me return to the epistemic utility argument for Conditionalization from Greaves and Wallace (), mentioned above in footnote  of this chapter. They argue for Conditionalization on the grounds that it has higher expected epistemic utility (understood as a measure of the accuracy, or distance from the truth, of your credences) than any alternative updating method. It might be thought that the existence of such an argument for Conditionalization is a problem for my view. For while I have rebutted one argument for Conditionalization in Chapter , I have not rebutted their epistemic utility argument. But this thought is mistaken. For one, there are general reasons for skepticism about epistemic utility arguments (see especially Berker ()). But I do not need to rely on these general considerations. For the argument from Greaves and Wallace does not support orthodox diachronic Conditionalization over my Synchronic Conditionalization. Rather, it just supports their disjunction. This is because Greaves and Wallace interpret Conditionalization in such a way that it only applies in cases where your evidence grows monotonically; they do not interpret it in such a way that it prohibits forgetting. Specifically, they show that, in cases where your evidence grows monotonically, updating by Conditionalization maximizes expected epistemic utility, with expected epistemic utilities being calculated relative to your present, pre-update credences. But I agree with them that in cases where your evidence grows monotonically, your later credences should be related to your present credences just as Conditionalization demands! So we agree that if you are rational, then in cases of monotonic evidence growth where you learn E, your later credences should equal your earlier credences conditional on E. This is all that follows from their argument. The question then is whether this is due to Synchronic Conditionalization or orthodox diachronic Conditionalization, and their mathematical results do not bear on this question. So it is open to me to opt for the former. In this way, the defender of Time-Slice Rationality need not have any quarrel with Greaves and Wallace.
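The monotonic-growth claim in the main text above can be checked in one line. Here is the derivation, a sketch in the notation just used (valid whenever P(E1 ∧ E2) > 0):

\[
P_2(H) \;=\; P(H \mid E_1 \wedge E_2) \;=\; \frac{P(H \wedge E_2 \mid E_1)}{P(E_2 \mid E_1)} \;=\; P_1(H \mid E_2),
\]

so the credences mandated at t2 are exactly the credences mandated at t1 conditionalized on the newly gained evidence E2, which is just what diachronic Conditionalization requires.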


One might worry that even if Synchronic Conditionalization works as a characterization of what it is to have perfectly rational credences, it fails to properly rank the suboptimal cases. There is something epistemically worse, the thought goes, about an agent who switches arbitrarily which prior probability function she employs than one who doesn't, even if neither is employing the uniquely rational prior probability function. That is, there is something worse about an agent whose beliefs change in ways that aren't driven by changes in evidence than one whose beliefs are always so driven, even if neither has beliefs which are supported by her evidence.

We can go some way toward accommodating this thought, however, without moving back to a diachronic framework. First, suppose it is indeterminate what the uniquely rational response to a given body of evidence is (in a Bayesian framework, this amounts to its being indeterminate what the uniquely rational prior probability function is), but determinately the case that there is one. Suppose in particular that P1 and P2 are each such that it is indeterminate whether it is the uniquely rational prior. Then, an agent who switches between using P1 as her prior and using P2 as her prior is epistemically worse than an agent who always uses P1 as her prior, since for the former agent, it will be determinately the case that she does not always have rational credences. This is because it is determinately the case that P1 and P2 are not both rationally permissible priors.18 Of course,
18 Compare a case of vagueness. Suppose Andy has more hair than Bob, but each is a borderline case of baldness. It will then be indeterminate whether Andy is bald, and also indeterminate whether Bob is bald, but if I say that Andy is bald and that Bob is not bald, it is determinately the case that I have said something false, for there is no precisification of "bald" such that both claims come out true.


this response only goes through if P1 and P2 are not determinately rationally impermissible priors. Suppose then that it is determinately the case that they are rationally impermissible, but that the agents do not know this. We can still say that there is something worse about the agent who switches between them, for insofar as she is aware that she is switching between them, she can tell from that fact alone that she is not at all times believing in accordance with her evidence. Even without knowing what the uniquely rational prior probability function is, she can tell that she isn't always using it, since she uses a different prior at different times. And there is something epistemically worse about being in this position than being in a position where you don't know whether you are always using the uniquely rational prior, as is the case for the agent who always employs P1 but does not know whether it is rationally permissible.

Uniqueness, in the form of Synchronic Conditionalization, not only does much of the work that diachronic Conditionalization was supposed to do, but also gets around the problems facing the latter. First, Synchronic Conditionalization is compatible with internalism and nicely explains why you ought to be 1/2 confident that you traveled by the Mountains in Two Roads to Shangri-La (see Chapter ). The thought is that which route you took was determined by the result of a coin toss. And your current evidence that you seem to remember traveling by the Mountains does not discriminate between your having traveled by the Mountains and your having traveled by the Sea. So your credence that you traveled by the Mountains ought to equal 1/2. And second, because Synchronic Conditionalization makes no reference to personal identity over time, it obviously faces no trouble about how to apply it in cases of teletransportation, double teletransportation, fission, Parfit's Combined Spectrum, and the like. Synchronic Conditionalization makes reference only to your current total evidence.19

19 Abandoning standard Conditionalization in favor of Synchronic Conditionalization has other benefits as well. Weisberg (a) argues that Conditionalization is incompatible with the holist idea that all beliefs are subject to defeat by undermining evidence. Suppose that you see a jellybean and have a perceptual experience as of its being red, and in response you conditionalize on the proposition that it is red (or Jeffrey conditionalize with high credence that it is red). If you then get evidence that you are colorblind, you should reduce your confidence that it is red, but Weisberg shows that conditionalizing on the proposition that you are colorblind will leave your credence that the jellybean is red unchanged. The reason is that at the outset (before seeing the jellybean), you regarded colorblindness as evidentially irrelevant to the color of the jellybean, and conditionalizing on the proposition that it is red does not change this fact (due to the property of rigidity, discussed in Chapter ). (Note that saying that you ought only conditionalize on propositions about how things seem to you will not help, since even beliefs about perceptual seemings are subject to defeat.) Synchronic Conditionalization does not face Weisberg's problem, however. The jellybean case is simply a case of shrinking evidence. After seeing the jellybean, your evidence includes the proposition that it is red. But upon gaining evidence that you are colorblind, your evidence no longer includes the proposition that the jellybean is red; that proposition has disappeared from your evidence. For this reason, if you always have the credences demanded by Synchronic Conditionalization, your credence that the jellybean is red will rise upon seeing the jellybean and drop upon hearing that you are colorblind.
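The Shangri-La verdict in the main text above can be written out as a short calculation, a sketch writing M for the hypothesis that you traveled by the Mountains and E for your current total evidence (your apparent memory plus your knowledge of the setup):

\[
C(M) \;=\; P(M \mid E) \;=\; \tfrac{1}{2},
\]

since E includes the fact that the route was settled by a fair coin toss, your apparent memory does not discriminate between the two routes, and the Principal Principle then fixes the value at the coin's chance, 1/2.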


Let me reiterate that the general lesson here—that diachronic principles coupled with a commitment to Permissivism can and should be replaced by synchronic principles coupled with a commitment to Uniqueness—is independent of the particular formal framework we employ for representing doxastic states. For instance, if we want to work in a full belief framework rather than a Bayesian one, we should replace a picture which espouses Permissivism and employs a diachronic principle such as the AGM belief revision model with a picture which espouses Uniqueness and involves only a synchronic principle taking a body of total evidence to a unique rational full belief state. And similarly for all manner of alternative ways of modeling doxastic states, such as ranking functions (Spohn ()), partially defined probability functions, etc.

.. Time-Slice Evidence

Synchronic Conditionalization, as I noted above, is neutral on what counts as evidence. It is compatible with any account of evidence whatsoever (provided that evidence is conceived as consisting of propositions). Nevertheless, one might worry that any plausible account of evidence will be inconsistent with Time-Slice Rationality, if what your evidence is depends on what attitudes you had in the past, or if we cannot characterize your evidence without making reference to the relation of personal identity over time.

First, if content externalism is true, then what your mental states are might depend on your past attitudes and on facts about personal identity. And this means that what your evidence is will depend on your history unless your evidence is restricted to e.g. propositions about your retinal images, which are arguably content-less (that is, the retinal images are content-less, not the propositions about them). If your evidence consists of, say, propositions about how things appear to you, then content externalism entails that what your evidence is depends on your past. For when you see a glass filled with clear liquid on the table, whether your evidence is that it appears to you that the glass is full of water, or instead that it appears that the glass is full of XYZ, depends on whether your past includes interactions with water or instead with XYZ.20

However, content externalism does not threaten Time-Slice Rationality. For Time-Slice Rationality does not hold that what you ought to believe at a time supervenes on your intrinsic physical properties at a time. It entails simply that

20 See Putnam ().


what you ought to believe at a time does not depend on what attitudes you have at other times, except insofar as they affect your present mental states. But if content externalism is true, then facts about your past do affect what your evidence is (and hence what you ought to believe) precisely by affecting your present mental states. Facts about your past affect whether your present mental states include attitudes toward water or attitudes toward XYZ.

A worry still remains: If the contents of your attitudes depend on how you were in the past, then to characterize your evidence, do we not have to make reference to the relation of personal identity over time? No. Content externalism does not entail that the contents of your attitudes depend on facts about personal identity over time. The contents of your attitudes depend, inter alia, on facts about the causal history of your psychological states, but not on facts about personal identity as such. Consider a case of Double Teletransportation, where Pre enters the machine, and at the instant her body is vaporized, two molecule-for-molecule duplicates, Lefty and Righty, are created in San Francisco and Los Angeles, respectively. Whether Lefty has thoughts about water or about XYZ depends on whether Pre had causal interaction with water or instead with XYZ, regardless of whether Lefty is the same person as Pre (and indeed, there are good arguments that Lefty cannot be identical to Pre, if identity is to be transitive). This is because the causal history of Lefty's concept runs through Pre, whether or not Lefty is identical to Pre or merely R-related to Pre (where, again, Parfit's R-relatedness is the relation of psychological continuity with the right sort of cause). So facts about past time-slices causally related to Lefty's present time-slice affect the contents of Lefty's thoughts, independently of whether or not these past time-slices bear the relation of personal identity over time to Lefty.21

The general lesson is that the contents of your present attitudes depend on facts about their causal histories, but they do not depend on facts about personal identity over time as such; it is just that the causal history of your attitudes will typically run through past time-slices of you.

Similar comments apply to Williamson's () theory of evidence. Williamson argues for E=K, the claim that your evidence consists of all and only the propositions that you know. Whether you have knowledge of a proposition, as opposed to mere true belief, may depend inter alia on facts about the past, such as facts about how you formed that belief in the first place. But Williamson holds that knowledge

21 Note also that in cases of testimony, where you acquire the concept by someone's telling you "Water is good to drink," whether your concept is of water or instead of H2O depends on facts about the testifier. So what the contents of your mental states are depends on the causal history of your concepts, whether that causal history runs through your past time-slices or through time-slices of someone else (such as the testifier). The interpersonal and the intrapersonal are on a par in this regard.


is a mental state. If he is right, then it may be that your past attitudes affect what you ought to believe (by affecting your knowledge, and hence your evidence), but they do so only by affecting your present mental state. This is perfectly compatible with Time-Slice Rationality.22

One might also worry that what you know depends on facts about personal identity over time. But this thought should be resisted. Insofar as whether a true belief of yours constitutes knowledge depends on the past, it depends on the causal history of that belief. And again, the causal history of a belief will typically run through past time-slices who are part of the same person as you. But whether a true belief is knowledge does not depend on personal identity as such. First, in cases where the causal history of your belief runs through another person (as in cases of testimony), facts about that person's doxastic history may affect whether you know the proposition in question. And in a case like Double Teletransportation where a belief of Lefty's was originally formed by Pre, whether Lefty's belief constitutes knowledge may depend on how Pre initially formed that belief, even if Lefty and Pre are not identical, but merely R-related to each other. Thus, even if E=K is true, characterizing your present evidence does not require making reference to the relation of personal identity over time, and so Williamson's theory of evidence is compatible with Time-Slice Rationality, provided he is correct that knowledge is a mental state.

The same comments apply to Burge's () theory of the epistemic role of memory. Burge's focus is demonstration (or deduction), where memory is important since in lengthy demonstrations one cannot hold all of the premises and steps in the deduction in one's consciousness at the same time. He is criticizing Chisholm (), who thinks that in demonstration, "we must rely upon memory at various stages, thus using as premisses contingent propositions about what we happen to remember." Burge argues that Chisholm misidentifies the role of memory in demonstration. He writes (, ):

Memory does not supply for the demonstration propositions about memory, the reasoner, or past events. It supplies the propositions that serve as links in the demonstration itself. Or rather, it preserves them, together with their judgmental force, and makes them available for use at later times.

We can think of Burge as arguing that the epistemic role of memory is not to provide you with evidence consisting of propositions about what you seem to

22 The degree to which an E=K view is in the spirit of Time-Slice Rationality may depend on one's account of knowledge, however (Kelly (forthcoming)). A Williamsonian safety-based approach to knowledge, whereby whether you know P depends on modal facts about your belief state such as whether you could easily have falsely believed P, is more in the spirit of my view than, say, a reliabilist account of knowledge, on which knowing is tied directly to historical facts about your beliefs.


remember, but rather to supply as evidence the propositions that you seem to remember. So, if you are doing a demonstration and you remember the premise P, then later on in the demonstration your evidence is not the proposition that you seem to remember that P, but rather simply the proposition P.

But regardless of whether Burge is correct about the epistemic role of memory, his view is compatible with Time-Slice Rationality if memory is a mental state (as Williamson would likely hold, given his argument that knowledge is a mental state). Granted, past events will play an important role in determining whether you are in this mental state, but that is acceptable by the lights of Time-Slice Rationality. What is important is just that what you ought to believe supervenes on your present mental states, even if these mental states include memory, which depends heavily on the past.23

A further worry for Time-Slice Rationality is that requiring your evidence to supervene on your present mental states is overly restrictive. Consider a case from Christensen (). You once encountered compelling evidence that the population of India is greater than that of the US. But you have now forgotten this evidence. You have no inkling of where you might have read that India is more populous than the US. But you still believe that proposition. And intuitively your belief is rational. But what is your evidence now for the truth of that proposition? Certainly, nothing about your present perceptual experience, we may suppose, has anything to do with the relative population sizes of India and the US. Is Time-Slice Rationality therefore committed to the highly counterintuitive claim that you ought in fact not to believe that India is more populous than the US?

In answering this question, first note that Time-Slice Rationality is not committed to the claim that your evidence supervenes on your present occurrent or experiential mental states, but only that it supervenes on your present mental states

23 Of course, you might worry that memory crucially depends on facts about personal identity, so that giving an important epistemic role to memory conflicts with Time-Slice Rationality. This is certainly true for episodic memory. You cannot remember going to Paris unless you once went to Paris. This observation provides an objection to Locke's memory criterion of personal identity, on which a later time-slice bears the relation of personal identity to a previous time-slice just in case the later time-slice remembers some of the experiences of the previous time-slice. If you can only remember things that you experienced, then Locke's theory is circular. Two comments: First, while it is true that episodic memory—memory of experiences—depends on personal identity, it is not obvious that the same goes for declarative memory—memory of facts. Perhaps Lefty can remember that Tashkent is the capital of Uzbekistan in virtue of the fact that Pre once learned this fact, even if Lefty is not identical, but instead merely R-related, to Pre. And it is declarative memory with which Burge is concerned. Second, I suspect that Burge might be just as happy if we (following Parfit's () discussion of Locke and episodic memory) replaced talk of memory with talk of quasi-memory, which is just like memory but without the requirement of identity. We could then say that if you quasi-remember that P, then your evidence includes P itself, rather than just the proposition that you seem to (quasi-)remember that P.


simpliciter. Hence your present evidence could include facts about e.g. memories that you have not accessed and made explicit in thought. Still, in the case where you have really forgotten the evidence on which your belief was originally based—where you do not even have some unaccessed memory of having read about India's population on Wikipedia—it is difficult to point to anything about your mental states, whether occurrent or non-occurrent, which could constitute evidence that India is more populous than the US. Goldman () calls this the "problem of forgotten evidence" and uses it to argue that facts about your past mental states must make a difference to what it is rational for you to believe. Harman () uses the same consideration to argue for the principle of Conservatism, which can be thought of as a diachronic norm saying that it is rational for you to continue to hold a belief in the absence of any reason not to.

But Time-Slice Rationality has all the resources necessary to account for the rationality of your present India belief. As Christensen () notes, when you find yourself with a belief, the mere fact that you have the belief is some evidence for its content, since you have background evidence that most of your beliefs are formed on the basis of good evidence. He writes that "[your] reasons for maintaining that belief are on this view exhausted by [your] reasons for thinking [your] present belief likely to be accurate" (). So, in typical cases where you believe that most of your beliefs were originally based on good evidence, merely finding yourself with a belief will constitute evidence that it is true.

What if in fact your original belief was baseless, the result of wishful thinking rather than any decent evidence?24 Goldman thinks that in such a case your present belief is irrational (even though, as we have seen, your having the belief will be some evidence for its content). I am not convinced. If you are ignorant of the fact that your belief was based on wishful thinking and have background evidence that most of your beliefs were formed on the basis of good evidence, it would be quite irrational for you not to have and maintain the belief that India is more populous than the US. As Smithies () argues, your India belief might not constitute knowledge, but it is nonetheless rational. Smithies compares this case, in which you have forgotten the bad original basis of your belief, with a case of belief formed from an irrational testimonial source. If you hear from a source you justifiably take to be reliable that India is more populous than the US, then you are rational in coming to believe that India is more populous, even if the source was in fact insincere or irrational. However, the insincerity or irrationality of your testifier might mean that your belief will not constitute knowledge. Here is Smithies ():

24 We will consider the issue of basing and so-called doxastic justification further in Chapter .


If I form two beliefs on the basis of testimony, derived from two distinct testimonial sources, my beliefs might be equally rational, even though only the first amounts to knowledge, because the second is derived from an irrational testimonial source. Similarly, if I preserve two beliefs in memory, they might be equally rational, even though only the first amounts to knowledge, because the second was formed in an irrational way.

It would be misguided to object that this view is too permissive, that it entails that you can become justified in having a belief just by forming it. For even if you can form beliefs at will (which is itself doubtful), if you are even minimally self-aware you will have evidence that, unlike the vast majority of your beliefs, this one in particular was not formed on the basis of good evidence. Therefore, you will not be able to bootstrap your way to rational beliefs in this way.

Summing up, I have not defended any specific account of evidence. Instead, I have argued that, contrary to what you might have initially suspected, Time-Slice Rationality is compatible with a wide range of views of evidence, from a phenomenal conception of evidence on which your evidence consists of propositions about how things appear to you, all the way to Williamson's view that your evidence is what you know. Since most views of evidence on the market hold that your evidence supervenes on your present mental states, Time-Slice Rationality will likewise be compatible with most of these views of evidence. Whatever your favored account of evidence, it can most likely be plugged in to the framework provided by Time-Slice Rationality.

.. Processing Speed

You might worry that having only synchronic requirements like Synchronic Conditionalization conflicts with the fact that adjusting your credences in response to changes in evidence takes time. When you gain or lose a piece of evidence, it takes a little bit of time to get your credences back into line with Synchronic Conditionalization. By the same token, if it takes you a bit of time to come to make a decision, then (the objection goes) we cannot have a fully synchronic picture of practical rationality. Now, I will address the issue of conscious reasoning in Chapter , but the problem will arise even in the context of automatic subconscious mental processing. For even this subconscious processing takes time, if only a few milliseconds. Is that a problem for my view? How can we have a time-slice-centric picture of rationality when time-slices don't last long enough to come to satisfy my proposed requirements of rationality?25

25 Importantly, in abstracting away from limitations in processing speed, my Synchronic Conditionalization is on a par with orthodox diachronic Conditionalization. For the latter says that at the same time that you gain some new evidence E, your credences should equal your old credences conditional on E. There is no time-lag allowed by orthodox diachronic Conditionalization, just as there is no time-lag allowed by Synchronic Conditionalization.

One response on my part would be to slightly moderate my position. Instead of having a time-slice-centric picture of rationality with time-slices being understood as instantaneous time-slices, we could have the time-slices understood as very short-lived person segments—person segments just long enough for the requisite processing to take place. But this response clearly invites worries about a slippery slope. Once we have departed enough from a fully synchronic picture of rationality to allow for processing times, why not just go ahead and adopt standard diachronic requirements as well?

For this reason, my preferred strategy is to say that ideal rationality does require satisfying requirements of rationality instantaneously. If it takes you some amount of time to come to satisfy a synchronic requirement of rationality, you count as irrational (albeit probably blamelessly so) in the meantime (cf. Broome (, ch. )). Suppose that at t₁ your total evidence changes from E₁ to E₂, but because of processing speed limitations it takes you until t₂ to come to have credences which equal the uniquely rational priors conditionalized on E₂. In that case, I say, you deviate from ideal rationality at the times between t₁ and t₂. Similarly, if at t₁ your credences and utilities are such that you are rationally required to decide to φ, but because of processing speed limitations you do not actually decide to φ until t₂, then you likewise deviate from ideal rationality at the times between t₁ and t₂. Ideally rational agents would not require time to come to satisfy the requirements of rationality. We do require time to do so, but that is because we are only imperfectly rational, even if blameless for our cognitive limitations.

This response, and the accompanying methodological choice of focusing on the ideal, results in a simpler overall theory. It would introduce considerable complexity into a model of rationality to try to make it sensitive to our processing speed and other cognitive limitations, especially given that each of us has very different cognitive limitations. Better to abstract away from cognitive limitations that are difficult to incorporate into a model. On this view, rationality is whatever our best model of rationality says it is. If theoretical considerations of simplicity and elegance then mean that the best model of rationality is one that is not sensitive to contingent limitations on processing speed, for instance, then we should say that ideally rational agents would not have such limitations.

You might worry that this is in conflict with my earlier insistence that ideal rationality does not require perfect memory or maximal strength of will (or the ability to self-bind). But it is not. For it is actually quite easy to incorporate imperfect memory and limited strength of will into a model of rationality. The move from diachronic Conditionalization to Synchronic Conditionalization means that our formal model is able to account for failures of memory (and this move was independently needed to account for loss of evidence that isn't due to memory failures).

And many models of practical rationality do not assume maximal strength of will; a model which combines expected utility theory with Options-as-Decisions is one such model. So, I think that our best model of rationality will not allow for processing speed limitations, but will allow for imperfect memory and limited strength of will, and this motivates me to say that while the former constitutes a failure of ideal rationality, the latter does not.
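
For concreteness, here is a minimal computational sketch of the synchronic requirement under discussion. It assumes a finite set of worlds and a given rational prior; the particular worlds, prior, and evidence are illustrative assumptions of mine, not anything argued for in the text:

# Synchronic Conditionalization (sketch): at any time, your credences should
# equal the uniquely rational prior conditionalized on your current total
# evidence. Worlds, prior, and evidence below are assumed for illustration.

rational_prior = {'w1': 0.25, 'w2': 0.25, 'w3': 0.25, 'w4': 0.25}
total_evidence = {'w1', 'w2', 'w3'}  # worlds compatible with your evidence

def conditionalize(prior, evidence):
    mass = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / mass if w in evidence else 0.0) for w, p in prior.items()}

print(conditionalize(rational_prior, total_evidence))
# {'w1': 0.333..., 'w2': 0.333..., 'w3': 0.333..., 'w4': 0.0}
# An agent whose credences only reach this profile after a processing delay
# counts, on the view above, as blamelessly deviating from ideal rationality
# in the interim.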

. Replacing Utility Conditionalization

We have seen that adopting a Uniqueness thesis for credences (or beliefs) alleviates the need for diachronic rational requirements on credences. Assuming as a datum that rational agents' credences change only in response to changes in evidence, it is only the Permissivist who needs diachronic principles to account for this datum. I have argued that we should espouse Uniqueness and abandon diachronic principles. What about the case of preferences? An analogous Uniqueness thesis for preferences would do much of the work that the diachronic principle Utility Conditionalization, considered in Chapter , was meant to do in explaining the alleged datum that it is irrational to experience certain widespread changes in your preferences:

Utility Conditionalization (rough, informal version)
It is a requirement of rationality that your preferences over maximally specific possibilities do not change over time. Your preferences over non-maximally specific propositions change only as a result of rational changes in your credences.

Preference Uniqueness
Given a body of total evidence, there is a unique set of preferences that it is rational to have.

Alleged Datum
Rational agents do not change their preferences except in response to new information.

Clearly, Preference Uniqueness entails the Alleged Datum, thereby obviating the need for a diachronic principle like Utility Conditionalization. But Preference Uniqueness is an extremely strong thesis, likely yet more controversial than the already controversial Uniqueness thesis for credences. Note, however, that the Alleged Datum may also be implausible. It may be that there are no specific constraints on how to change your preferences. Perhaps it is always rationally permissible to change even your ultimate preferences, provided you end up with preferences which are defensible in their own right. Of course, it may be true that rational agents do not undergo wild, dramatic fluctuations in their preferences.

But this could be explained not by appeal to diachronic principles for preferences, but instead by appealing to the fact that there are typically strong pragmatic reasons to try to prevent yourself from changing your preferences back and forth moment to moment. After all, if you keep changing from one set of preferences to another, it is unlikely that you will satisfy any of these sets of preferences. But the existence of pragmatic reasons to try to cause yourself not to undergo massive preference shifts does not entail the existence of any diachronic principles for preferences. Therefore, given that the Alleged Datum is not obviously true, I claim only that the following biconditional is true:

The Alleged Datum is true if and only if Preference Uniqueness is true.

Insofar as rational agents must not change their preferences except in response to new information, this is because their evidence fixes what they ought to prefer. For instance, if it is rationally impermissible to change from ultimately caring only about the total well-being in the world to ultimately caring only about your own lifetime well-being, it is more plausible that this is because the latter sort of preference is irrational in its own right (given your evidence) than that there is a sui generis diachronic norm against changing what you ultimately care about.

But is Preference Uniqueness true? In the remainder of this section, I investigate the prospects for defending Preference Uniqueness. While I do not fully endorse Preference Uniqueness, I think that it is less implausible than you might think.

.. Humeanism

On an extreme permissivist view of preference, there are no rational requirements on preference whatsoever. Reason is concerned with beliefs, not preferences. Hume, in the Treatise of Human Nature, held something like this view, famously writing that,

'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. 'Tis not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. 'Tis as little contrary to reason to prefer even my own acknowledg'd lesser good to my greater, and have a more ardent affection for the former than the latter. (...)

On Hume’s view, rationality does not dictate which specific desires or preferences you ought to have. Now, Hume does sometimes suggest that preferences or desires could be irrational, but that the irrationality of preferences could only derive from the irrationality of the beliefs to which they are linked. (I set aside the interesting question of the relation between preferences/desires and other “passions”.)

Passions can be contrary to reason only so far as they are accompany'd with some judgment or opinion. According to this principle which is so obvious and natural, 'tis only in two senses, that any affection can be call'd unreasonable. First, When a passion, such as hope or fear, grief or joy, despair or security, is founded on the supposition of the existence of objects, which really do not exist. Secondly, When in exerting any passion in action, we chuse means insufficient for the design'd end, and deceive ourselves in our judgment of causes and effects . . . In short, a passion must be accompany'd with some false judgment, in order to its being unreasonable; and even then 'tis not the passion, properly speaking, which is unreasonable, but the judgment. (...)

So for Hume, a desire can be irrational only if it is based on a false belief (and, as the last clause of the above quote suggests, it may be that even then it is the belief which is irrational, and the desire is simply likely to be dislodged when reason criticizes the belief). For instance, a desire not to board an airplane is irrational if it is based on a false belief about the dangers of air travel, and a desire to go to the movie theatre is irrational if it is based on a false belief about which film is showing. Hume would do better to replace talk of false beliefs with talk of irrational beliefs, for a desire not to board the airplane would, I think, be quite rational if based on a rational, evidence-based belief about the dangers of air travel, even if this belief in fact turned out to be false. Regardless, we might interpret Hume as thinking that ultimate preferences over maximally specific possibilities cannot be irrational; only derived preferences which result from combining your ultimate preferences with your credences can be irrational, and then only if those credences are themselves irrational.

This sort of extreme Preference Permissivism is not plausible, for it does not even require that your preferences obey certain purely formal or structural constraints. For instance, it does not require that your preferences be transitive in order to count as rational. But plausibly, intransitive preferences are ipso facto irrational (although, given the general rebuttal of the Diachronic Tragedy Arguments in Chapter , this claim would need to be supported by something other than the classic money pump argument). So, a more moderate Preference Permissivism would hold that any set of ultimate preferences is rationally permissible, provided that they satisfy various formal constraints such as transitivity, irreflexivity, and the like.

.. Broome and Parfit against Humeanism

But even this more moderate Preference Permissivism may be untenable. Broome () and Parfit () have argued that there must be substantive constraints on rational preferences in addition to merely formal constraints. In my view, Parfit's argument is more persuasive than Broome's, and so I will begin with the latter.

Broome argues that substantive constraints on rational preferences are necessary in order to give the purely formal constraints any bite. Consider transitivity. Broome imagines Maurice, who given a choice between visiting Rome (R) and mountaineering in the Alps (M) would choose Rome, and who given a choice between staying at home (H) and visiting Rome would choose to stay home. But between staying home and going mountaineering, he would choose mountaineering. Maurice's preferences seem to be intransitive; he seems to prefer H to R, R to M, and M to H. Is Maurice therefore irrational? Maurice denies the charge. He justifies himself by saying that:

Mountaineering frightens him, so he prefers visiting Rome. Sightseeing bores him, so he prefers staying at home. But to stay at home when he could have gone mountaineering would, he believes, be cowardly. That is why, if he had the choice between staying at home and going mountaineering, he would choose to go mountaineering. (Broome (, ))

So arguably, Maurice's preferences are really transitive after all. We just didn't cut up the objects of his preferences finely enough. We must divide H into two more specific possibilities: staying at home without having declined a mountaineering trip (H₁) and staying at home having declined a mountaineering trip (H₂). Maurice prefers H₁ to R and R to M. If his preferences are to be transitive, he must therefore prefer H₁ to M, but transitivity does not require him to prefer H₂ to M. The fact that Maurice would choose mountaineering over staying at home, given a choice between the two, shows only that he prefers M over H₂; it does not show that he prefers M over H₁, which would violate transitivity.

Broome's worry is that if you can always chop up the objects of your preferences ever more finely to escape allegations of having intransitive preferences, then the transitivity requirement has no bite. Of course, as just noted, transitivity does still require that Maurice prefer H₁ over M, but it is impossible for Maurice to ever face a choice between H₁ and M; he cannot choose between mountaineering and staying home without having declined a mountaineering trip. A preference between H₁ and M is therefore what Broome calls a "nonpractical" preference. Broome concludes that if we are allowed to individuate outcomes ever more finely to escape intransitivity, then "transitivity does not constrain preferences in a practically significant way" ().
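
The coarse/fine contrast can be checked mechanically. Here is a small sketch (my own illustration, not Broome's) that detects cycles in a strict-preference relation; Maurice's coarse-grained preferences contain one, while his fine-grained preferences do not:

# Intransitivity shows up as a cycle in the strict-preference graph.
def has_cycle(prefs):
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, set()).add(worse)
    def reachable(start, target, seen):
        if start == target:
            return True
        if start in seen:
            return False
        seen.add(start)
        return any(reachable(nxt, target, seen) for nxt in graph.get(start, ()))
    # A cycle exists iff some dispreferred item leads back to the item
    # preferred to it.
    return any(reachable(worse, better, set()) for better, worse in prefs)

coarse = [('H', 'R'), ('R', 'M'), ('M', 'H')]    # H > R, R > M, M > H
fine = [('H1', 'R'), ('R', 'M'), ('M', 'H2')]    # H1 > R, R > M, M > H2

print(has_cycle(coarse))  # True: the coarse-grained preferences are intransitive
print(has_cycle(fine))    # False: the fine-grained reinterpretation is consistent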

So arguably, Maurice’s preferences are really transitive after all. We just didn’t cut up the objects of his preferences finely enough. We must divide H into two more specific possibilities: staying at home without having declined a mountaineering trip (H ) and staying at home having declined a mountaineering trip (H ). Maurice prefers H to R and R to M. If his preferences are to be transitive, he must therefore prefer H to M, but transitivity does not require him to prefer H to M. The fact that Maurice would choose mountaineering over staying at home, given a choice between the two, shows only that he prefers M over H ; it does not show that he prefers M over H , which would violate transitivity. Broome’s worry is that if you can always chop up the objects of your preferences ever more finely to escape allegations of having intransitive preferences, then the transitivity requirement has no bite. Of course, as just noted, transitivity does still require that Maurice prefer H over M, but it is impossible for Maurice to ever face a choice between H and M; he cannot choose between mountaineering and staying home without having declined a mountaineering trip. A preference between H and M is therefore what Broome calls a “nonpractical” preference. Broome concludes that if we are allowed to individuate outcomes ever more finely to escape intransitivity, then “transitivity does not constrain preferences in a practically significant way” (). Broome argues that in order to keep the requirement of transitivity from being empty in this way, we should say that rationality requires you to be indifferent between certain outcomes. In this case, if we think that Maurice’s preferences are irrational, we should say that it is irrational to have a preference between H and H . This is enough to show that Maurice’s preferences are irrational, for either he has intransitive preferences over R, M, and the coarse-grained outcome H, or else

This is enough to show that Maurice's preferences are irrational, for either he has intransitive preferences over R, M, and the coarse-grained outcome H, or else he has transitive preferences over R, M, and the fine-grained outcomes H₁ and H₂ but irrationally has a preference between the last two. In short, rational requirements of indifference are needed in order to keep the requirement of transitivity from being empty.26 Non-Humean substantive requirements on preferences are needed in order to give teeth to the purely formal Humean requirements. Here is Broome ():

Now, Broome’s argument is not decisive. Dreier () insists that even if, in a given case where some agent seems to have intransitive preferences, we could reinterpret her as having transitive preferences over more fine-grained objects, sometimes this reinterpretation will simply be incorrect. Whether we should interpret an agent as having intransitive preferences over more coarse-grained objects or as having transitive preferences over more fine-grained objects depends on what preferences the agent actually has! I can put the point no more clearly than Dreier himself (–): Recall how the danger of trivialization arose. We noticed that the re-individuation of alternatives we needed to save some decision theoretic axioms from counting as irrational some kinds of preferences which seem clearly rational, might lead to the erosion of the constraints the axioms place on choice to the point that they could not rule out any possible set of choices as irrational. But this move was too hasty. For we might notice that a peculiar set of preferences like Maurice’s could be rationalized by more finely individuating options, and still ask whether Maurice does in fact individuate the options in that finer way. We may ask Maurice, do you prefer hiking to staying home because you care about the relation the choice of hiking bears to the particular alternative of staying home? If he doesn’t—if, for example, he hadn’t noticed that his pairwise preferences among hiking, Rome, and home were intransitive, or if he did notice it but did not care—then after all his preferences are irrational.

What about Broome’s worry that if—as Dreier concedes—fine-grained individuation of alternatives is sometimes appropriate, norms like transitivity have no bite, 26 Broome (, –) goes on to apply the same considerations to Savage’s Sure-Thing Principle.

What about Broome's worry that if—as Dreier concedes—fine-grained individuation of alternatives is sometimes appropriate, norms like transitivity have no bite, since they then only impose constraints on an agent's non-practical preferences? Suppose that after asking Maurice why he would choose Rome over mountaineering, home over Rome, and mountaineering over home ("Are your preferences intransitive, or did we not chop up the alternatives finely enough?"), Maurice tells us that he really does care about what alternative he would be turning down. He really does care about whether or not staying home would be the result of having turned down an opportunity for mountaineering, as this would be cowardly. Broome's worry is that in this case, transitivity only constrains Maurice's non-practical preferences. Maurice prefers H₁ to R and R over M. Transitivity does then require Maurice to prefer H₁ to M, but this is toothless, Broome claims, because Maurice can never face a choice between H₁ and M.

The first thing to note (which Dreier does not mention) is that even though non-practical preferences between two alternatives are non-practical in the sense that you cannot face a binary choice between them, they may be practical in the sense of making a difference to expected utility calculations in which more than two alternatives are possible. Suppose, for instance, that Maurice is later offered a choice between two gambles. One gamble has probability p of yielding a mountaineering trip, and the other gamble has probability q of yielding either a trip to Rome or a nice, cozy "staycation"—the choice being Maurice's. Here, it seems that Maurice's choice is between having probability p of getting M, on the one hand, and having probability q of getting to choose between H₁ or R, on the other. After all, if Maurice opts for the latter gamble, wins, and opts for the staycation, then he has gotten to stay home without ever having declined a mountaineering trip (though admittedly he did decline a gamble which had a certain probability of yielding a mountaineering trip). So even if Maurice could not face a binary choice between H₁ and M, his preference between them could still affect what he ought to do in more complex choice situations like this one.
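
To see the gamble point numerically, here is a small sketch with made-up utilities and probabilities; none of these numbers come from Broome or Dreier, and losing either gamble is assigned utility 0 for simplicity. The verdict turns on the utility assigned to H₁, even though no binary choice between H₁ and M is ever on offer:

# Gamble 1 yields M with probability p; gamble 2 yields, with probability q,
# a choice between H1 and R. All values below are assumed for illustration.

U = {'M': 0.4, 'R': 0.6, 'H1': 0.9}
p, q = 0.9, 0.5

eu_gamble1 = p * U['M']                # 0.36
eu_gamble2 = q * max(U['H1'], U['R'])  # 0.45: if he wins, Maurice takes the better option

print(eu_gamble1 < eu_gamble2)  # True with U['H1'] = 0.9; set U['H1'] = 0.5
                                # and the comparison flips, so the "non-practical"
                                # preference involving H1 does practical work.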

replacing diachronic principles  real mental states not straightforwardly reducible to dispositions to choose, then we should think that Maurice has all sorts of non-practical preferences and that transitivity and other axioms require that these non-practical preferences have certain structural features on pain of irrationality. In my view, Dreier has given an adequate defense of Humeanism against Broome’s argument, but Parfit’s anti-Humean argument is more persuasive. Parfit argues that preferences can be irrational if they are arbitrary, in the sense of drawing sharp lines between cases that are equally good. This is best seen through examples, from Parfit (, –): A certain hedonist cares greatly about the quality of his future experiences. With one exception, he cares equally about all the parts of his future. The exception is that he has Future-Tuesday-Indifference. Throughout every Tuesday he cares in the normal way about what is happening to him. But he never cares about possible pains or pleasures on a future Tuesday. Thus he would choose a painful operation on the following Tuesday rather than a much less painful operation on the following Wednesday. This choice would not be the result of any false beliefs. This man knows that the operation will be much more painful if it is on Tuesday. Nor does he have false beliefs about personal identity. He agrees that it will be just as much him who will be suffering on Tuesday. Nor does he have false beliefs about time. He knows that Tuesday is merely part of a conventional calendar, with an arbitrary name taken from a false religion. Nor has he any other beliefs that might help to justify his indifference to pain on future Tuesdays. This indifference is a bare fact. When he is planning his future, it is simply true that he always prefers the prospect of great suffering on a Tuesday to the mildest pain on any other day. Consider next someone with a bias towards the next year. This man cares equally about his future throughout the next year, and cares half as much about the rest of his future. [C]onsider a man whose pattern of concern is Within-a-Mile-Altruism. This man cares greatly about the well-being of all those people who are less than a mile from his home, but he cares little about those who are further away.

The preferences involved in Future-Tuesday-Indifference, bias towards the next year, and Within-a-Mile-Altruism are irrational, not because they are based on false or irrational beliefs, and not because they contravene purely formal constraints such as transitivity, but rather because they draw arbitrary sharp distinctions between cases that are extremely similar in all respects worth caring about. If Parfit is right, then Humeanism about preference, even the more moderate variety which allows for formal constraints on rational preferences, is false. There must additionally be some substantive constraints on rational preferences. While Broome's anti-Humean argument can be resisted if we are sufficiently realist about preferences, resisting Parfit's argument requires biting the bullet and countenancing that Future-Tuesday-Indifference, bias towards the next year, and Within-a-Mile-Altruism are perfectly rational.

.. Desire as Belief

Even if we reject Humeanism, there is still a long way to go to arrive at Preference Uniqueness. The sorts of rational constraints on preferences that Broome and Parfit would impose are very weak and sparse. What could impose the strict demands on your preferences required for Preference Uniqueness to hold? The most promising approach, in my view, is to argue that there are rational requirements linking preferences with beliefs about betterness, such as the requirement that you prefer A to B just in case, and to the extent that, you believe that A is better than B. This is closely related to a widely-discussed thesis called "Desire-as-Belief," which states that you must desire that A to the extent that you believe A to be good. If rationality requires that your preferences line up with your beliefs about betterness, then Preference Uniqueness follows from Uniqueness for credences (or beliefs). For if your total evidence uniquely fixes what beliefs you ought to have, including your beliefs about betterness, and your beliefs about betterness uniquely fix what preferences you ought to have, then your total evidence uniquely fixes what preferences you ought to have.27

This is a promising avenue, but we must tread carefully. For Lewis (, ) shows that many attractive ways of making precise the thesis linking desires and beliefs about goodness are unworkable, with other authors strengthening his initial results. Luckily, there are versions of this thesis that avoid Lewis's negative results and can be appealed to by a defender of Preference Uniqueness. I will discuss only two such escape routes, since they yield two interestingly different versions of Preference Uniqueness, a weak version and a strong version. I briefly discuss two other escape routes in footnote 28.

Lewis assumes that a rational agent's doxastic and conative states can be represented by a probability function P and a utility function U, respectively, which are related by the following principle of additivity:

For any proposition A and any partition E₁, . . . , Eₙ,
U(A) = Σᵢ U(A ∧ Eᵢ) × P(Eᵢ | A)
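
As a quick numerical instance of the additivity principle, with an assumed two-cell partition and made-up values of my own:

# U(A) is the average of the utilities of the cells A ∧ E1 and A ∧ E2,
# weighted by the conditional credences P(E1 | A) and P(E2 | A).
p_E1_given_A, p_E2_given_A = 0.7, 0.3  # assumed; a partition, so they sum to 1
u_A_and_E1, u_A_and_E2 = 0.9, 0.2      # assumed utilities of the conjunctions

u_A = u_A_and_E1 * p_E1_given_A + u_A_and_E2 * p_E2_given_A
print(u_A)  # 0.69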

Lewis also assumes that a rational agent updates her credences by Conditionalization (or, more generally, by Jeffrey Conditionalization). Given that evidence-loss is not at issue here, we can grant this assumption for present purposes. He also assumes that a rational agent's utilities for maximally specific possibilities do not change when she gains new evidence. Lewis calls this assumption "Invariance," but we know it by a different name: Utility Conditionalization. Later, I will discuss avoiding Lewis's result by dropping this assumption, but let's grant it for now.

27 Thanks to Timothy Williamson for this suggestion.

Lewis then formalizes the Desire-as-Belief thesis (DAB) as stating that there is a function (the "halo" function) that assigns to any proposition A a proposition A◦ such that, necessarily, for any rational credence-utility function pair < P, U >,

DAB: U(A) = P(A◦)

Interpreting "A◦" as expressing the claim that A is good, the equation states that an agent's utility for A equals her credence that A is good. (Note that Lewis is assuming that we scale utility functions to the closed interval from 0 to 1. If utilities could be less than 0 or greater than 1, DAB would require that rational credence functions sometimes violate the axioms of the probability calculus.) Lewis shows that if this equation holds prior to some learning experience, then in almost all cases it will fail to hold after the learning experience. To show this, Lewis notes that, given his assumptions, DAB is equivalent to a pair of claims DACB (Desire-as-Conditional-Belief) and IND (Independence). For all A and rational credence functions P:

DACB: U(A) = P(A◦ | A)
IND: P(A◦ | A) = P(A◦)

Lewis (, ) writes, “To derive DACB, we recall that DAB is supposed to continue to hold under redistributions of credence, and we redistribute by conditionalizing on A . . . IND follows immediately from DAB and DACB. Conversely, DAB follows from DACB and IND.” But IND leads to contradiction. Take any A and P such that P(A) and P(A◦ | A) are greater than  but less than  (as Lewis notes, if there are no such A and P, then the case is trivial). This means that each of the four propositions A ∧ A◦ , A ∧ ¬A◦ , ¬A ∧ A◦ , ¬A ∧ ¬A◦ gets positive credence. Then, Lewis notes that there will be various ways of updating by Conditionalization that will make IND go from true to false. For instance, if you conditionalize on A ∨ A◦ , then P(A◦ ) will increase while P(A◦ | A) stays the same. So IND has to go, and since DAB entails IND, DAB has to go as well. Fortunately, however, there are ways a defender of Preference Uniqueness looking to appeal to a rational link between desires and beliefs about goodness can avoid Lewis’s results.28 First, as we noted earlier, Lewis assumes that an agent’s

28 In addition to the two strategies I discuss below, let me briefly mention two others from the literature. First, Byrne and Hájek () note that Lewis's formulation of the principle of additivity for utilities (stated above) is "evidential" rather than "causal." In determining the utility for A, it weights the utility of A ∧ Eᵢ by P(Eᵢ | A), which measures how strongly the agent regards A as being evidence for Eᵢ. But there is an alternative formulation of the principle of additivity that weights the utility of A ∧ Eᵢ not by P(Eᵢ | A), but instead by P(Eᵢ □→ A), the agent's credence that A would be true if Eᵢ were true. Causal Decision Theorists will be sympathetic to the causal version of the principle of additivity, and Byrne and Hájek observe that once we adopt it in place of the evidential version, Lewis's anti-DAB result no longer goes through. Second, Hájek and Pettit () propose exploiting what they call the "indexicality loophole" in Lewis's argument. What Lewis shows is that there cannot be one halo function—the same halo function no matter your credence-utility function pair—which obeys DAB. But this leaves open the possibility that for each credence-utility function pair, there is a halo function—a different one, perhaps, for each credence-utility function pair—which obeys DAB. They propose adopting an indexical version of DAB, which says that for every < P, U > pair, there exists a halo function such that for every proposition A, U(A) = P(A◦). Indexical DAB avoids Lewis's negative results, and Hájek and Pettit argue that an indexical interpretation of the halo function fits naturally with subjectivist and expressivist accounts of goodness. I am confident that Preference Uniqueness is compatible with such subjectivism or expressivism, although I will not argue for this in detail.

Fortunately, however, there are ways a defender of Preference Uniqueness looking to appeal to a rational link between desires and beliefs about goodness can avoid Lewis's results.28 First, as we noted earlier, Lewis assumes that an agent's utilities for maximally specific possibilities do not change when she gains new evidence. This is his "Invariance" and our "Utility Conditionalization." Byrne and Hájek (, ) observe that this "is a crucial assumption for all Lewis-style anti-DAB proofs." In his later proof, it is needed in order for DAB to be equivalent to the conjunction of DACB and IND.29 Actually, as Byrne and Hájek note, we need to reject not only the claim that it is rationally required that your utilities for maximally specific propositions be immune to change as a result of changes in evidence, but also the claim that it is rationally permissible for them to be so immune. This strategy fits nicely with a version of Preference Uniqueness on which total evidence is relevant not only to what your non-ultimate preferences ought to be, but also to what your ultimate preferences ought to be:

Weak Preference Uniqueness
Given a body of total evidence, there is a unique set of preferences that it is rational to have. Different bodies of total evidence sometimes mandate different sets of ultimate preferences.

Given DAB, Weak Preference Uniqueness embodies a view on which rational agents often ought to be uncertain about even the fundamental facts about betterness—about which maximally specific possibilities are better than which. If the betterness in question is moral betterness, this is a picture on which rational agents often ought to be uncertain about what the true moral theory is, and they can gain evidence which bears on this question. It is worth emphasizing, though, that DAB does not rely on the assumption that the goodness in question is moral goodness; it could be all-things-considered goodness, or something else entirely.

29 Thanks to Alan Hájek for helpful discussion of this point. DAB says that U(A) = P(A◦). Learning A makes the right hand side go to P(A◦ | A), and it is natural to think that it makes the left hand side go to U(A | A). By Invariance, U(A | A) = U(A), and so U(A) = P(A◦ | A), which is precisely what DACB says. IND then follows, since we have both U(A) = P(A◦) (DAB) and U(A) = P(A◦ | A) (DACB). In this way, IND, which is the real problem, follows from DAB and Invariance, with the derivation of DACB from the latter two being an intermediate step.

Turn to the second escape route from Lewis's anti-DAB result. Actually, the second escape route consists of two specific strategies, but Lewis () shows that they are equivalent. Price () proposes endorsing DACB but not IND (though he does not present his view in these terms). Lewis's proof relied only on IND; DACB on its own is consistent. Lewis also considers the option of restricting the DAB thesis so that it only applies to maximally specific propositions, those which are true at exactly one possible world. Your credence in A◦ ought to equal your utility for A if A is a maximally specific proposition, but they need not be equal otherwise. He calls this "Desire as Belief Restricted" (DABR). Lewis () acknowledges that DACB and DABR are each consistent; neither falls foul of his anti-DAB result. However, he shows that each is equivalent to a thesis that he dubs "Desire by Necessity" (DBN). According to DBN, "necessarily and regardless of one's credence distribution, certain point-values [i.e. utilities for maximally specific possibilities] must be high and the rest low. Scale these as 1 and 0. Let G be the union of point-propositions with necessarily high value: the objectively desirable point-propositions" (Lewis (, )). Then, for any proposition A and credence function P,

DBN: U(A) = P(G | A)
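
Before unpacking this, a toy numerical instance of DBN may help; the worlds, credences, and choice of G below are all assumed for illustration:

# DBN: U(A) = P(G | A), with G the union of the value-1 worlds.
P = {'w1': 0.1, 'w2': 0.2, 'w3': 0.3, 'w4': 0.4}  # assumed credences over worlds
G = {'w1', 'w3'}                                  # the objectively desirable worlds
A = {'w1', 'w2'}                                  # an arbitrary proposition

def U(prop):
    return sum(P[w] for w in prop & G) / sum(P[w] for w in prop)  # = P(G | prop)

print(U(A))       # 0.333...
print(U({'w3'}))  # 1.0: a good point-proposition gets utility 1
print(U({'w4'}))  # 0.0: the rest get utility 0, as DBN requires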

According to DBN, every agent at every time must have the same ultimate preferences, the same values for maximally specific possibilities, and she must prefer any proposition A to the extent that she believes that one of the objectively desirable maximally specific possibilities will obtain if A obtains. Modulo Lewis's simplifying assumption that maximally specific possibilities receive one of only two values, so that there are no intermediate degrees of goodness, DBN amounts to a very strong version of Preference Uniqueness. While Weak Preference Uniqueness says that different bodies of total evidence can mandate different sets of ultimate preferences, Strong Preference Uniqueness says that every body of total evidence mandates the same set of ultimate preferences, and evidence plays a role only in determining what non-ultimate preferences you ought to have:

Strong Preference Uniqueness
There is a unique set of ultimate preferences that it is rational for one to have. These ultimate preferences, plus a body of total evidence, then uniquely fix all of the preferences—ultimate and non-ultimate—that one rationally ought to have via a principle of additivity for utilities.

Strong Preference Uniqueness, unlike Weak Preference Uniqueness, entails Utility Conditionalization, in the sense that if at all times you obey Strong Preference Uniqueness you will at all times have the same ultimate preferences, and hence obey Utility Conditionalization as a byproduct. This parallels the way in which, given Uniqueness for credences, if at all times you obey Synchronic Conditionalization and your evidence grows monotonically, you will satisfy diachronic Conditionalization as a byproduct.

Strong Preference Uniqueness is a very strong view, but there are things that can nonetheless be said in its favor once we assume DACB or DABR. Plausibly, the fundamental normative facts, such as which moral theory is correct, are a priori. And arguably, ideally rational agents are certain of all a priori facts. After all, an agent's evidence always a priori entails these facts, and it is natural to think that an ideally rational agent will be certain of everything that is a priori entailed by her evidence. Putting these two things together, we get the result that whenever one world is better than another, an ideally rational agent will be certain that the one world is better than the other, and by DACB or DABR, she will prefer the one world to the other. Since this holds for all ideally rational agents, we get the result that it is a requirement of rationality that you prefer one world to another just in case the one is in fact better than the other, and this entails Strong Preference Uniqueness. Again, the defender of Strong Preference Uniqueness need not take a stand on exactly what sense of betterness is in play here, whether it is moral betterness, all-things-considered betterness, or some other sort of betterness.30

Note also that Strong Preference Uniqueness is compatible with its sometimes being indeterminate whether one world is better than another. In such a case, it says that it is indeterminate whether you ought to prefer the one to the other. Strong Preference Uniqueness is also compatible with the ranking of worlds being incomplete, in the sense that for some worlds w₁ and w₂, neither is better than the other, nor are they equally good.

30 However, Strong Preference Uniqueness, and Time-Slice Rationality, are incompatible with the relevant sort of betterness being prudential betterness. If prudential betterness is the relevant sort of betterness, then w₁ is better (relative to you) than w₂ (and so you ought to prefer w₁ to w₂) just in case you have higher well-being in w₁ than in w₂. But this view is extremely implausible. It requires that you privilege even the smallest increase in your own well-being over even quite large increases in the well-being of others. It requires you to be indifferent between worlds in which you have the same total well-being, even if other people are all happy in the one world and miserable in the other. Moreover, it is unclear why you should have to care only about your own total well-being throughout your life rather than, say, your present well-being or your total future well-being. Prudence requires you to privilege your own well-being over that of others, but prohibits you from privileging your present well-being over your past or future well-being. It requires you to be partial with respect to personhood (caring more about yourself than others) but impartial with respect to time (caring equally about all of your time-slices). In the words of Parfit (, ), prudence is incompletely relative and objectionable for this reason.

In that case, a defender of Strong Preference Uniqueness might say that you ought to have imprecise preferences (see Chapter , Section ..), with your conative state being represented by a set of utility functions, some of which rank w₁ higher than w₂, some of which rank w₂ higher than w₁, and some of which rank them equally.

These caveats are important. Suppose that the sort of betterness in play is moral betterness. Some moral theories, such as Utilitarianism, will yield a complete ranking of possible worlds in terms of their total amounts of happiness (though even here some indeterminacy may arise, since it may be indeterminate whether one world contains more happiness than another). But other moral theories will at best yield incomplete rankings of possible worlds. Probably most non-consequentialist theories are like this, and even many consequentialist views will have this feature, for instance if they are based on pluralist axiologies on which multiple incommensurable kinds of value are morally relevant. So Strong Preference Uniqueness is compatible with a range of moral theories, and also with a range of different views about betterness and its structure, if the sort of betterness in play is something other than moral betterness.

We have seen two strategies for defending Preference Uniqueness by appeal to rational requirements linking preferences with beliefs about betterness which avoid falling foul of Lewis's anti-DAB result. The first is to give up Utility Conditionalization (Lewis's Invariance), yielding Weak Preference Uniqueness, on which different bodies of evidence sometimes mandate different ultimate preferences. The second is to adopt DACB or DABR, each of which is equivalent to DBN, which says that all agents must have the same ultimate preferences. Combined with Uniqueness for credences, this yields Strong Preference Uniqueness—all rational agents must have the same ultimate preferences, and their evidence, by uniquely fixing what credences they ought to have, uniquely fixes what non-ultimate preferences they ought to have as well. I will not take a stand on whether Weak Preference Uniqueness or Strong Preference Uniqueness is superior. Each entails Preference Uniqueness, and that is all that is needed for us to account for the Alleged Datum (that rational agents change their preferences only in response to new information) without appeal to Utility Conditionalization as a fundamental requirement of rationality.

Lewis, of course, rejected both of these strategies. With respect to the latter, he wrote that DBN is "a form of anti-Humeanism, sure enough, but not the right form of anti-Humeanism" (, ).31 And he closed his later paper with a defense of Invariance (Utility Conditionalization).

31 This could, however, be read not as a rejection of DBN as implausible, but rather as stating that the sort of anti-Humeanism embodied by DBN was not the kind of anti-Humeanism he was concerned with, one which links contingent belief with contingent desire.

What is odd, however, is that his defense of Invariance can be used to support DBN as well, making it unattractive to espouse Invariance while rejecting DBN. In response to the suggestion that you could rationally change your beliefs about how good some maximally specific possibility is (contra Invariance), Lewis (, ) writes:

But the subcase [i.e. maximally specific possibility] was supposed to be maximally specific in all relevant respects—and that includes all relevant propositions about what would and would not be good. The subcase has a maximally specific hypothesis about what would be good built right into it. So in assigning it a value, we do not need to consult our opinions about what is good. We just follow the built-in hypothesis.

But if maximally specific possibilities have hypotheses about value built right into them which can be read off by the agent, then shouldn't all rational agents have the same beliefs about the goodness of maximally specific possibilities, just as DBN says? (Lewis might have in mind an agent-relative conception of goodness, so that the built-in hypotheses are about goodness relative to the agent in question, but then the question arises why we shouldn't go further and have an agent- and time-relative conception of goodness, which would scuttle his defense of Invariance.) Lewis thus faces a dilemma: either maximally specific possibilities don't have built-in hypotheses about value, in which case his defense of Invariance fails, or they do, in which case DBN looks compelling.

For my part, I prefer to think of maximally specific possibilities as metaphysically possible worlds. They are not linguistic entities, so they don't have value hypotheses "built in" in any straightforward sense. Nevertheless, normative facts, including facts about goodness, supervene in an a priori knowable way on non-normative facts, so each possible world (or, more accurately, each singleton set thereof) entails a proposition about how good it is. Then, the relevant issue is whether rational agents must be a priori omniscient. If they needn't be a priori omniscient, then this suggests that Invariance is false, as new evidence can make a difference to how good one ought to believe a world to be. This is the picture given by Weak Preference Uniqueness. And if ideally rational agents are a priori omniscient, then this supports Strong Preference Uniqueness, as argued above. As Byrne and Hájek () rightly note, this issue turns on how much we are idealizing when we theorize about rationality. In general, I find it theoretically fruitful to idealize away from ignorance of a priori matters, which makes me sympathetic to Strong Preference Uniqueness, but I won't take a definitive stand here. Preference Uniqueness, Weak or Strong, is sufficient for my purposes.

A short note about time-bias. One of the considerations I raised in discussing Utility Conditionalization was that it prohibited most forms of time-bias (the exception being exponential discounting). How well time-bias fares with respect to Preference Uniqueness depends on how we conceive of the good.

Given the background of justifying Preference Uniqueness by appeal to a desire-as-belief thesis, rational time-bias requires that goodness be understood in a time-relative manner. The goodness of a world relative to a given time will depend on how pleasures and pains are distributed relative to that time—whether pleasures and pains are in the future vs. the past, or in the near future vs. the far future. And this means that as we move through time, propositions can go from being good to being bad, and vice versa. Strong Preference Uniqueness requires all agents at all times to have the same ultimate preferences, and so it can only be motivated by a desire-as-belief thesis if we have a time-neutral conception of the good. Therefore Strong Preference Uniqueness deems time-bias to be irrational. Weak Preference Uniqueness, by contrast, is compatible with a time-relative conception of the good and hence with time-bias's being rational. If we assume Weak Preference Uniqueness and time-relative goodness, then evidence will fix what ultimate preferences you ought to have not just by bearing on a priori normative facts, but also by bearing on the question of where you are located in time. This means that whether to favor the Strong or the Weak version of Preference Uniqueness will depend not only on whether rationality requires a priori omniscience, but also on whether time-bias is rationally permissible.

Let me close by reiterating that if you don't like either of these forms of Preference Uniqueness, then in my view you should reject the Alleged Datum rather than endorse Utility Conditionalization as a sui generis diachronic principle. If we reject the Alleged Datum, your preferences—even your ultimate preferences—can change for reasons other than the receipt of new evidence. They can simply change as you grow older or have experiences whose import cannot be characterized merely as a change in evidence. This is probably how changes in ultimate preferences occur in real life. Thus, whether or not you are sympathetic to Preference Uniqueness, I claim that you should endorse the conditional claim that if the Alleged Datum is true, then this is because of Preference Uniqueness rather than an irreducibly diachronic principle like Utility Conditionalization. And the truth of the conditional, rather than the truth of the consequent, is all that is required for Time-Slice Rationality.

. Coda: Uniqueness, Coherence, and Kolodny

In this chapter, I have shown that insofar as your credences ought to be related to each other as Conditionalization demands (setting aside the issue of lost evidence), this can be explained by Uniqueness, in the form of Synchronic Conditionalization, obviating the need for a principle of diachronic coherence as such.

And insofar as your preferences ought to change only in response to changes in evidence, this can be explained by Preference Uniqueness, obviating any need for a diachronic principle for preferences. These are instances of a more general trade-off. The more we can say about which particular attitudes you ought to have, the less need there is for principles of coherence, and vice versa. This trade-off applies not only in the diachronic case which was the focus of this chapter, but also in the synchronic case. Consider contradictory beliefs. Kolodny (a) argues that there is no need for a principle of coherence stating that it is irrational to have contradictory beliefs, since if you believe both H and ¬H, then at least one of your beliefs is not supported by your evidence, and hence is irrational. But note that this assumes that your evidence must either support H, support ¬H, or support neither. So if evidence uniquely determines which beliefs you ought to have, then there is no need for a separate principle of coherence.

Similar remarks apply in the practical sphere. It is irrational to have intransitive preferences. It is irrational to prefer w₁ to w₂, w₂ to w₃, and w₃ to w₁. If Preference Uniqueness is true, then this datum can be explained by saying that if you have these preferences, then at least one of these three preferences is by itself irrational. Either you ought not to prefer w₁ to w₂, or you ought not to prefer w₂ to w₃, or you ought not to prefer w₃ to w₁. Suppose, for instance, that your preferences ought to track betterness. Because betterness is transitive, if you have intransitive preferences, then in at least one case you must be preferring a worse thing over a better thing, and your preferences are irrational for this reason. But if Preference Uniqueness is false, then we need a norm of coherence as such to explain why it is irrational to have intransitive preferences. If rationality does not dictate which particular preferences you ought to have among w₁, w₂, and w₃, then the fact that it is irrational to prefer w₁ to w₂, w₂ to w₃, and w₃ to w₁ cannot be because one of these preferences is by itself irrational. Rather, this combination of preferences is irrational; any set of preferences among these worlds is rational, so long as you don't mix-and-match and have them come out intransitive. This means that we would need a principle of coherence for preferences which directly states that they must be transitive.

I have conceded that uniqueness is more plausible for beliefs than for preferences. But the trade-off is the same in each case. If a uniqueness thesis holds for a certain sort of attitude, then coherence principles—whether diachronic or synchronic—are unnecessary, whereas if uniqueness fails, then coherence principles would be needed if we wanted to rule out certain combinations of attitudes as irrational.

Replacing Reflection Principles

. Expert Deference

In the previous chapter, I looked at replacing diachronic principles, which are incompatible with Time-Slice Rationality, with synchronic principles intended to do much the same work while being compatible with my time-slice-centric picture of rationality. Now I turn my attention to Reflection principles, which are likewise incompatible with Time-Slice Rationality. Can we replace Modified Reflection, a distinctively first-personal principle of deference to your future opinions, with some impersonal principle of deference which avoids reliance on the relation of personal identity over time? The most promising avenue is to replace Modified Reflection with a principle of deference to expert opinion. Recall the original, unmodified Reflection principle for beliefs/credences. Where P₀ is your credence function at t₀ and P₁(H) = n is the proposition that at t₁ you will have credence n in H, the principle states:

Reflection
It is a requirement of rationality that, for all H, P₀(H | P₁(H) = n) = n

As we saw, Reflection is subject to counterexamples involving anticipated future irrationality and anticipated evidence loss, but we can get around these counterexamples by adding some explicit caveats to Reflection, resulting in:

Modified Reflection
It is a requirement of rationality that, for all H, P₀(H | P₁(H) = n) = n, unless you believe that at t₁ you will be irrational or will have lost evidence.
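
It is worth noting an arithmetic consequence of these principles: given a partition of hypotheses about what your t₁ credence in H will be, the law of total probability plus Reflection forces your t₀ credence in H to equal your expectation of your t₁ credence. A toy check, with an assumed distribution over future credences:

# P0(H) = sum over n of P0(P1(H) = n) * P0(H | P1(H) = n), and Reflection sets
# each conditional credence P0(H | P1(H) = n) to n. The distribution is assumed.
future = {0.2: 0.5, 0.9: 0.5}  # P0(P1(H) = 0.2) = 0.5, P0(P1(H) = 0.9) = 0.5

p0_H = sum(n * prob for n, prob in future.items())
print(p0_H)  # 0.55: your current credence is your expectation of your future credence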

But these modifications come at the cost of making the principle inelegant. Moreover, insofar as Modified Reflection appears plausible, this plausibility stems entirely from the thought that if your future self is rational and has more evidence than you, then your future self is an expert relative to your current self, and you ought to defer to expert opinion. A principle of expert deference would subsume Modified Reflection as a special case of a more general and better-motivated principle.

And because it would avoid reference to personal identity over time, an expert deference principle would, unlike Modified Reflection, satisfy Impartiality.1

In formulating a principle of expert deference, it will be useful to start with an extremely strict criterion of what it takes to count as an expert. Let us say that an agent is an expert with respect to you just in case she is (perfectly) rational and has strictly more evidence than you. Given the arguments of the previous chapter, we can interpret being perfectly rational as a matter of obeying Synchronic Conditionalization, and we can interpret having strictly more evidence than you as a matter of assigning probability 1 to strictly more propositions than you. So, an agent is an expert relative to you just in case (i) the set of propositions to which she assigns probability 1 is a proper superset of the set of propositions to which you assign probability 1, and (ii) her credences are the result of taking the uniquely rational prior probability function and conditionalizing it on the conjunction of the propositions to which she assigns probability 1.

Admittedly, this definition of expertise is somewhat artificial. First, everyday experts must be rational, but they needn't be perfectly rational. Satisfying Synchronic Conditionalization is an extremely high standard to meet, so high that almost certainly no real-life experts in fact satisfy it. As we ordinarily think of expertise, someone needn't be perfectly rational to count as an expert; she need only be more rational than you. Second, we ordinarily think that someone might count as better informed than you even if the set of evidence propositions that she possesses is not a proper superset of yours. Intuitively, someone might count as better informed than you even if she lacks a bit of the evidence you have, provided that that lack is compensated for by her having a great deal of other relevant evidence that you lack. In this case, neither your total evidence nor hers is a superset of the other, but her total evidence is much bigger than yours.

1 Such a principle might also be able to subsume David Lewis's () Principal Principle as a special case. The Principal Principle enjoins you to defer to objective chance by matching your credences to what you believe the chances to be. Why defer to what you take the objective chances to be? Well, there is a line of thought in the literature about what objective chances are that ultimately motivates the Principal Principle by subsuming it under an expert deference principle (for discussion, see especially van Fraassen (), Hall (), and Handfield ()). On this line of thinking, the chances at a time are whatever results from taking a rational prior probability function and conditionalizing it on the strongest true proposition about the history of the world up to that time. Put more simply, the chances at a time are the credences of a hypothetical rational agent whose evidence consists of all facts about the past. So, the chances are the credences of a hypothetical expert, since they are the result of taking a rational prior probability function (the rational prior probability function, if the argument of the last chapter is correct) and conditionalizing it on more evidence than you yourself possess, viz. all the evidence about the past. If this conception of chance is right, then the reason you ought to defer to what you take the chances to be is that you ought to regard the chances as expert credences, and you ought to defer to what you take to be expert credences.

i

i i

i

i

i

OUP CORRECTED PROOF – FINAL, //, SPi i

i

or on none. There is no relativization of expertise to a subject matter, whereas we ordinarily think that someone can count as an expert on global warming but not on macroeconomics, for example. All of these idealizations have the cost of making the formal definition of expertise employed here less like our ordinary conception of expertise and making the deference principle less applicable to daily life. But they have the benefit of making things more precise and formally tractable, which will be essential in what follows. As for real-life experts, I suspect that there will be no exceptionless formal principle for how to take their opinions into account, but in most cases the way in which you ought to defer to the opinions of real-life experts will more or less approximate that given in our formal principle below.

With this definition of expertise in hand, we can formulate a principle of deference to expert opinion. Where P_you is your credence function and P_ex^S(H) = n is the proposition that S is an expert with credence n in H, we get:

Expert Deference2
It is a requirement of rationality that, for all H, P_you(H | P_ex^S(H) = n) = n

2 Closely related impersonal expert deference principles are discussed in Gaifman (), Hall (), Elga (), and Titelbaum ().
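A minimal numerical sketch may help here (my own illustration, with made-up numbers, not an example from the text). Expert Deference fixes your conditional credence given each hypothesis about the expert's credence, so by the law of total probability your unconditional credence in H is your expectation of the expert's credence, a consequence the text draws on below.

```python
# Hypothetical distribution over the expert's possible credences in H:
# you think it equally likely that P_ex(H) = 0.25 and that P_ex(H) = 0.75.
cred_in_expert_credence = {0.25: 0.5, 0.75: 0.5}

# Expert Deference sets P_you(H | P_ex(H) = n) = n for each n, so by the
# law of total probability your unconditional credence in H is your
# expectation of the expert's credence:
p_H = sum(n * p for n, p in cred_in_expert_credence.items())
print(p_H)  # 0.5
```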

Expert Deference is intuitively plausible. But there is a serious worry that it may be inconsistent. For in cases where two experts disagree with each other, you cannot simultaneously defer to one’s opinion and to the other’s. If Alice the Expert assigns one credence to rain while Bob the Expert assigns a different credence, you cannot match your credence in rain both to Alice’s credence and to Bob’s credence. It is of course possible for experts to disagree. It is possible for there to be one rational agent with more evidence than you who has credence n in H and another who has credence m ≠ n in H. But the mere possibility of disagreeing experts is not a problem for Expert Deference. There needn’t be anything inconsistent about conditional credences such that P(H | E1) = n and P(H | E2) = m ≠ n. Expert Deference would, however, yield inconsistent recommendations if you could be certain of two experts that they have different credences n and m in H. For then the principle would instruct you to have both credence n and credence m in H. But we can actually show that it is impossible for you to rationally be certain of two experts that they have credences n and m in H, where n ≠ m. Suppose you know (or are certain) that Alice and Bob are experts, relative to you, and you know what credence each of them has in some proposition H. Because you know what credence each of Alice and Bob has, each of them also knows what credences the other has (since experts by definition know more than you). And because you know that Alice knows what Bob’s credences are, Bob knows that Alice knows what Bob’s credences are (and vice versa). And so on. Thus Alice and Bob have common knowledge of what credences they have in H.3 Moreover, because Alice and Bob are experts, they are rational. Assuming Uniqueness and Synchronic Conditionalization, this means that they have common priors. Now Aumann’s () famous “No Agreeing to Disagree” result kicks in. Aumann showed that if two agents with common priors have common knowledge of each other’s credences in a proposition H, then their credences in H must be the same. So Alice and Bob must have the same credence in H. Therefore, Expert Deference will not give conflicting advice in this case. In sum, by the definition of expertise, experts must satisfy Aumann’s assumptions of common priors and, if you know what their credences in H are, they must also have common knowledge of each other’s credences in H, and so cannot have different credences in H.

3 I hasten to add that I employ talk of knowledge merely for convenience. We could replace “knows H” with “assigns credence 1 to H,” and everything would go through in just the same way, provided that we assume that credence 1 is only assigned to truths. A more explicit proof that knowledge of the credences of two experts entails that they have common knowledge of each other’s credences relies on a positive introspection principle that says that if you are certain of a proposition, then you are certain that you are certain of it. If only known propositions get credence 1, then positive introspection amounts to the KK thesis, which says that if you know H then you know that you know H. This is certainly far from uncontroversial, even as an idealizing assumption. See Greco (forthcoming) for a defense of KK.

Unfortunately, we have only blocked one potential source of inconsistency, where you are certain of two experts that they disagree. But this is not the only way that Expert Deference could turn out to be inconsistent. Expert Deference entails that, if you are certain that A is an expert, then your credence in H should be your expectation of A’s credence in H. So Expert Deference will be inconsistent if you are certain that A and B are experts, but your expectation of A’s credence in H differs from your expectation of B’s credence in H. Appeal to Aumann is of no use here. You do not know what A’s and B’s credences in H are (though you have credences about what their credences are), and so A and B needn’t have common knowledge of each other’s credence in H. So, we need a more general defense of Expert Deference. But fortunately, there is just such a defense readily available. Weisberg () and Briggs () show that on certain assumptions, Modified Reflection actually follows from the axioms of the probability calculus and hence must be consistent. But interestingly, their proof also shows that Expert Deference follows from the axioms, on the same assumptions. The crucial assumptions are (i) that at any time the possible propositions that an agent might have as her total evidence are mutually exclusive, (ii) that you are
certain of what the deferee’s (the expert’s or your later self’s) priors are, and (iii) that you and the deferee have the same priors. For the case of Modified Reflection, where the deferee is your later self, assumptions (ii) and (iii) amount to the claim that you are a perfect introspector of your conditional credences and that you are certain you will update by Conditionalization. For the case of Expert Deference, assumption (ii) amounts to the claim that you are certain of what the uniquely rational prior probability function4 is (since an expert must have this function as her prior), and assumption (iii) amounts to the claim that you are rational (since this entails having this same uniquely rational prior probability function as your prior).

4 Here I am assuming a version of Uniqueness based on precise credences, rather than one based on imprecise credences (represented by sets of credence functions). If we employ imprecise credences, we should run the proof for each particular probability function in the agent’s representor (or set of probability functions).

Admittedly, these assumptions are very strong. We will return to them shortly, but for now I want to emphasize that these assumptions are required not only for a defense of Expert Deference, but also for a parallel defense of Modified Reflection. And we will see below that if we drop these assumptions, Modified Reflection (and hence also Expert Deference) gives manifestly wrong results. So even if you are unhappy with these assumptions, it remains the case that if Modified Reflection is true, then so is Expert Deference, and so whatever truth there is behind the first-personal deference principle of Modified Reflection really lies in the impersonal principle of Expert Deference.

Granting assumptions (i)–(iii) for now, the intuitive idea behind Briggs’s and Weisberg’s proof, for the case of Expert Deference, is this. Suppose that S is an expert with credence n in H. By assumption (ii), you are certain of what the expert’s priors are, and so you can reverse engineer what evidence S might have. Suppose that the propositions that S might have as her total evidence which would license credence n in H are E1, E2, . . . , En. So, conditional on the claim that S has credence n in H, you are certain that one of the Ei is true, even though you don’t know which it is. That is, you are certain of the disjunction E1 ∨ E2 ∨ . . . ∨ En. But if you are rational, you and the expert have the same priors, namely the rational ones (assumption (iii)). And it is a theorem of the probability calculus that if the Ei are mutually exclusive (assumption (i)) and for all Ei, P(H | Ei) = n, it follows that P(H | E1 ∨ . . . ∨ En) = n. This means that your credence in H, conditional on the claim that S has credence n in H, must also be n. In sum, conditional on S being an expert with credence n in H, you can figure out which bits of evidence she might have, and using their disjunction as your total evidence, you’ll come out with credence n in H as well. More formally:

Proof
Let E1, . . . , En be the propositions Ei such that P_ex^S(H | Ei) = n.
(1) By assumption (ii), P_you((E1 ∨ . . . ∨ En) ≡ (P_ex^S(H) = n)) = 1
(2) By (1), P_you(H | P_ex^S(H) = n) = P_you(H | E1 ∨ . . . ∨ En)
(3) By assumption (iii), for all Ei, P_you(H | Ei) = P_ex^S(H | Ei) = n
(4) By (3) and assumption (i), P_you(H | E1 ∨ . . . ∨ En) = n
(5) By (2) and (4), P_you(H | P_ex^S(H) = n) = n
Q.E.D.

Thus, if we make assumptions (i)–(iii), Expert Deference follows from the axioms of the probability calculus and hence must be consistent.5 But as I noted earlier, the assumptions are extremely strong. Let’s take them in reverse order, starting with (iii). This assumption says that you and the expert have the same priors. If you are rational, this will be true. This is because by Uniqueness, any (ideally)6 rational agent must have the uniquely rational prior probability function as her prior, so any two rational agents must have the same priors. Assumption (ii) says that you are certain of what the expert’s priors are; that is, you are certain of what the uniquely rational priors are. This is a very strong assumption, but it may be possible to defend it on the grounds that it is an a priori matter what the rational priors are, and ideal rationality arguably requires a priori omniscience. So, understood as a principle of ideal rationality, Expert Deference may be able to avail itself of assumption (ii).7

5 If Expert Deference follows from the axioms, does this make it uninteresting? Here I agree with Briggs (, ). Writing about her having proved that a formalized version of Modified Reflection follows from the axioms, she says, “Even so, it is useful in roughly the way Bayes’s theorem is useful: it expresses a hard-to-calculate quantity in terms of easier-to-calculate parts.”

6 Of course, if you are not perfectly rational, then even if your credences obey the axioms of the probability calculus, you may not obey Expert Deference. But Expert Deference is supposed to be a requirement of rationality, so it is fair to assume that you are otherwise perfectly rational when we defend the principle.

7 In the proof that Modified Reflection follows from the axioms, assumption (ii) amounts to the claim that you know for certain what your own conditional credences (conditional on the evidence you might gain in the future) are, and this assumption is no more plausible, in my view, than the assumption that if you are ideally rational, you will be certain of what the uniquely rational priors are. Additionally, the assumption that you know for certain what your conditional credences are is not just necessary to derive Modified Reflection from the axioms. It is also needed to keep Modified Reflection from giving incorrect results. To take an extreme case, suppose you in fact have credence function P but you are certain that you have some other credence function P′. Let the Ei be the propositions such that P′(H | Ei) = n, and let them be such that P(H | Ei) = m ≠ n. This means that, conditional on the claim that you will later have credence n in H, you are certain that the disjunction of the Ei is true. But your actual credence in H, conditional on the disjunction of the Ei, is m. So in this case, your credence in H, conditional on the claim that you will later have credence n in H, should be m, rather than n, as Modified Reflection requires. So perfect introspection of your priors is needed to keep Modified Reflection from going awry.
replacing reflection principles  But what about (i), the assumption that the propositions an expert might have as her total evidence are mutually exclusive? This amounts to a sort of perfect introspection. If you could not introspect perfectly, it might be possible for you to gain as evidence either E or E ∧ E , which are not mutually exclusive. For you might gain E as evidence without realizing that you had not also gained E as evidence. By contrast, if you could introspect perfectly, you could not gain E but not E as evidence without also gaining as evidence the proposition that you gained E but not E as evidence. Your evidence would include a sort of “that’s all” clause, so to speak. And this “that’s all” clause ensures that the pieces of evidence you might gain must be mutually exclusive.8 Now, perhaps ideal rationality requires perfect introspection, so that even if imperfect agents are not perfect introspectors, ideally rational agents are, making assumption (i) hold at least for ideally rational agents. I myself will remain neutral on whether ideal rationality requires perfect introspection, and hence on whether assumption (i) holds. The important point is while this mutual exclusivity assumption is needed for a defense of Expert Deference, it is also needed for Modified Reflection. If mutual exclusivity fails, then Modified Reflection will in some cases give manifestly wrong results. Williamson (, Ch. ) gives such a case. Suppose there are three possible worlds, w , w , and x. Right now, you assign credence / to each world’s being actual. You are certain that you will shortly gain some evidence. You are certain that if w is actual, then you will gain as evidence the proposition {w , x} that the actual world is either w or x, and you will wind up with credence / in the proposition {w , w }. Similarly, if w is actual, then you will gain as evidence the proposition {w , x} and wind up with credence / in {w , w }. But if x is actual, you will gain as evidence the proposition {x} and wind up with credence  in {w , w }. In tabular form: Actual World w w x

Evidence Gained {w , x} {w , x} {x}

Resulting Credence in {w , w } / / 
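Here is a minimal computational sketch of the case (my own illustration, assuming only the uniform prior and the evidence propositions in the table above); it also computes the Reflection-violating conditional credence discussed in the next paragraph.

```python
from fractions import Fraction as F

prior = {"w1": F(1, 3), "w2": F(1, 3), "x": F(1, 3)}
evidence = {"w1": {"w1", "x"}, "w2": {"w2", "x"}, "x": {"x"}}  # evidence gained at each world

def conditionalize(P, E):
    """The credence function that results from conditionalizing P on E."""
    total = sum(P[w] for w in E)
    return {w: (P[w] / total if w in E else F(0)) for w in P}

target = {"w1", "w2"}
later = {w: sum(conditionalize(prior, E)[v] for v in target)
         for w, E in evidence.items()}
print(later)  # credence 1/2 at w1 and w2, credence 0 at x, matching the table

# "My later credence in {w1, w2} will be 1/2" is true at w1 and w2 only, so the
# current credence in {w1, w2} conditional on that proposition is 1, not 1/2:
half = {w for w, c in later.items() if c == F(1, 2)}
print(sum(prior[w] for w in target & half) / sum(prior[w] for w in half))  # 1
```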

In this case, the possible propositions you might gain as evidence fail to be mutually exclusive. And Modified Reflection is violated (as is Expert Deference, since you regard your future self as an expert). Your current credence in {w1, w2}, conditional on the claim that you will later assign credence 1/2 to {w1, w2}, is not 1/2, but rather 1. For the only worlds in which you later assign credence 1/2 to {w1, w2} are w1 and w2.9 So Modified Reflection gives the wrong answer—it tells you to have credence 1/2 in {w1, w2}, conditional on the claim that you will later have credence 1/2 in {w1, w2}, but your conditional credence should in fact be 1.10 So if the possible propositions that an agent might have as her evidence fail to be mutually exclusive, then Modified Reflection (and Expert Deference) will give incorrect results. So mutual exclusivity is needed not only in the proof that these principles follow from the axioms of the probability calculus. It is also needed to keep them from giving manifestly bad advice.

What I hope to have shown, then, is that if mutual exclusivity holds (along with assumptions (ii) and (iii)), then Expert Deference is consistent and, in fact, follows from the axioms. And if mutual exclusivity does not hold, then neither Expert Deference nor Modified Reflection is right. This means that to the extent that you are sympathetic to Modified Reflection, you should also be sympathetic to Expert Deference. Any truth behind the first-personal Modified Reflection really lies in the impersonal Expert Deference. This, in turn, supports Impartiality, the claim that in determining what attitudes you ought to have, your beliefs about what attitudes you have at other times play the same role as your beliefs about the attitudes that other people have. As far as deference principles go, any intrapersonal requirements of rationality follow from more general principles that apply equally in the interpersonal case.

9 Williamson notes an even stranger result of the case: you currently assign credence 2/3 to {w1, w2}, but you are certain that your later credence in that proposition will be lower, for it will be either 1/2 or 0.

10 Connoisseurs of the Principal Principle may have observed that what is going on here bears some similarity to the Big Bad Bug (Lewis ()). If you come to have credence 1/2 in {w1, w2}, you will not be certain that you have this credence. On this basis, one might hope that Williamson’s case might be dealt with by further modifying Modified Reflection along the lines of an explicit admissibility clause (Lewis ()) or the New Principle (Hall ()). I will not pursue this strategy here, except to note that insofar as it might be used to defend Modified Reflection, it is likely also to be of use in a defense of Expert Deference.

Preference Deference

In the previous section, I formulated an impersonal deference principle which entails Modified Reflection. An analogous impersonal principle of deference for preferences, which entails Preference Reflection, would go as follows. Again, an expert is taken to be someone who is rational and has more evidence than you:
Preference Deference
It is a requirement of rationality that if you believe that some expert S prefers A to B, then you prefer A to B.

If we assume some sort of desire-as-belief thesis, so that you ought to prefer A to B just in case you believe A is better than B, then Preference Deference follows from Preference Uniqueness and Expert Deference. Suppose you believe that some expert S prefers A to B. By the desire-as-belief thesis, you must therefore believe that expert S believes that A is better than B. By Expert Deference, you yourself ought to believe that A is better than B.11 By the desire-as-belief thesis, you therefore ought to prefer A to B.

11 Note, however, that this does not follow if we adopt the indexicality loophole of Hájek and Pettit () to avoid Lewis’s anti-DAB result. On that approach, the proposition expressed by “A is good” (or, formally, by “A◦”) is a function of the credence-utility function pair in question. Deferring to an expert’s belief in “A is good” would therefore be like deferring to the expert’s belief in “I am tall.” Believing that an expert believes the proposition she expresses by saying “A is good” does not mean that you ought to believe the proposition that you would express by saying “A is good,” and for the desire-as-belief thesis, it is only your belief in the proposition that you express by “A is good” that determines whether you ought to desire A.

If Preference Deference is grounded in uniqueness theses for preferences and credences (since Expert Deference requires Uniqueness), then it is not vulnerable to the bootstrapping objection that I raised against Preference Reflection in Chapter . The worry was that if you believe that if you do A, you will later prefer A to B (and vice versa), then Preference Reflection allows you to give yourself a reason to do A just by coming to believe that you will do it. But Preference Reflection only applies in cases where you think your future preference will be rational and based on at least as much information as you now have. And we now have the resources to argue that some of these future preferences must be irrational or involve forgetting, thereby blocking illegitimate bootstrapping reasoning. In the present context, deference to anticipated future preferences is rational only when it amounts to deference to what you take to be expert opinion about goodness. Bootstrapping would require the ability to knowingly gather evidence (in particular, evidence about goodness) in a biased way. It would require thinking that if you do A, you’ll later believe that A is better than B, and that if you do B, you’ll believe that B is better than A, without either of these possibilities involving any forgetting or future irrationality on your part. But as I show in Chapter , Section , this sort of intentionally biased evidence-gathering is impossible. If you think that one act will lead to your believing H and another will lead to your believing ¬H, then one of these possibilities must involve forgetting or future irrationality on your part. This means that with Preference Deference being grounded in uniqueness theses for preferences and credences, the impossibility of rational bootstrapping follows from a more general claim about the impossibility of intentionally biased evidence-gathering.

Preference Deference is naturally put in terms of fine-grained attitudes as follows. Letting U_ex^S = U be the proposition that S is an expert whose preferences are representable by utility function U, we get:

Utility Deference
It is a requirement of rationality that, for all A, U_you(A | U_ex^S = U) = U(A)
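A toy illustration (my own, with made-up numbers) may help bring out the worry developed in the next paragraph: a utility function and a positive affine transformation of it represent exactly the same preferences, yet they assign different values to each option, so Utility Deference treats them differently.

```python
# Hypothetical expert utilities over two options, and a positive affine
# transform of them (same preferences, different zero point and scale):
U = {"A": 0.4, "B": 0.1}
U_prime = {option: 2 * u + 3 for option, u in U.items()}

# Both functions rank A above B, so they represent the same preferences:
assert (U["A"] > U["B"]) and (U_prime["A"] > U_prime["B"])

# But Utility Deference would set your utility for A to 0.4 on one
# representation and to 3.8 on the other:
print(U["A"], U_prime["A"])  # 0.4 3.8
```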

Utility Deference is of course structurally isomorphic to the principle of Utility Reflection considered in Chapter . There, I argued that Utility Reflection is inconsistent unless you must be certain about what ultimate preferences you will have in the future, conditional on the assumption that you will be rational. This is because, if you are uncertain about what ultimate preferences your future rational selves might have, we can get different results from Utility Reflection by replacing one of your possible future utility functions U with a positive affine transformation thereof. This assumes, of course, that we cannot solve the problem of interpersonal comparisons of utility—the problem of settling how to set the zero point and scale of the utility function which is to represent each set of preferences.

This same worry is relevant for Utility Deference. Suppose that the problem of interpersonal comparisons of utility is insoluble. Then, Utility Deference is inconsistent unless you are certain about what ultimate preferences a rational agent must have. This means that inconsistency is a threat unless Strong Preference Uniqueness is true. But if Strong Preference Uniqueness is true, then all rational agents must have the same ultimate preferences, and so all experts must have the same ultimate preferences, since experts are by definition rational. Then, by our earlier stipulation that if two agents have the same ultimate preferences, they should be represented by utility functions which agree on the values they assign to maximally specific possibilities, the inconsistency worry evaporates. For if all the utility functions Ui that you think an expert might have (i.e. all the Ui such that P(U_ex^S = Ui) > 0) must assign the same values to maximally specific possibilities, we cannot play the game of replacing one of the Ui by a positive affine transformation thereof to generate a conflicting claim about what your utilities ought to be. In short, Utility Deference is consistent if Strong Preference Uniqueness is true, but inconsistency threatens if only Weak Preference Uniqueness (or, of course, some version of Humeanism) is true.12

12 Suppose that Weak Preference Uniqueness is true and the sense of betterness in play is moral betterness. New evidence can mandate changes in your ultimate preferences by mandating changes in your credences about which moral theory is true. Then, the inconsistency worry for Utility Deference could be avoided by providing a solution to the problem of intertheoretic comparisons of value—the problem of comparing degrees of goodness or badness across different moral theories. How does the degree to which lying is bad according to Kantianism compare with the degree to which killing to save the greater number is good according to Utilitarianism? Offhand, nothing in the moral theories themselves seems to address this question. This is a clear analog of the problem of interpersonal comparisons of utility. Indeed, given a link between preferences and credences about betterness, a solution to the problem of intertheoretic comparisons of value would amount to a solution to the problem of interpersonal comparisons of utility. If we could fix on a particular value function, with a particular zero point and scale, to represent each moral theory, one’s credences in various moral theories would fix one’s particular utility function. Solving the problem of intertheoretic comparisons of value would thus allow defenders of Weak Preference Uniqueness to embrace Utility Deference. Various proposals for solving the problem of intertheoretic comparisons of value have been made in the literature on decision-making under normative uncertainty. See especially Lockhart (), Sepielli (, ). I am skeptical of the possibility of solving this problem for the same reasons I am skeptical of the possibility of solving the problem of interpersonal comparisons of utility, given in Chapter .

Indeed, if Strong Preference Uniqueness is true, then Utility Deference follows trivially from Expert Deference. The idea is simple. Each utility function uniquely fixes a particular credence function, in the sense that having that utility function requires having that credence function. Let utility function U be such that having it entails having credence function P. Then, conditional on the claim that S is an expert with utility function U, you are certain that S has credence function P. By Expert Deference, your credence function, conditional on expert S’s having credence function P, must also be P. You must also assign the same utilities to maximally specific possibilities as the expert does, if you are to be rational, since by Strong Preference Uniqueness any two rational agents have the same ultimate preferences, and experts are by definition rational. Thus, conditional on S’s being an expert, you must have the same credences and the same utilities for maximally specific possibilities as S. Hence, you must have the same utility function simpliciter. Q.E.D.

This means that in both the case of credences and the case of preferences, strong uniqueness theses ground impersonal deference principles. A pleasing symmetry, in my view. If you don’t like any form of Preference Uniqueness, you shouldn’t like any form of Preference Deference either. This will mean rejecting the first-personal Preference Reflection principle without replacing it by any impersonal deference principle. Nevertheless, in this case we can still account for why Preference Reflection seems to give appropriate results in a variety of cases. Let me mention just two reasons why it will often be appropriate on independent grounds to defer to anticipated future preferences. First, your future preferences are often based on better information than your present preferences. Suppose you are presented with a plate covered by a cloche. You do not know what sort of food is under the cloche,
but you are told that when the plate is uncovered in five minutes, you won’t want to eat what’s on the plate. It seems you shouldn’t wait until the plate is uncovered to form a desire not to eat the mystery meal. You should just adopt that desire now. This is because you expect your future desire to be based on more evidence—in particular, the evidence of seeing whatever it is on the plate—than you have now (and you expect to be otherwise rational). But of course, we don’t need a Preference Reflection principle to capture this. An epistemic principle stating that you ought to defer to expert opinion is enough.

A second reason to defer to preferences you anticipate having in the future is a prudential one. If it is at least permissible to care about your future well-being, and your future well-being is partly determined by your future degree of preference satisfaction, then if you do in fact care about your future well-being, your beliefs about what your future preferences will be are relevant for how best to satisfy your current preference that you have a high level of future well-being. Of course, as a defender of Time-Slice Rationality, I do not think that it is rationally obligatory that you care in some special way about your future well-being (or even about the well-being of your future psychological continuants). But I do think that if Preference Uniqueness is false, then it is rationally permissible to care specially about your future well-being, and this claim of permissibility, unlike the claim of rational obligatoriness, is not in conflict with Time-Slice Rationality.

The foregoing should make it plausible that even if we reject Preference Uniqueness and hence Preference Deference, we can still give an error theory for Preference Reflection—an account of why it might have initially seemed plausible, even if it is ultimately false—using only resources available to the defender of Time-Slice Rationality.

Doxastic Processes and Responsibility

Doxastic Justification

I have argued that there are good reasons for thinking that requirements of rationality must be synchronic and impersonal (i.e. for believing Synchronicity and Impartiality). I have argued against some widely-defended principles which conflict with my view, and I have rebutted the most powerful argument—the Diachronic Tragedy Argument—in their favor. And finally, I have shown that these problematic principles can be replaced by improved, time-slice-centric principles.

But it might be objected that I have been ignoring the most important place where the theory of rationality must make reference to diachronic facts and to personal identity over time. Whether a belief of yours is rational or justified depends not only on whether the belief is supported by your evidence, but also on the basis on which you hold that belief. If your belief is based on wishful thinking, then even if it happens to be supported by your evidence, it is unjustified. In order for a belief of yours to be justified, it has to be not only supported by your evidence but also held for the right reasons. And it might be objected that in order to account for these facts—facts about the basis on which you hold some belief—we must invoke personal identity over time and/or facts about the past history of your belief.

Many epistemologists distinguish between propositional justification and doxastic justification. This is the distinction implicit in the previous paragraph. In the jargon, whether you have propositional justification for believing H depends only on whether your reasons support H. Assuming that the only reasons for belief stem from your evidence, this means that whether you have propositional justification for believing H depends only on whether your evidence supports H. It does not depend on facts about the basis on which you in fact believe H, or even on whether you believe H at all. You can have propositional justification for believing H even if you don’t believe H, or even if you believe H on the basis of wishful thinking. By contrast, you can be doxastically justified in believing H only if you believe H,
and moreover you believe H for good reasons. Propositional justification has a forward-looking flavor, while doxastic justification is backward-looking. Of course, analogous distinctions can be drawn in the cases of preferences and actions, though the terminology of doxastic justification would need changing. So the objection is that reference to your past attitudes may be unnecessary to account for the facts about propositional justification, but it is needed to account for the facts about doxastic justification.

Why think this? Why couldn’t a theory of doxastic justification also be time-slice-centric? The worry is that the facts determining what your belief is based on may have to include facts about the past history of your belief. Whether your belief in H is based on evidence E is at least in part a matter of whether your belief in H was caused by your belief in E (e.g. whether you came to believe H by properly reasoning from the premise that E). If this is right, then whether you are doxastically justified in believing H will depend at least in part on facts about how you were in the past. The theory of doxastic justification will make ineliminable reference to other times and to the relation of personal identity over time in just the way that Time-Slice Rationality forbids.

Let me start with a concessive remark, which is to say that, as formulated earlier, Time-Slice Rationality is strictly speaking only a theory about propositional justification (and its analogs for preferences and actions). In presenting Time-Slice Rationality, I talked about whether how you ought to be depends on facts about your past and future attitudes, and whether it depends in any special way on your beliefs about your past and future attitudes. The locution “you rationally ought to φ” naturally suggests that we are concerned with propositional justification. The claim that you ought to believe H amounts to the claim that you have propositional justification for believing H. It does not entail that you are doxastically justified in believing H. After all, the claim that you ought to believe H does not entail that you in fact believe H, let alone that you believe H on the basis of the right reasons.1

1 There may also be uses of “you rationally ought to φ” which do have to do with doxastic justification. My claim is simply that this is not the most natural reading of the phrase.

Moreover, the motivations marshalled in favor of Time-Slice Rationality apply in the first instance to propositional justification. First, the judgments made about the personal identity puzzle cases were judgments about propositional justification. I argued that in particular puzzle cases like Double Teletransportation or one of the intermediate cases in the Combined Spectrum, once we specify what evidence is possessed at each time by the various characters in the case, we have specified all we need to know in order to determine what each ought to believe. This is a claim about propositional justification.
Second, the internalist considerations I marshaled in favor of Synchronicity were based on considerations specifically about propositional justification. For instance, in the Shangri-La case discussed in Chapter , I argued that upon entering Shangri-La, you ought to have credence 1/2 that you traveled by the Mountains, rather than credence 1 (or close to 1), as diachronic Conditionalization demands. This is a claim about what credence you have propositional justification for having.

So my concessive remark is to say that Time-Slice Rationality is only a theory about propositional justification (and its analogs for preferences and actions). The truth of Time-Slice Rationality allows that a time-slice-centric, impersonal theory of doxastic justification may be impossible. Would this outcome make Time-Slice Rationality a less interesting theory? I don’t think so (though even if it did, this would not be relevant to the question of its truth). For I take propositional justification to be the more central notion. I care about rationality primarily because I want to know what to believe, what to desire, and what to do. I care about evaluating my past beliefs, desires, and actions, or those of other people, only insofar as doing so can help me to determine what to believe, desire, and do now. For instance, evaluating whether another person’s beliefs are rational is relevant to whether I ought to defer to her beliefs or not (Dogramaci ()). Taking this forward-looking stance amounts to treating propositional justification as the sort of justification that is of central importance in a theory of rationality.

Let me now turn to a stronger, less concessive, response to worries about doxastic justification, a response inspired by Williamson. Williamson (, ) endorses the claim that knowledge is the norm of belief. He puts this by saying that “Knowledge sets the standard of appropriateness for belief,” that “Mere believing is a kind of botched knowing,” and that “belief aims at knowledge.” A natural interpretation of these remarks is that a belief cannot be fully justified unless it constitutes knowledge. This is apt to strike one as an extreme view, but Williamson (forthcoming) softens the blow by pointing out that for any given norm, one can define derivative norms that can be satisfied even when one violates the primary norm. Where N is a norm, “there is a secondary norm DN of having a general disposition to comply with N, of being the sort of person who complies with N” (). We can also define “a tertiary norm ODN of doing what someone who complied with DN would do in the situation at issue” (). Complying with one or more of these derivative norms will often constitute a legitimate excuse for failing to comply with the primary norm N.

Let N be the norm of believing H only when one knows H. A person S who believes H without knowing H violates N, but that does not mean that
there is nothing that can be said in favor of either the person or her belief. She may nonetheless have a general disposition to have a belief only if it constitutes knowledge (complying with DN). And it may be that a person with a general disposition to have a belief only if it constitutes knowledge would believe as S does in S’s situation (so that S is complying with ODN in believing H).2

2 Note that it is possible for S to comply with one of DN and ODN but not the other. It may be that she has a general disposition to believe H only if it would constitute knowledge, but fails to manifest that disposition on this particular occasion. In that case, she complies with DN but violates ODN. Conversely, she may lack a general disposition to believe H only when she knows H, but she may nonetheless have done very well on this occasion and believed just as a well-disposed agent would. In that case, she violates DN but complies with ODN.

Williamson is surely right that given a primary norm, one can define various derivative norms. Supposing we grant that knowledge is the norm of belief, what should a Williamsonian say about doxastic justification (assuming we accept the ideology of doxastic justification as legitimate in the first place)? One option would be to identify doxastic justification with complying with N. This is roughly the line that Williamson pursues in that paper, though he does not employ the ideology of doxastic and propositional justification. On this view, if an agent has a belief that falls short of knowledge, then her belief is not justified, but she may be blameless for having the belief if she nonetheless complies with DN or ODN, or both. This option is particularly friendly to Time-Slice Rationality. For provided that knowledge is a mental state, it is a picture on which even doxastic justification supervenes on your present mental states. (Similar consequences would result from taking N to be the norm of having only beliefs which are proportioned to your evidence, rather than having only beliefs which constitute knowledge.)

Another option would be to identify doxastic justification with complying with ODN. S’s belief is doxastically justified just in case, in having that belief, S is doing as someone generally disposed to have only beliefs that constitute knowledge would do in S’s situation. Now, complying with ODN may well not be a time-slice-centric affair. Whether, in believing H, S has done as a generally well-disposed epistemic agent would do in that situation may depend on how S initially formed the belief. So this may well not result in a time-slice-centric conception of doxastic justification. But nevertheless, I think it is still in the spirit of Time-Slice Rationality, for it is a picture on which doxastic justification is identified with compliance with a norm (ODN) which is derivative from a more fundamental norm (N) which is time-slice-centric. As Williamson (forthcoming, ) writes, “Typically, any normative significance that DN possesses is merely derivative from
that of N, and any normative significance that ODN possesses is merely derivative from that of DN, and thence from that of N.”3

3 Even apart from Williamson’s conception of a hierarchy of norms, it is likely that doxastic justification can be defined partly in terms of propositional justification, but not vice versa. A common way of thinking about doxastic justification is that you are doxastically justified in believing H just in case (i) you are propositionally justified in believing H, and (ii) you believe H on the basis of the facts that make you propositionally justified in believing H. This is to start with the notion of propositional justification and define doxastic justification in terms of it, along with the notion of basing. But I do not see how to start with doxastic justification and define the notion of propositional justification in terms of it. For instance, we cannot say that you have propositional justification for believing H just in case, were you to form the belief that H, your belief that H would be doxastically justified. For it might be that you have evidence that strongly supports H even though, were you to come to believe H, you would do so on the basis of wishful thinking. More promising would be to say that you have propositional justification for believing H just in case it is possible for you to come to have a doxastically justified belief that H. But this too is problematic. It might be that your evidence strongly supports H, but you have a psychological quirk that means you cannot actually believe H on the basis of this evidence. Another attempt to analyze propositional justification in terms of doxastic justification might be to say that your evidence consists of all and only the propositions that you are justified in believing. But provided that you can be doxastically justified in believing a falsehood, this theory would have the unattractive feature of treating evidence as non-factive (Littlejohn ()). And, of course, this approach could avoid non-factivity of evidence by holding that a belief is doxastically justified just in case it constitutes knowledge, but as we have seen, this Williamsonian approach is quite amenable to Time-Slice Rationality. Of course, the foregoing by no means constitutes a proof that it is impossible to define propositional justification in terms of doxastic justification, but it nonetheless points toward that conclusion. This asymmetry between propositional and doxastic justification would further suggest that the former is the more fundamental normative notion.

I do not want to debate whether to identify doxastic justification with compliance with N or compliance with ODN. “Doxastic justification” is philosophers’ lingo, after all, and I would even be happy to drop this often murky ideology altogether. What I want to hang my hat on, however, is that the primary epistemic norms are time-slice-centric. They are the norm of having only beliefs that are supported by your evidence, and perhaps also the norm of having only beliefs that constitute knowledge. (These two norms amount to the same thing, of course, if E=K and evidential support is entailment by one’s evidence.) Given these primary, time-slice-centric norms, we can define all sorts of derivative norms, including Williamson’s DN and ODN and many others besides. This liberal stance toward derivative norms is appropriate; we employ epistemic evaluations for lots of different purposes, so it would be narrow-minded to be too militant about what can count as a norm. Now, some of these derivative norms will be time-slice-centric, and others won’t, but what is important for my purposes is that all of them trace their significance back to our primary norms, which are all time-slice-centric. In one way or another, they all trace back to the primary norms of having beliefs supported by your evidence, and having those beliefs meet the standard
of knowledge. Even if some of our practices of epistemic evaluation have a non-time-slice-centric flavor on the surface, nevertheless the bedrock of epistemology is “time-slice first.”

What about Reasoning?

In this book, I have said little about reasoning. The norms I have defended are norms about what to believe, desire, or do, rather than norms about how to reason your way to conclusions about what to believe, desire, or do. This may seem like a serious omission. It is tempting to think that theorizing about rationality is in large part a matter of theorizing about reasoning, distinguishing good reasoning from bad. It is no coincidence that I have said little about reasoning thus far. Time-Slice Rationality says that rationality is a synchronic and impersonal matter—it is not concerned, in the first instance, with how a person is over time. But reasoning is an activity that takes time, and it is first and foremost something that a single person engages in.4

4 Of course, groups may also be said to engage in reasoning, but arguably this is a derivative sense; groups reason only in virtue of and to the extent that their members reason and attempt to persuade others of their views.

I have a twofold response to this worry. First, I am skeptical that a theory of rationality must give norms for reasoning. Second, even if I am wrong about this, it may be that we can evaluate patterns of reasoning as good or bad insofar as they are reliable or unreliable means of coming to better satisfy the purely synchronic norms advocated by Time-Slice Rationality. Even if reasoning is itself a diachronic matter, this conception of reasoning and how to evaluate it finds its source in purely synchronic norms about propositional justification and thus is still in the spirit of Time-Slice Rationality.

Start with my first response. Why think that it is part of the task of a theory of rationality to come up with norms for reasoning? Kolodny (b, –) has an argument for the extreme position that the only norms of rationality are norms that apply to reasoning (or to mental processing, broadly construed). Kolodny starts by distinguishing state requirements from process requirements:

State requirements require that you be a certain way at a given time. Process requirements require you to do something over time, where “do” is understood broadly, so as to include forming and revising beliefs.

Process requirements include, as a central case, requirements that apply to reasoning. Kolodny then argues that the focus of a theory of rationality should be on process requirements:5

. . . I aimed to account for the thought that at least some requirements of rationality are normative or deontic: that they can function as advice or guide one’s deliberation. Process requirements can be normative in this sense, since they tell you to do something. But state requirements cannot be normative in this sense, since they do not tell you to do anything. At most, state requirements might be evaluative requirements: that is, necessary conditions for qualifying for a certain kind of appraisal.

5 In a similar vein, Korsgaard (, ) writes, “On my view, rational requirements do not govern combinations of our attitudes. They govern thinking, the activity of thinking; and that means that they govern someone who is actively trying to determine what she has reason to believe or do. And thinking has a certain temporal direction. To be rational is not just to have a set of attitudes that happen to conform to a rational requirement. It is to follow to [sic] a rational requirement, to take it as an instruction.”

First of all, I am not sure why it is supposed to be so bad if requirements of rationality are only evaluative requirements in Kolodny’s sense. Indeed, Kolodny never says much about what the evaluative vs. normative distinction even is (other than the link with advice and guidance), much less about why we should want requirements that are normative rather than just evaluative. Perhaps it is that deontic and normative requirements are linked with heavy-duty moral-sounding notions like responsibility, praise- and blame-worthiness, and obligation.6 If that is what Kolodny is thinking, then I am happy to accept that requirements of rationality are merely evaluative. If, by contrast, Kolodny is thinking that a requirement is deontic or normative if it is categorical, in the sense of being binding on all agents regardless of what they happen to care about, then I would want requirements of rationality to be deontic or normative.

6 This gloss on what it is for a requirement to be deontic or normative is suggested but not endorsed by Pryor ().

But regardless of how Kolodny thinks we should understand the distinction between evaluative and deontic/normative requirements, I am not convinced by Kolodny’s argument that only process requirements can be normative. He thinks that a requirement must provide advice or guidance in order to be normative. But why can’t a requirement provide guidance of some sort without being a requirement that you do something? I can give you guidance by telling you to be in a certain state, even if I don’t tell you what steps you might take in order to reach that state. If I tell you to be at home this evening, I am giving you guidance, but I am just telling you to be in a certain state. I am not telling you to perform any particular sort of action; I’m not telling you to cycle home, or to call your friends to tell them you can’t go out. So I can give you guidance—even useful
guidance—by telling you to be in a certain state, even if I don’t tell you to perform a certain action. By the same token, if I tell you to have credences which match your expectation of the objective chances (i.e. if I tell you to obey the Principal Principle), I have given you guidance, even though I am not telling you exactly how to bring your credences into line with this requirement. So if Kolodny just thinks that a requirement must provide some guidance in order to count as normative or deontic, then he has not given a convincing reason why requirements must be process requirements—requirements of how to reason—in order to be normative or deontic.7

7 Kolodny’s argument is reminiscent of Alston’s criticism of what he calls the deontological conception of epistemic justification (Alston ()), on which the central normative concepts governing belief are deontic ones like ought, permission, obligation, and the like. Alston thinks that it is a mistake to talk about whether you ought or ought not believe some proposition, for this is to apply deontic concepts to beliefs. He himself takes this to mean that we just shouldn’t use deontic language in theorizing about the rationality of beliefs, but one could alternatively conclude (with Kolodny) that requirements of rationality should govern voluntary activities like reasoning instead of beliefs. Alston holds that deontic concepts should not be applied to beliefs, since (i) beliefs are not under your direct voluntary control, and (ii) by the principle that “ought implies can,” ought and related concepts apply only to things under your direct voluntary control. But Alston does not argue for the claim that the “can” in “ought implies can” should be interpreted as “is under your direct voluntary control” as opposed to something weaker like “is physically and psychologically possible, holding fixed a certain (possibly contextually determined) set of background facts.” And it can be physically and psychologically possible for you to have some given belief, even if coming to have this belief is not under your direct voluntary control. Until we have some argument against this weaker interpretation of the “can” in “ought implies can,” Alston has not given us reason to refrain from applying ought and related concepts to beliefs, rather than just processes like reasoning that are under direct voluntary control.

But perhaps state requirements for beliefs and preferences provide insufficient guidance, and hence must be at least supplemented with norms for reasoning, because it is necessary for agents to reason in order to satisfy these other state requirements of rationality. Reasoning is necessary in order for agents to proportion their beliefs to the evidence, and so we must give norms for reasoning in addition to norms for belief. Now, to begin with, even if we need to reason in order to respond appropriately to evidence, it is not clear that reasoning is special in this regard. Some of us may also need to perform other sorts of actions—drinking coffee, debating with colleagues, and the like—in order for our beliefs to be responsive to the evidence. Reasoning may not even be the only mental action that is helpful for proportioning our beliefs to the evidence; meditating, brainstorming, and imagining are other mental actions that may be useful for similar reasons. But in any event this claim about reasoning being necessary in order to proportion your beliefs to the evidence (or to satisfy other state requirements on beliefs and preferences) strikes me as false. Many of our beliefs are not
doxastic processes and responsibility  the result of reasoning. Some of these beliefs are ones that arguably couldn’t be the result of reasoning, such as perceptual beliefs which are caused by perceptual experiences rather than by any sort of conscious inferences. But even many of our non-perceptual beliefs may not be the result of any sort of reasoning. Of course, there will be facts about the computational processing in our brains that underpins changes in our beliefs, and this processing can be evaluated for its reliability in leading to rational beliefs, but I doubt that this processing will look much like reasoning as ordinarily conceived by philosophers (though of course this is an empirical question). I suspect that only a small subset of our beliefs—scientific beliefs are a paradigm example—are the result of conscious reasoning as opposed to just some sort of subconscious mental processing. Moreover, there is no reason that I can see why all of our beliefs could not be directly and automatically responsive to the evidence in this way. There is no principled reason why our beliefs could not all be directly caused by our evidence without our having to engage in any sort of reasoning or otherwise conscious thinking. Insofar as we often have to reason in order to determine what our evidence supports, this is a contingent fact having to do with the limitations of our psychology. We can certainly imagine creatures intellectually superior to ourselves that would have no need for reasoning. These creatures, upon gaining new evidence, would automatically update their beliefs accordingly. Indeed, I think that such creatures represent the epistemic ideal, so that our need to engage in reasoning is an indication of our intellectual failings. Broome () concedes this point but nonetheless thinks that the theory of rationality must say a great deal about reasoning. He acknowledges that, “Very often your rational disposition works automatically, causing you to satisfy individual requirements without your doing anything about it” (). And he concedes both that ideally rational beings might have no need for reasoning at all, and that reasoning is not alone in being an action that helps fallible, non-ideal, beings like us come closer to satisfying rational requirements (): Some ideally rational creatures such as angels may have a rational disposition that works infallibly in this automatic manner. They find themselves automatically satisfying every rational requirement they are under. Even a mortal can improve the automatic operation of her rational disposition by cultivating it. Training is one way. You can train your memory, for instance, and then you will more often satisfy persistence requirements. By cultivating your rational disposition, you can make yourself more rational: you can bring yourself to satisfy more requirements of rationality in the future. But we mortals will never match up to angels. Some requirements are too hard for our automatic processes to cope with . . . But when automatic processes let us down, our mortal rational disposition equips us with a further, self-help mechanism. We have another way

of improving our score by our own efforts. We can do it through the mental activity of reasoning.

So why does Broome think that rationality is partly about rules for reasoning? The closest he comes to an argument that reasoning is of special interest in theorizing about rationality is the following (): Some philosophers who write on rationality seem to think they have finished their job when they have described requirements of rationality. But they have not. They would have done if they could rely on automatic processes to cause us to satisfy all the requirements we are under. But that is too hopeful. Sometimes we have to do some work for ourselves, in order to satisfy particular requirements.

It seems that Broome thinks that the theory of rationality must provide not only requirements that must be satisfied in order to qualify as (ideally) rational, but also instructions that mortal, non-ideal agents like us can follow in order to satisfy these strict requirements. But why think this? It is more natural to say that the theory of rationality gives us requirements that any being—no matter her contingent limitations owing to the details of the mechanism that underlies her mentality—must satisfy in order to count as perfectly rational. In this way, requirements of rationality are necessary. But how well you are able to satisfy these requirements and what steps would be effective in helping you to come closer to satisfying them are contingent matters which depend heavily on the exact ways in which you fall short of the ideal. Broome agrees that requirements of rationality are necessary and that this should keep us from wanting the requirements of rationality to which we are subject to vary depending on our contingent psychological limitations. He makes the assumption that “most requirements of rationality are necessary within what I called the domain of rationality. They apply to you at all worlds where you are a rational being. This means that, if a requirement would apply to you were you a superior sort of rational being such as an angel, it applies to you as a human being” (–). But then why think that the theory of rationality should include, in addition to requirements that take no account of our contingent psychological limitations, norms for reasoning, which is merely a tool for helping us to overcome our contingent psychological limitations? Broome’s emphasis on reasoning is in tension with his acknowledgment of the necessary status of requirements of rationality. In my view, we should say that the normative theory of rationality includes requirements of rationality which are necessary and hence do not vary depending on an agent’s contingent psychological limitations. And this means that it is not a desideratum on a theory of rationality that it issue norms for reasoning, since it is only because of our contingent psychological limitations that

we resort to reasoning in the first place in attempting to come closer to satisfying those requirements; we reason precisely because we fall short of ideal rationality. This is a hard-line view, that the theory of rationality needn’t say anything about reasoning. But even if you disagree with this extreme view, I do think that it is possible to say something about evaluating reasoning as good or bad in a way that is in the spirit of Time-Slice Rationality. The idea is that, as Broome suggests, the value of reasoning is instrumental; it is valuable only in virtue of and to the extent that it helps you come closer to satisfying the requirements of ideal rationality. This thought leads naturally to a way of evaluating reasoning as good or bad, rational or irrational. A pattern of reasoning undertaken by an agent is good (or rational) to the extent that it reliably helps (or can be expected to reliably help) that agent to come closer to satisfying the requirements of ideal rationality than she otherwise would.8 This means that what counts as a good (or rational) pattern of reasoning may be an agent-relative matter, in the sense that whether a pattern of reasoning that an agent undertakes is good may depend on contingent facts about that agent’s psychology. What is good (i.e. reliably effective) reasoning for me may be different from what is good reasoning for you, and may be different from what is good reasoning for more superhuman, deific creatures which lack some of our computational limitations. For instance, a creature with superhuman intelligence might be well-served by inferring complicated arithmetical truths directly from the Peano axioms, whereas it would not be a good idea for us to attempt to follow such reasoning. (When reasoning takes a premise-conclusion form, we might also evaluate reasoning as correct or incorrect according as the premises do or do not entail or evidentially support the conclusion.) On this picture, even though reasoning is something that takes place over time, our evaluations of patterns of reasoning as good or bad are grounded in how well they do in helping you to satisfy (or better satisfy) the purely synchronic requirements espoused by Time-Slice Rationality. Evaluations of (and perhaps norms for) reasoning piggyback on the synchronic, impersonal norms that are at the heart of the theory of rationality. In this way, we can account for our ordinary willingness to evaluate bits of reasoning as good or bad in a way that is very much in the spirit of Time-Slice Rationality.

8 Compare Macfarlane (ms, ): “Formal argumentation—the controlled drawing of consequences from a set of premises—is a tool. We engage in it (and train our students to engage in it) not for its own sake, but because we think it is useful for telling us what we ought to believe. We infer correctly when we infer in a way that is conducive to this goal.”


. Rational Evidence-Gathering

Even if we don’t need to bring in diachronic norms to say how you ought to reason about your evidence or what beliefs you ought to have in response to your evidence, perhaps they are needed to govern the evidence-gathering process itself. Rational agents who are engaged in inquiry will gather as much evidence as possible, and will do so in an unbiased way, not seeking only evidence that will support their favored views. Because evidence-gathering is an action that takes place over time, it is tempting to think that diachronic norms on evidence-gathering are required in order to ensure that rational agents act in this way.9 But this tempting thought is mistaken. For we already have the tools at our disposal to explain why rational agents will (ceteris paribus) prefer more evidence to less and will not seek evidence in a manner intended to yield support for their desired conclusions. Let us take these in turn, beginning with the claim that rational agents will prefer gaining as much evidence as possible, and then turning to the claim that they will seek evidence in an unbiased way. Whether you ought to seek new evidence depends on the expected benefits of doing so. And there is a well-known theorem (to be discussed shortly) which says that a rational agent will always prefer to acquire more evidence, unless that evidence is irrelevant to her concerns or comes with costs. This means that diachronic norms are not needed to ensure that rational agents will (ceteris paribus) seek to gather as much evidence as possible. As just noted, how much evidence an agent ought to try to acquire will depend on her practical concerns. It is no requirement of rationality that you seek evidence unrelated to the things you care about. After all, you are not rationally required to spend the rest of your days reading articles on wikipedia.10 Similarly, it is no requirement of rationality that you seek evidence, even relevant evidence, when the costs of gaining that evidence are excessive. Even though your health is important to you, you are not rationally required to go to the brink of bankruptcy getting tests to see if you have some minor ailment. How much evidence you are rationally required to seek depends on what you care about and how much that evidence costs, whether those costs be in terms of money, time, or cognitive effort, to give just a few examples.

9 Thus, Kornblith (, ) writes that “A theory of ideal reasoning needs to be supplemented by a theory of ideal evidence gathering in order to provide an adequate account of justification.” 10 See Hall and Johnson () for a defense of the extreme contrary view that you have an epistemic duty to seek more evidence relevant to any proposition about which you are uncertain. Feldman (, ) rejects their argument and agrees with my stance that whether one ought to seek more evidence about a given matter will depend on “what options one has, what one cares about, and other non-epistemic factors” and that “these diachronic questions [about whether and how to gather evidence] are moral or prudential questions rather than epistemic questions.”

Given that whether you ought to perform some evidence-gathering act depends on whether that act has highest expected utility, it is straightforward to prove that you always ought to decide to gain more evidence, provided that evidence is cost-free. This is the famous “Value of Knowledge” theorem of Good (). Suppose that you can either decide between the members of some set of acts now, or instead perform some experiment first and then decide between the members of the same set of acts. Let us assume that you are certain that you will be rational (i.e. will maximize expected utility) after having performed the experiment, and also that performing the experiment is cost-free. What Good proved is that the expected utility of performing the cost-free experiment and then choosing the member of the set of acts with highest expected utility relative to your post-experiment credences is always at least as great as the expected utility of choosing now among the members of that set of acts, and strictly greater unless the same member of that set of acts will have highest expected utility no matter the results of the experiment, in which case the information that might be gained from the experiment is irrelevant.11 In this way, agents will always be rationally required to seek more evidence before deciding, unless gaining that new evidence is irrelevant or comes with costs. I relegate a sketch of the proof (following Skyrms ()) to a footnote.12 There is thus no need for diachronic norms on evidence-gathering in order to ensure that rational agents will generally seek new evidence relevant to their concerns. This brings us to a related possible motivation for diachronic norms on evidence-gathering, namely that such norms are needed to prohibit biased evidence-gathering.

11 Good proved his theorem for a Savage-style decision theory, and it also holds for all the different formulations of causal decision theory. Notably, however, the theorem fails for Jeffrey-style evidential decision theory. It is easy to see why. Consider a variation of the Newcomb problem in which you can either choose between one-boxing and two-boxing now, or instead look in the opaque box first and then choose between one-boxing and two-boxing. Evidentialists will prefer to one-box now, since they know that if they looked in the opaque box before deciding, they would two-box (after peeking, two-boxing would have highest evidential expected utility), but two-boxing is strong evidence that the opaque box is empty. So peeking first and then two-boxing would be strong evidence that you’ll wind up with just the $, in the transparent box, the opaque one being empty. Buchak () also shows that Good’s theorem fails for her risk-sensitive decision theory. While Buchak, at least, takes this to be a positive feature of her view, I take the failure of Good’s theorem for Jeffrey’s and Buchak’s decision theories to be a strike against them.

12 Let the relevant set of acts, among which you can choose either now or later, be $\{A_1, \ldots, A_n\}$. Let $\{S_1, \ldots, S_k\}$ be the possible states of the world which determine the outcomes of the $A_i$. Let the experiment be cost-free, and let the possible results of the experiment be a partition $\{E_1, \ldots, E_m\}$. (This partitionality assumption is the same one that was appealed to in Chapter  to prove that Expert Deference followed from the axioms of the probability calculus, and so Good’s theorem will not hold in a Williamsonian framework. Thanks to Bernhard Salow for raising this issue.) Now, the expected utility of choosing now among the $A_i$ is just the maximum of their expected utilities, relative to your current credences $P$:

$$\max_j \sum_i P(S_i) \times U(A_j \wedge S_i) = \max_j \sum_k \sum_i P(E_k \mid S_i) \times P(S_i) \times U(A_j \wedge S_i) = \max_j \sum_k \sum_i P(E_k \wedge S_i) \times U(A_j \wedge S_i)$$

Meanwhile, the expected utility of performing the experiment and then choosing the $A_i$ with highest expected utility relative to your later credences is:

$$\sum_k P(E_k) \max_j \sum_i P(S_i \mid E_k) \times U(A_j \wedge S_i) = \sum_k P(E_k) \max_j \sum_i \frac{P(E_k \mid S_i) \times P(S_i)}{P(E_k)} \times U(A_j \wedge S_i) = \sum_k \max_j \sum_i P(E_k \mid S_i) \times P(S_i) \times U(A_j \wedge S_i) = \sum_k \max_j \sum_i P(E_k \wedge S_i) \times U(A_j \wedge S_i)$$

If we let $f(k, j)$ be $\sum_i P(E_k \mid S_i) \times P(S_i) \times U(A_j \wedge S_i)$, we see that the expected utilities of deciding now and of experimenting and then deciding are:

$$\text{EU of Deciding Now} = \max_j \sum_k f(k, j)$$
$$\text{EU of Experimenting then Deciding} = \sum_k \max_j f(k, j)$$

But for any function $f$, $\sum_k \max_j f(k, j) \geq \max_j \sum_k f(k, j)$, and $\sum_k \max_j f(k, j) > \max_j \sum_k f(k, j)$ provided that the $j$ which maximizes $f(k, j)$ is not the same for all $k$. What this means is that the expected utility of experimenting then deciding is always at least as great as the expected utility of deciding now. And the expected utility of experimenting then deciding is strictly greater than the expected utility of deciding now, unless the same act will have highest expected utility after the experiment, no matter what the results of the experiment (in which case the new evidence resulting from the experiment is irrelevant).
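To see the inequality in footnote 12 in action, here is a minimal numerical sketch of Good’s comparison; the two states, two acts, two experimental results, and all of the probabilities and utilities are invented purely for illustration:

```python
# Toy illustration of Good's Value of Knowledge theorem.
# Two states S_i, two acts A_j, two possible experimental results E_k;
# all numbers are invented for illustration.

P_S = [0.5, 0.5]                 # prior credences P(S_i)
P_E_given_S = [[0.8, 0.2],       # P(E_k | S_1): the experiment tracks the state
               [0.2, 0.8]]       # P(E_k | S_2)
U = [[10, 0],                    # U(A_1 & S_i): a risky act
     [4, 4]]                     # U(A_2 & S_i): a safe act

# f(k, j) = sum_i P(E_k | S_i) * P(S_i) * U(A_j & S_i), as in footnote 12
def f(k, j):
    return sum(P_E_given_S[i][k] * P_S[i] * U[j][i] for i in range(len(P_S)))

eu_decide_now = max(sum(f(k, j) for k in range(2)) for j in range(2))
eu_experiment_first = sum(max(f(k, j) for j in range(2)) for k in range(2))

print(eu_decide_now)        # max_j sum_k f(k,j) = 5.0 (take the risky act now)
print(eu_experiment_first)  # sum_k max_j f(k,j) = 6.0 > 5.0
```

Performing the free experiment is strictly better here because different acts maximize expected utility depending on which result comes in; if one and the same act were best no matter the result, the two quantities would coincide, just as the theorem says.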

Because Good’s theorem only applies when the evidence is cost-free, it does not apply in cases where you have to pay to get the evidence, where it is cognitively taxing to evaluate the evidence, or where gaining the evidence might bring psychological distress. It is this last kind of cost that you might think raises a special need for diachronic norms for evidence-gathering. For while it is certainly rational to decline to gather evidence when you have to pay for it or exert great cognitive effort to evaluate it, it does not likewise seem rational to avoid evidence because it might bring some sort of distress. Avoiding evidence you might not like, or seeking evidence you suspect you will like, does not seem like the sort of thing a rational inquirer should do. The worry, then, is this: If whether and how to seek evidence is determined on the basis of expected utility calculations, and you prefer believing some proposition to disbelieving it, won’t it be the case that sometimes you ought to engineer your evidence so as to get yourself to believe that proposition?13 Consider a concrete example. Pascal famously argued that believing in God has higher expected utility than disbelieving in God or remaining agnostic. Of course, if you cannot just form beliefs at will, you could not respond directly to these expected utility considerations. Pascal concedes the point but says that nonetheless you ought to do things that will likely lead to your coming to believe in God, such as attending church, spending time with believers, reading theological texts, and the like.

13 See Parfit () and Kelly () for brief discussion of seeking evidence in a manner intended to support your favored views.

But there is a worry that this sort of behavior—managing your evidence so as to reach a predetermined conclusion—is paradigmatically irrational. If such behavior is recommended by expected utility considerations, then so much for the thought that those considerations determine how you ought to seek evidence. But as Salow (ms) persuasively argues, it is actually impossible to engage in what he calls “intentionally biased inquiry,” provided that engaging in such inquiry does not involve forgetting or becoming irrational in the future (we will shortly address those possibilities).14 Let us say that your inquiry is biased toward H if your expectation of your future credence in H is higher than your current credence in H. Why focus on the mathematical expectation of your future credence in H? Why not just say that your inquiry is biased toward H if you are more confident than not that your future credence in H will be higher than your current credence in H? Because the latter definition would count your inquiry as biased toward H if you were very confident that you would gain weak evidence for H, but also had some credence that you would gain very strong evidence against H. But this sort of inquiry is not at all biased, in the sense of bias that we are interested in. Take some well-confirmed scientific theory and consider an experiment with the potential to disconfirm that theory. I am extremely confident that the experimental result will be just as the theory predicts, and that will be evidence in favor of the theory, albeit very weak evidence, since the theory is already well-confirmed. But I also have some small credence that the result of the experiment will conflict with the theory’s predictions, and if that happens, it will be very strong evidence against the theory. But of course, performing an experiment to see if the predictions of some well-confirmed theory are true does not involve some sort of biased inquiry. Salow’s insight, then, is that provided you don’t lose evidence or become irrational as a result of your inquiry, it is impossible to intentionally bias your evidence-gathering such that your expectation of your future credence in H is higher than your current credence in H. This is because your awareness of what you are doing affects the evidential import of the facts you encounter. Suppose you are a creationist but, to use Salow’s example, you want to reinforce your beliefs by reading lots of creationist texts (and avoiding any biology textbooks). You are well aware that creationist texts typically contain discussion of phenotypic traits that are (allegedly) difficult for evolutionary theory to explain, such as blood clotting, the eye, altruistic behavior, and the like.

14 See also Titelbaum ().

 doxastic processes and responsibility you had decided in advance to only read creationist texts, you already know that you will encounter facts about phenotypic traits are that supposedly difficult to explain on evolutionary grounds. If, when you encounter those facts, they are just as compelling as you had expected, then you don’t gain any further evidence for creationism. The force of those facts had already been priced-in to your beliefs, as it were. Of course, you also have some credence, in advance of reading the texts, that the facts you encounter will be even more compelling than you’d anticipated. If this happens, you will indeed gain evidence to further support your beliefs. But if you have some credence that you will encounter evidence that will rationally make you more confident in theism, you must also have some credence that you will encounter evidence that will rationally make you less confident in creationism. You must have some credence that the facts about puzzling phenotypic traits will be far less compelling than anticipated. Perhaps they will be facts such that, when you encounter them, you will quickly be able to come up with adaptationist explanations for why they would evolve. And, knowing that you had already decided to only read creationist texts, if you then encounter facts which are less compelling than you had anticipated, this will be significant evidence against creationism. As Salow emphasizes, the impossibility of intentionally biased inquiry is related to the truth of a principle like Expert Deference. The proof that your expected future credence in H must equal your current credence in H did not rely on any assumptions about which books you would read, or more generally about how you would go about acquiring evidence, but only on the assumption that you do not believe you might lose evidence or become irrational in the future. In this way, Expert Deference entails the impossibility of intentionally biased inquiry in cases where you do not think you might lose evidence or become irrational. Of course, if you were to cause yourself to forget or become irrational, it would be possible to engineer things so that your expectation of your future credence in H is higher than your current credence in H. You might buy a bunch of creationist books but also take an amnesia-inducing pill that will make you forget having decided to only buy the creationist books and not the biology textbooks as well. Then, when you encounter the facts about problematic phenotypic traits, you really do gain evidence for creationism. After all, having forgotten that you had decided to only buy creationist books, the fact that all your books contain evidence of traits that are tough for evolutionary theory to explain is a suprising fact that would be nicely explained by the hypothesis that creationism is true! Compare a case where you have a friend videotape herself tossing a coin a number of times, but you then pay her to erase the shots of all the times the coin landed tails,

Similarly, you can engineer your future self’s credences in a biased way if you cause yourself to be irrational in the future. You could attend creationist conferences with the expectation that the camaraderie of those gatherings will cause you to irrationally put undue weight on any evidence for creationism while dismissing any evidence against. But while engaging in biased inquiry based on forgetting or future irrationality involves putting your future self in a suboptimal epistemic position, it needn’t be irrational for you to do so. It can be rational to cause yourself to forget, for instance if there were procedures available that could erase painful memories.15 Similarly, it can be rational to cause your future self to become irrational. Schelling () gives a case where a burglar invades your home and takes your family hostage, threatening to kill them unless you give him access to your bank account. The only way to protect your family while also keeping your money is to take a pill which will cause you to act so erratically and irrationally that the burglar will see that you are completely unresponsive to threats and therefore leave without harming your family. On a natural spelling-out of the case, you clearly rationally ought to take the pill.

15 As Dan Greco reminded me, this is the theme of the film Eternal Sunshine of the Spotless Mind, though it focuses on unforeseen technical and emotional problems stemming from targeted memory erasure.

The foregoing discussion suggests that there is no need for diachronic norms on evidence gathering that specifically rule out intentionally biased inquiry. In most cases, where you don’t forget or become irrational, intentionally biased inquiry is impossible to begin with. But in other cases, where you engage in intentionally biased inquiry by causing yourself to forget or become irrational, it can be rational to do so, despite the fact that it puts your future self in a suboptimal epistemic position. I conclude that there are no cases in which intentionally biased inquiry is irrational which are not already ruled out on independent grounds. We started with the thought that agents don’t just respond to evidence; they also seek it out. But we do not need diachronic norms on evidence gathering to ensure that rational agents will (ceteris paribus) seek to gain as much relevant evidence as possible, and do so in an unbiased way. For Good’s theorem means that expected utility theory already entails that rational agents will prefer to gain relevant cost-free information, since doing so will help them to promote their ends.

And they will seek this information in an unbiased way simply because it is impossible to do otherwise except in marginal cases where you cause yourself to forget or become irrational (in which case biased evidence-seeking can be quite rational). So just as we don’t need diachronic norms to say how you ought to respond to evidence, we also don’t need diachronic norms to say how you ought to seek it out.

Rationality and the Subject’s Point of View

In Reasons and Persons, Parfit (, ) writes, “when we are considering both theoretical and practical rationality, the relation between a person now and himself at other times is relevantly similar to the relation between different people.” This book can be seen as an extended defense and elaboration of this guiding idea. Parfit’s idea is, I think, a natural outgrowth of the quick gloss on rationality with which I began this book. There, I said that being rational is a matter of believing, desiring, and behaving in ways that are sensible, given your perspective on the world. This is far from a reductive analysis of rationality, but it is substantive nonetheless, for it suggests that requirements of rationality should avoid reference to things that do not either constitute an agent’s perspective on the world or constitute ways of cashing out the notion of sensibility. First, it supports the idea that what you rationally ought to believe, desire, or do supervenes on your present mental states, for your perspective on the world is constituted by those mental states. This means that requirements of rationality should be synchronic. Second, it suggests that requirements of rationality should be impersonal, avoiding reference to the relation of personal identity over time. For facts about whether your present self is related to some past or future person (or rather time-slice) by the relation of personal identity over time do not help constitute your current perspective on the world. I bolstered this thought by appealing to particular cases in which the facts about the metaphysics of personal identity are vague, unclear, and controversial, and yet the facts about what it would be sensible for you to believe, desire, or do are quite clear. Time-Slice Rationality incorporates these ideas through two central theses. The first is Synchronicity, the claim that all requirements of rationality are synchronic. And the second is Impartiality, the claim that your beliefs about what attitudes you have at other times play the same role in determining how you ought to be now as your beliefs about what attitudes others have.

Time-Slice Rationality is, I think, a very natural view about rationality, but it is revisionary in rejecting particular sorts of principles that have been widely defended by epistemologists, philosophers of science, and others. Chief among them are update rules for beliefs like Conditionalization and first-personal deference principles like Reflection. Interestingly, parallel principles for preferences have received little attention, even though you might expect close connections between principles for beliefs and principles for preferences. I have sought to rectify this omission by considering, and rejecting, the most natural forms of diachronic and reflection principles for preferences. My picture also makes little room for norms of reasoning (an action that takes time), except in the sense that we can evaluate patterns of reasoning for how reliable or unreliable they are in helping one better satisfy the purely synchronic, impersonal requirements of rationality which are the centerpiece of my view. But while Time-Slice Rationality may be a revisionary view about the structure of requirements of rationality, I do not think that it is revisionary in terms of the verdicts it yields for particular cases. Quite the contrary, I think it yields by and large better verdicts than non-time-slice-centric theories of rationality, for instance in Two Roads to Shangri-La and in a variety of decision theory cases considered in Chapter . More generally, I have proposed replacement principles which are synchronic and impersonal and yet do much of the work that the old principles were meant to do. For instance, diachronic principles directly require your attitudes to evolve smoothly over time, but I showed that if we assume Uniqueness theses for beliefs and (admittedly more controversially) for preferences, we only need synchronic principles to get the result that in standard cases, if you are rational at each particular time, your attitudes will happen to evolve smoothly over time and change only in response to changes in evidence. In this way, my view captures the same data (or at least much of the same data), but does so in a simpler and cleaner fashion. It ensures stability over time in the attitudes of rational agents with only synchronic, impersonal principles, obviating the need for a theory with a mix of synchronic and diachronic principles. My theory also has the nice feature that its different parts are mutually supporting. Rather than restricting myself to doing epistemology, I have given a unified picture of rationality encompassing principles for beliefs, preferences, and actions in which the norms governing these different things are structurally parallel. It is important that norms for beliefs, preferences, and actions be unified, in part because there are connections between them. For instance, my time-slice-centric picture of options and norms for rational action, developed in Chapter , yields a rebuttal of a powerful argument for the sorts of non-time-slice-centric principles that I rejected in Chapters  and —the Diachronic Tragedy Argument (Chapter ).

Rebutting this argument also allows for a more moderate-seeming Uniqueness thesis for rational doxastic states, in which the uniquely rational doxastic state, given a body of evidence, may be an imprecise or “mushy” credal state rather than one represented by a precise probability function. For the main obstacle in the way of adopting imprecise credences as part of our epistemological tool-kit was Elga’s Diachronic Tragedy Argument against them, an argument which crucially relies on a non-time-slice-centric picture of the rationality of action. I think that the unity and simplicity of my view, with its parts mutually supporting, is a strong selling point in its favor. One central implication of Time-Slice Rationality is that it is no requirement of rationality that you have a perfect memory or maximally strong willpower (the ability to self-bind). Denying the irrationality of forgetting played a role in my argument against Conditionalization, and denying the irrationality of having less than optimal willpower was relevant to my rebuttal of the Diachronic Tragedy Argument. But this does not mean that memory and willpower are unimportant. Quite the contrary. My theory entails that you ought, ceteris paribus, to take steps to improve your memory and willpower. Memory and willpower are instrumentally beneficial. In many cases, having a better memory and more willpower would help you better achieve your goals, and so it will be instrumentally rational for you to attempt to improve these faculties. There is no shortage of discussion on how you might do this. The art of memory was a favorite topic in classical times, with mnemonic techniques being discussed by Aristotle, Cicero, and countless others. Perhaps the main technique was to picture a scene and associate the different things to be remembered with different objects in the scene. The Jesuit missionary Matteo Ricci famously sought favor with Ming Dynasty officials by promising to build a “memory palace” whose different parts could be associated with different items to be memorized; memorization was key to passing the civil service examinations (Spence ()). Contemporary neuroscience may also yield methods for improving memory and slowing its decline during aging. And if the extended mind hypothesis (Clark and Chalmers ()) is correct, the contents of notebooks and smartphones could count as part of your memory, so you may be able to improve your memory simply by investing in a new iPhone! The study of willpower and its improvement also has a long history, and contemporary social psychology has revealed useful ways to improve it and to mitigate the bad effects of its finitude. In Chapter , Section , I discussed the work of Roy Baumeister, who has shown that willpower is like a muscle in many respects. You can strengthen it through regular and moderate exercise. Performing small acts of willpower like trying to sit up straight can give you more willpower to use later on more important things. But, like a muscle, willpower can be exhausted.

Recognizing that your willpower is finite can help you deploy it judiciously and strategically and not attempt, say, to go on a strict diet and quit smoking at the same time. Willpower is even like a muscle in being sensitive to glucose levels, so a quick sugary drink can be an effective way to give your willpower a short-term boost (Baumeister ()). But because memory and willpower are just instrumentally beneficial, whether you ought to take the time and spend the money required to learn about techniques to improve them and then to actually use these techniques will depend on your particular needs and goals. It will depend on the expected utilities of these memory- and willpower-improving actions. Buying a new iPhone may count as a way to improve your memory, but this doesn’t mean that you are rationally required to do so, the claims of certain Apple enthusiasts notwithstanding! Moreover, while having a better memory or more willpower may generally be beneficial, this needn’t always be the case. A perfect memory, for instance, can also be a curse. Solomon Shereshevskii was a subject studied by the neuropsychologist Alexander Luria, and while he had a near-perfect memory, this ability came with trade-offs. Facial recognition was difficult, since the same face seen at different angles produced separate, unconnected memories for him, and he had difficulty keeping a steady job, as new memories distracted him from his work (Rose ()). Borges’s short story “Funes el Memorioso” tells the tale of a young man who, as a result of a fall from a horse, comes to have a perfect memory, but remembering each and every detail prevents him from forming generalizations and abstractions, which involve ignoring certain irrelevant details. It seems that while the ability to remember has clear benefits, so does the ability to forget. Rather than just saying that perfect memory and perfect willpower are requirements of rationality and leaving it at that, my view has it that memory and willpower are both tools, and requirements of rationality apply to the deployment of these tools just as they apply to others. Whether and how to use or improve them will depend on the same sorts of cost-benefit considerations that apply to all sorts of other actions. Finally, I have advocated adopting the perspective of seeing a person-over-time as akin to a group of people. The various members—time-slices of the same agent, in the one case, and different agents, in the other—stand in both cooperative and combative relationships to each other. A fascinating question, and one which I leave open, is to what extent we should apply the same perspective shift to people-at-times. To what extent should we see even a particular time-slice of an agent as fragmented, composed of different fragments that have different information, different interests, and both cooperate and compete with each other? There are many precedents for adopting this perspective, but I can do no more here than briefly mention a few. In cognitive science, the strategy of viewing minds as composed of semi-autonomous parts or systems is widespread.

Fodor’s hugely influential Modularity of Mind () advocated viewing the mind as composed of different modules—a language module, a visual perception module, etc.—which are informationally encapsulated (only having access to particular kinds of information), automatic and involuntary, rapid-fire, and have a fixed neural architecture. In a related vein, Minsky’s () Society of Mind hypothesis sees the mind as the outcome of the interactions of lots of different parts, which he calls “agents” but which are supposed to be themselves mindless. Each performs some very specific task and they have limited ability to communicate with each other. Research in behavioral economics, led by the work of Kahneman and Tversky, has led to thinking of the mind as divided into two systems: System 1, which is fast, automatic, and subconscious, and System 2, which is slow, effortful, and conscious. In philosophy, Lewis () and Stalnaker () have proposed using fragmentation to model agents with contradictory beliefs. Lewis found himself with an inconsistent triad of beliefs: that Nassau Street ran roughly east-west, that the nearby railroad ran roughly north-south, and that the two were roughly parallel. But rather than conceiving of the situation as one in which he has a single belief state which is contradictory, he proposed modeling it as one in which his system of beliefs was broken into fragments, where “Different fragments came into action in different situations, and the whole system of beliefs never manifested itself all at once” (). More recently, Greco () has proposed using fragmentation to model agents who seem to display failures of self-knowledge. Lastly, there are cases of nervous systems which display a striking lack of unity. In split brain cases, also known as brain bisections or commissurotomies, a patient’s corpus callosum is cut. The corpus callosum is the bundle of nerves connecting the two hemispheres of the brain. These operations have the purpose of ameliorating the severity and frequency of seizures in epilepsy sufferers. Split brain patients sometimes have experiences where, speaking loosely, the left and right hemispheres seem to be operating independently, having access to different bodies of information. For instance, in one type of experiment, a split-brain patient is shown an array such that the light from the letters of one word (“hat,” say) hits the left half of her retina (which is processed by her left hemisphere), while the letters of another word (“ball,” say) hit the right half of her retina (which is processed by her right hemisphere). Then, if she is asked to reach into a bucket containing a hat and a ball and to pick out the object denoted by the word that she saw, she will pick out the ball if asked to reach with her left hand (controlled by her right hemisphere, which “saw” the word “ball”) but will pick out the hat if asked to reach with her right hand (controlled by her left hemisphere, which “saw” the word “hat”). In a nutshell, then, split brain patients exhibit a striking separation between their hemispheres, in which the person as a whole seemingly cannot simultaneously access the information processed in each of the two hemispheres.

Even stranger cases can be found in other species. My favorite is the octopus. Godfrey-Smith () writes that the octopus is an even better example than Nagel’s bat (Nagel ()) of a creature whose subjective experience is scarcely even imaginable. The nervous systems of octopuses and other cephalopods are more distributed than ours. While they have a central brain, about two-thirds of an octopus’s neurons are located in its arms, which display semi-autonomous behavior. Godfrey-Smith quotes the cephalopod researchers Hanlon and Messenger () as describing the arms as “curiously divorced” from the brain in terms of their movement. To the extent that this is correct, does this mean that the octopus should be seen as something like an agglomeration of loosely-connected agents? It may be, then, that my move of seeing different time-slices of a temporally extended agent as akin to distinct agents, each individually subject to rational norms and interacting strategically with one another, is only the start. Perhaps we need to zoom in even closer and see even agents-at-times as akin to groups. This more radical shift in perspective would clash with standard models of rationality, which presuppose that agents-at-times have fairly unified minds. Expected utility theory, for instance, assumes that each agent has a single credence function and a single utility function, but this and other standard models will have to be modified when we encounter agents who deviate sharply from this idealized picture of mental unity. This presents a challenging and fascinating task for future research, and one where empirical evidence is likely to be highly relevant—the task of extending theories of rationality to agents with fragmented minds. We might call it the task of doing epistemology for cephalopods.

Bibliography

Ainslie, George. . Breakdown of Will. Cambridge: Cambridge University Press. Alchourrón, Carlos, Peter Gärdenfors, and David Makinson. . “On the Logic of Theory Change: Partial Meet Contraction and Revision Functions.” Journal of Symbolic Logic :–. Alston, William P. . “The Deontological Conception of Epistemic Justification.” Philosophical Perspectives :–. Arntzenius, Frank. . “Some Problems for Conditionalization and Reflection.” Journal of Philosophy :–. Arntzenius, Frank. . “No Regrets, Or: Edith Piaf Revamps Decision Theory.” Erkenntnis :–. Arntzenius, Frank, Adam Elga, and John Hawthorne. . “Bayesianism, Infinite Decisions, and Binding.” Mind :–. Aumann, Robert. . “Agreeing to Disagree.” Annals of Statistics :–. Baumeister, Roy, Ellen Bratlavsky, Mark Muraven, and Dianne Tice. . “Ego Depletion: Is the Active Self a Limited Resource?” Journal of Personality and Social Psychology :–. Baumeister, Roy. . “Ego Depletion and Self-Control Failure: An Energy Model of the Self’s Executive Function.” Self and Identity :–. Baumeister, Roy. . Willpower: Rediscovering the Greatest Human Strength. Harmondsworth: Penguin. Berker, Selim. . “Luminosity Regained.” Philosophers’ Imprint :–. Berker, Selim. . “Epistemic Teleology and the Separateness of Propositions.” Philosophical Review :–. Bermúdez, José Luis. . Decision Theory and Rationality. New York: Oxford University Press. BonJour, Laurence. . The Structure of Empirical Knowledge. Cambridge, MA: Harvard University Press. Bratman, Michael E. . Intentions, Plans, and Practical Reason. Stanford, CA: CSLI. Bratman, Michael E. . Reflection, Planning, and Temporally Extended Agency. In Structures of Agency. Oxford: Oxford University Press. Bratman, Michael. . “Agency, Time, and Sociality.” Presidential address at the th Annual Pacific Division Meeting of the American Philosophical Association. . Bratman, Michael E. . “Temptation and the Agent’s Standpoint.” Inquiry :–. Briggs, Rachael. . “Distorted Reflection.” Philosophical Review :–. Broome, John. . Weighing Goods. Hoboken, NJ: Wiley-Blackwell. Broome, John. . “A Cause of Preference is not an Object of Preference.” Social Choice and Welfare :–.

 bibliography Broome, John. . “Wide or Narrow Scope?” Mind :–. Broome, John. . Rationality through Reasoning. Oxford: Blackwell. Buchak, Lara. . “Instrumental Rationality, Epistemic Rationality, and EvidenceGathering.” Philosophical Perspectives :–. Buchak, Lara. . Risk and Rationality. Oxford: Oxford University Press. Burge, Tyler. . “Individualism and the Mental.” Midwest Studies in Philosophy : –. Burge, Tyler. . “Content Preservation.” Philosophical Review :–. Byrne, A., and A. Hájek. . “David Hume, David Lewis, and Decision Theory.” Mind :–. Carnap, Rudolf. . The Logical Foundations of Probability. Chicago: University of Chicago Press. Chisholm, Roderick M. . The Truths of Reason. In Paul K. Moser (ed.), A Priori Knowledge. Oxford: Oxford University Press. Christensen, David. . “Clever Bookies and Coherent Beliefs.” Philosophical Review :–. Christensen, David. . “Dutch-Book Arguments Depragmatized: Epistemic Consistency for Partial Believers.” Journal of Philosophy :–. Christensen, David. . “Diachronic Coherence Versus Epistemic Impartiality.” Philosophical Review :–. Christensen, David. . “Does Murphy’s Law Apply in Epistemology? Self-Doubt and Rational Ideals.” Oxford Studies in Epistemology :–. Clark, Andy, and David J. Chalmers. . “The Extended Mind.” Analysis :–. Cohen, Stewart. . “Justification and Truth.” Philosophical Studies :–. Davidson, Donald. . “Radical Interpretation.” Dialectica :–. Davidson, Donald. . “Knowing One’s Own Mind.” Proceedings and Addresses of the American Philosophical Association :–. Davidson, Donald, J. C. C. McKinsey, and Patrick Suppes. . “Outlines of a Formal Theory of Value, I.” Philosophy of Science :–. Dennett, Daniel C. . The Intentional Stance. Cambridge, MA: MIT Press. Dogramaci, Sinan. . “Reverse Engineering Epistemic Evaluations.” Philosophy and Phenomenological Research :–. Dougherty, Tom. . “On Whether to Prefer Pain to Pass.” Ethics :–. Dougherty, Tom. . “A Deluxe Money Pump.” Thought :–. Dreier, James. . “Rational Preference: Decision Theory as a Theory of Practical Rationality.” Theory and Decision :–. Elga, Adam. . “Self-Locating Belief and the Sleeping Beauty Problem.” Analysis : –. Elga, Adam. . “Reflection and Disagreement.” Noûs :–. Elga, Adam. . “Subjective Probabilities Should be Sharp.” Philosopher’s Imprint . Feldman, Richard. . “The Ethics of Belief.” Philosophy and Phenomenological Research :–. Feldman, Richard, and Earl Conee. . “Evidentialism.” Philosophical Studies :–. Fisher, Justin C. . “Why Nothing Mental is Just in the Head.” Noûs :–. Fodor, Jerry A. . The Language of Thought. Cambridge, MA: Harvard University Press. Fodor, Jerry A. . The Modularity of Mind. Cambridge, MA: MIT Press.

bibliography  Frankfurt, Harry. . “Alternate Possibilities and Moral Responsibility.” Journal of Philosophy :–. Fricker, Elizabeth. . “Second-Hand Knowledge.” Philosophy and Phenomenological Research :–. Fricker, Elizabeth. . Is Knowing a Mental State? The Case Against. In Greenough, Patrick, and Duncan Pritchard (eds). Williamson on Knowledge. Oxford: Oxford University Press. Gaifman, Haim. . A Theory of Higher Order Probabilities. In Skyrms, Brian, and William Harper (eds). Causation, Chance, and Credence. Dordrecht: Kluwer. Gärdenfors, Peter. . “Imaging and Conditionalization.” Journal of Philosophy : –. Gauthier, David. . Practical Reasoning. Oxford: Clarendon Press. Gauthier, David. . Morals by Agreement. New York: Oxford University Press. Gibbard, Allan. . Thinking How to Live. Cambridge, MA: Harvard University Press. Godfrey-Smith, Peter. . “On Being an Octopus.” Boston Review May/June:–. Goldman, Alvin. . “What is Justified Belief?” In Epistemology. An Anthology, –. Oxford: Blackwell. Goldman, Alvin. . “Internalism Exposed.” Journal of Philosophy :–. Goldman, Alvin. . The Unity of the Epistemic Virtues. In Fairweather, Abrol, and Linda Zagzebski (eds). Virtue Epistemology: Essays in Epistemic Virtue and Responsibility. Oxford: Oxford University Press. Goldman, Alvin. . Epistemic Relativism and Reasonable Disagreement. In Feldman, Richard, and Ted Warfield (eds). Disagreement. Oxford: Oxford University Press. Good, I. J. . “On the Principle of Total Evidence.” British Journal for the Philosophy of Science :–. Goodman, Nelson. . Fact, Fiction, and Forecast. Cambridge, MA: Harvard University Press. Greaves, Hilary. forthcoming. Ethics, Climate Change, and the Role of Discounting. WIREs Climate Change. Greaves, Hilary, and David Wallace. . “Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility.” Mind :–. Greco, Daniel. . “Iteration and Fragmentation.” Philosophy and Phenomenological Research . Greco, Daniel. forthcoming. “Could KK be OK?” Journal of Philosophy. Hájek, Alan. . “What Conditional Probability Could Not Be.” Synthese :–. Hájek, A., and Philip Pettit. . “Desire Beyond Belief.” Australasian Journal of Philosophy :–. Hall, Ned. . “Correcting the Guide to Objective Chance.” Mind :–. Hall, Ned. . “Two Mistakes About Credence and Chance.” Australasian Journal of Philosophy :–. Hall, Richard J., and Charles R. Johnson. . “The Epistemic Duty to Seek More Evidence.” American Philosophical Quarterly :–. Halpern, Joseph. . “Lexicographic Probability, Conditional Probability, and Nonstandard Probability.” Games and Economic Behavior :–. Handfield, Toby. . A Philosophical Guide to Chance. Cambridge: Cambridge University Press.

 bibliography Hanlon, Roger, and John Messenger. . Cephalopod Behaviour. Cambridge: Cambridge University Press. Hare, Caspar. . “A Puzzle About Other-Directed Time-Bias.” Australasian Journal of Philosophy :–. Hare, Caspar. . “Take the Sugar.” Analysis :–. Hare, Caspar, and Brian Hedden. forthcoming. “Self-Reinforcing and Self-Frustrating Decisions.” Noûs. Harman, Elizabeth. . “ ‘I’ll Be Glad I Did It’: Reasoning and the Significance of Future Desires.” Ethics :–. Harman, Gilbert H. . Change in View. Cambridge, MA: MIT Press. Harman, Gilbert H. . “The Inference to the Best Explanation.” Philosophical Review :–. Harsanyi, John. . “Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility.” Journal of Political Economy :–. Harsanyi, John. . Rational Behavior and Bargaining Equilibrium in Games and Social Situations. Cambridge: Cambridge University Press. Hausman, Daniel M. . “The Impossibility of Interpersonal Utility Comparisons.” Mind :–. Hedden, Brian. . “Incoherence Without Exploitability.” Noûs :–. Hinchman, Edward. . “Trust and Diachronic Agency.” Noûs :–. Holton, Richard. . “Intention and Weakness of Will.” Journal of Philosophy :–. Holton, Richard. . “Rational Resolve.” Philosophical Review :–. Holton, Richard. . Intention as a Model for Belief. In Vargas, Manuel, and Gideon Yaffe (eds). . Rational and Social Agency: The Philosophy of Michael Bratman. Oxford: Oxford University Press. Hume, David. . A Treatise of Human Nature. Oxford Philosophical Texts. Oxford: Oxford University Press. Jackson, Frank, and Robert Pargetter. . “Oughts, Options, and Actualism.” Philosophical Review :–. Jeffrey, Richard. . “A Note on the Kinematics of Preference.” Erkenntnis :–. Jeffrey, Richard. . The Logic of Decision. Chicago: University of Chicago Press. Jeffrey, Richard. . Preference among Preferences. In Probability and the Art of Judgment. Cambridge: Cambridge University Press. Joyce, James. . “A Nonpragmatic Vindication of Probabilism.” Philosophy of Science :–. Kagan, Shelly. . “Do I Make a Difference?” Philosophy and Public Affairs :–. Kavka, Gregory. . “The Toxin Puzzle.” Analysis :–. Kelly, Thomas. . “The Rationality of Belief and Other Propositional Attitudes.” Philosophical Studies :–. Kelly, Thomas. . The Epistemic Significance of Disagreement. In Gendler, Tamar S., and John Hawthorne (eds). Oxford Studies in Epistemology, Volume , –. Oxford: Oxford University Press. Kelly, Thomas. forthcoming. Historical vs. Current Time-Slice Theories of Epistemic Justification. In Kornblith, Hilary, and Brian McLaughlin (eds). Forthcoming. Goldman and his Critics. Hoboken, NJ: Wiley-Blackwell.

Kelly, Thomas. How to Be an Epistemic Permissivist. In Steup, Matthias, John Turri, and Ernest Sosa (eds), Contemporary Debates in Epistemology, 2nd Edition. Hoboken, NJ: Wiley-Blackwell.
Kemeny, John. “Fair Bets and Inductive Probabilities.” Journal of Symbolic Logic.
Kennedy, Ralph, and Charles Chihara. “The Dutch Book Argument: Its Logical Flaws, Its Subjective Sources.” Philosophical Studies.
Kolodny, Niko (a). “How Does Coherence Matter?” Proceedings of the Aristotelian Society.
Kolodny, Niko (b). “State or Process Requirements?” Mind.
Kolodny, Niko. “Why Be Disposed to Be Coherent?” Ethics.
Kornblith, Hilary. “Justified Belief and Epistemically Responsible Action.” Philosophical Review.
Korsgaard, Christine. The Constitution of Agency. New York: Oxford University Press.
Korsgaard, Christine. “The Activity of Reason.” Proceedings and Addresses of the American Philosophical Association.
Kotzen, M. “In Defence of Objective Bayesianism, by Jon Williamson.” Mind.
Kratzer, Angelika. “What ‘Must’ and ‘Can’ Must and Can Mean.” Linguistics and Philosophy.
Kripke, Saul A. Naming and Necessity. Cambridge, MA: Harvard University Press.
Lam, Barry. The Dynamic Foundations of Epistemic Rationality. Ph.D. thesis, Princeton University.
Lehman, R. Sherman. “On Confirmation and Rational Betting.” Journal of Symbolic Logic.
Lewis, David. “How to Define Theoretical Terms.” Journal of Philosophy.
Lewis, David. “Radical Interpretation.” Synthese.
Lewis, David. Survival and Identity. In Rorty, Amelie (ed.), The Identities of Persons. Oakland, CA: University of California Press.
Lewis, David. “Attitudes de Dicto and de Se.” Philosophical Review.
Lewis, David. A Subjectivist’s Guide to Objective Chance. In Jeffrey, Richard (ed.), Studies in Inductive Logic and Probability. Oakland, CA: University of California Press.
Lewis, David. “Causal Decision Theory.” Australasian Journal of Philosophy.
Lewis, David. “Logic for Equivocators.” Noûs.
Lewis, David. Philosophical Papers. New York: Oxford University Press.
Lewis, David. “Desire as Belief.” Mind.
Lewis, David. “Desire as Belief II.” Mind.
Lewis, David. Why Conditionalize? In Lewis, David, Papers in Metaphysics and Epistemology. Cambridge: Cambridge University Press.
List, Christian, and Philip Pettit. Group Agency: The Possibility, Design, and Status of Corporate Agents. Oxford: Oxford University Press.
Littlejohn, Clayton. “No Evidence Is False.” Acta Analytica.
Lockhart, Ted. Moral Uncertainty and Its Consequences. New York: Oxford University Press.
MacFarlane, John. Unpublished. In What Sense (If Any) Is Logic Normative for Thought?
Maher, Patrick. “Diachronic Rationality.” Philosophy of Science.
McClennen, Edward F. “Pragmatic Rationality and Rules.” Philosophy and Public Affairs.
McDowell, John. “Criteria, Defeasibility, and Knowledge.” Proceedings of the British Academy.
McGee, Vann. Learning the Impossible. In Eells, Ellery, and Brian Skyrms (eds), Probability and Conditionals: Belief Revision and Rational Decision. Cambridge: Cambridge University Press.
Meacham, Christopher J. G. “Sleeping Beauty and the Dynamics of De Se Beliefs.” Philosophical Studies.
Meacham, Christopher J. G. Unravelling the Tangled Web: Continuity, Internalism, Non-Uniqueness and Self-Locating Beliefs. In Gendler, Tamar S., and John Hawthorne (eds), Oxford Studies in Epistemology. Oxford: Oxford University Press.
Meacham, Christopher J. G. “Impermissive Bayesianism.” Erkenntnis.
Meacham, Christopher J. G., and Jonathan Weisberg. “Representation Theorems and the Foundations of Decision Theory.” Australasian Journal of Philosophy.
Mele, Alfred R. Effective Intentions: The Power of Conscious Will. New York: Oxford University Press.
Minsky, Marvin. The Society of Mind. New York: Simon and Schuster.
Moss, Sarah. “Updating as Communication.” Philosophy and Phenomenological Research.
Moss, Sarah. forthcoming. “Credal Dilemmas.” Noûs.
Muraven, Mark, Roy Baumeister, and Dianne Tice. “Longitudinal Improvement of Self-Regulation through Practice: Building Self-Control through Repeated Exercise.” Journal of Social Psychology.
Nagel, Thomas. The Possibility of Altruism. Oxford: Clarendon Press.
Nagel, Thomas. “What Is It Like to Be a Bat?” Philosophical Review.
Nozick, Robert. Anarchy, State, and Utopia. New York: Basic Books.
Parfit, Derek. Reasons and Persons. Oxford: Oxford University Press.
Parfit, Derek. On What Matters: Two-Volume Set. Oxford: Oxford University Press.
Portmore, Douglas W. Unpublished. What Are Our Options?
Portmore, Douglas W. “Perform Your Best Option.” Journal of Philosophy.
Price, Huw. “Defending Desire-as-Belief.” Mind.
Pryor, James. “The Skeptic and the Dogmatist.” Noûs.
Pryor, James. “Highlights of Recent Epistemology.” British Journal for the Philosophy of Science.
Putnam, Hilary. “The Meaning of ‘Meaning’.” Minnesota Studies in the Philosophy of Science.
Putnam, Hilary. Reason, Truth, and History. Cambridge: Cambridge University Press.
Quine, W. V. O. Reply to Morton White. In Hahn, Lewis (ed.), The Philosophy of W. V. Quine. Library of Living Philosophers. Chicago: Open Court Publishing Company.
Raiffa, Howard. Decision Analysis. Addison-Wesley.
Ramsey, Frank. Truth and Probability. In Ramsey, Frank, The Foundations of Mathematics and Other Logical Essays. New York: Routledge.
Rawls, John. A Theory of Justice. Cambridge, MA: Harvard University Press.
Resnik, Michael. Choices: An Introduction to Decision Theory. Minneapolis, MN: University of Minnesota Press.
Rose, Steven. The Making of Memory. London: Vintage.
Rosen, Gideon. “Nominalism, Naturalism, Epistemic Relativism.” Noûs.
Rosenkrantz, Roger. Foundations and Applications of Inductive Probability. Atascadero, CA: Ridgeview Press.
Ross, Jacob. “Sleeping Beauty, Countable Additivity, and Rational Dilemmas.” Philosophical Review.
Salow, Bernhard. ms. Intentionally Biased Inquiry and Access to One’s Evidence.
Savage, Leonard. The Foundations of Statistics. New York: John Wiley and Sons.
Schelling, Thomas. The Strategy of Conflict. Cambridge, MA: Harvard University Press.
Schoenfield, Miriam. “Permission to Believe: Why Permissivism Is True and What It Tells Us About Irrelevant Influences on Belief.” Noûs.
Sepielli, Andrew. “What to Do When You Don’t Know What to Do.” Oxford Studies in Metaethics.
Sepielli, Andrew. “Moral Uncertainty and the Principle of Equity Among Moral Theories.” Philosophy and Phenomenological Research.
Skyrms, Brian. “Dynamic Coherence and Probability Kinematics.” Philosophy of Science.
Skyrms, Brian. The Value of Knowledge. In Savage, C. Wade (ed.), Scientific Theories. Minnesota Studies in the Philosophy of Science. Minneapolis, MN: University of Minnesota Press.
Smithies, Declan. Rationality and the Subject’s Point of View. Ph.D. thesis, New York University.
Spence, Jonathan. The Memory Palace of Matteo Ricci. New York: Penguin.
Spohn, Wolfgang. The Laws of Belief: Ranking Theory and its Philosophical Applications. Oxford: Oxford University Press.
Stalnaker, Robert. Inquiry. Cambridge, MA: MIT Press.
Strotz, Robert. “Myopia and Inconsistency in Dynamic Utility Maximization.” Review of Economic Studies.
Talbott, William. Bayesian Epistemology. In Stanford Encyclopedia of Philosophy.
Taylor, Shelley, and Jonathon Brown. “Illusion and Well-Being: A Social-Psychological Perspective on Mental Health.” Psychological Bulletin.
Teller, Paul. “Conditionalization and Observation.” Synthese.
Titelbaum, Michael. “Tell Me You Love Me: Bootstrapping, Externalism, and No-Lose Epistemology.” Philosophical Studies.
Titelbaum, Michael. Quitting Certainties: A Bayesian Framework Modeling Degrees of Belief. Oxford: Oxford University Press.
van Fraassen, Bas. “Belief and the Will.” Journal of Philosophy.
van Fraassen, Bas. Laws and Symmetry. New York: Oxford University Press.
van Fraassen, Bas. Figures in a Probability Landscape. In Dunn, J. Michael, and Anil Gupta (eds), Truth or Consequences: Essays in Honor of Nuel Belnap. New York: Springer.
von Neumann, John, and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton: Princeton University Press.
Wedgwood, Ralph. “Internalism Explained.” Philosophy and Phenomenological Research.
Weirich, Paul. “A Decision Maker’s Options.” Philosophical Studies.
Weisberg, Jonathan. “Conditionalization, Reflection, and Self-Knowledge.” Philosophical Studies.
Weisberg, Jonathan (a). “Commutativity or Holism: A Dilemma for Conditionalizers.” British Journal for the Philosophy of Science.
Weisberg, Jonathan (b). “Locating IBE in the Bayesian Framework.” Synthese.
White, Roger. “Problems for Dogmatism.” Philosophical Studies.
White, Roger. “Epistemic Permissiveness.” Philosophical Perspectives.
White, Roger. Evidential Symmetry and Mushy Credence. In Gendler, Tamar S., and John Hawthorne (eds), Oxford Studies in Epistemology. Oxford: Oxford University Press.
Williams, Bernard. Problems of the Self. Cambridge: Cambridge University Press.
Williams, Bernard. Persons, Character, and Morality. In Williams, Bernard, Moral Luck. Cambridge: Cambridge University Press.
Williamson, Jon. In Defence of Objective Bayesianism. New York: Oxford University Press.
Williamson, Timothy. forthcoming. In Dutant, Julien, and Daniel Dohrn (eds), The New Evil Demon. Oxford: Oxford University Press.
Williamson, Timothy. Knowledge and its Limits. Oxford: Oxford University Press.
Index

a priori
action-guiding
Arntzenius, Frank
Bayesianism
behavioral economics
Bratman, Michael
Briggs, Rachael
Broome, John
causal decision theory
causation
Christensen, David
coherence
Combined Spectrum
conditional credence, see conditional probability
conditional probability
Conditionalization
  Jeffrey Conditionalization
  Synchronic Conditionalization
  Utility Conditionalization
consciousness
conservatism
deference
de se vs. de dicto
diachronic dutch book, see Diachronic Tragedy
Diachronic Tragedy
Discounting
divided minds
Elga, Adam
evidence
evidence-gathering
evidentialism
expected utility theory
experts and expertise
externalism
  about content
  about rationality
fission
forgetting
Goldman, Alvin
Greaves, Hilary
groups
Hájek, Alan
Humeanism
idealization
Impartiality
indeterminacy
inference
intentionally biased inquiry
intentions
internalism
interpersonal comparisons of utility
Jeffrey Conditionalization, see Conditionalization, Jeffrey
justification
  doxastic
  propositional
Kelly, Thomas
knowledge
Kolodny, Niko
Lewis, David
luminosity
Meacham, Christopher
memory
mental states
metaphysics
money pump, see Diachronic Tragedy
morality
Moss, Sarah
narrow-scope vs. wide-scope norms
options
Parfit, Derek
personal identity
  at a time
  over time
preferences
  intransitive
Prisoner’s Dilemma
probability calculus
psychological connectedness
psychological continuity
quasi-memory
R-relatedness
ratio analysis
reasoning
reflection principles
rigidity
Savage, Leonard
self-binding
semantics
Sleeping Beauty problem
supervenience
Synchronicity
teletransportation
time-bias
Utilitarianism
utility
Utility Conditionalization, see Conditionalization, Utility
van Fraassen, Bas
Weisberg, Jonathan
well-being
Williamson, Timothy
willpower